Crowdsourcing Software Development: Task Assignment Using PDDL Artificial Intelligence Planning

  • Tunio, Muhammad Zahid (School of Software Engineering, Beijing University of Posts and Telecommunication) ;
  • Luo, Haiyong (Institute of Computer Technology, Chinese Academy of Science, Beijing and Beijing Key Laboratory of Mobile Computing and Pervasive Devices) ;
  • Wang, Cong (and Beijing Key Laboratory of Mobile Computing and Pervasive Devices) ;
  • Zhao, Fang (School of Software Engineering, Beijing University of Posts and Telecommunication) ;
  • Shao, Wenhua (School of Software Engineering, Beijing University of Posts and Telecommunication) ;
  • Pathan, Zulfiqar Hussain (School of Economics and Management, Beijing University of Posts and Telecommunications)
  • Received: 2017.04.07
  • Accepted: 2017.09.01
  • Published: 2018.02.28

Abstract

Crowdsourcing software development (CSD) is growing rapidly in the open-call, competitive format. In CSD, tasks are posted on a web-based platform where workers compete for them to win rewards. Task search and assignment are critical aspects of the CSD environment because tasks posted across different platforms number in the hundreds, and searching and evaluating thousands of submissions is a difficult, time-consuming process for both the developer and the platform. Several further problems affect CSD quality and the reliability of assigning tasks to CSD workers, including the required knowledge, large-scale participation, time complexity, and incentive motivation. To attract the right person to the right task, executable action plans can help both the CSD platform and the CSD worker find the best match. This study formalizes the task assignment method in artificial intelligence (AI) planning by modeling different situations in a competition-based CSD environment. The results suggest that task assignment faces many challenges whenever conditions are undefined, especially in a competitive environment. Our main focus is to evaluate AI automated planning as a means of matching CSD workers to tasks according to their personality type.
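The kind of PDDL formalization the abstract describes can be illustrated with a minimal domain sketch. The predicate and action names below are hypothetical placeholders (the paper's actual predicates are listed in Tables 2-6), shown only to indicate how a personality-aware assignment action might be encoded:

```pddl
;; Hypothetical sketch of a CSD task-assignment domain in PDDL.
;; Predicate/action names are illustrative, not the paper's exact model.
(define (domain csd-task-assignment)
  (:requirements :strips :typing)
  (:types task worker personality)
  (:predicates
    (posted ?t - task)                          ; task is open on the platform
    (registered ?w - worker)                    ; worker is registered
    (has-personality ?w - worker ?p - personality)  ; e.g., an MBTI type
    (suits ?p - personality ?t - task)          ; personality fits the task
    (assigned ?w - worker ?t - task))
  (:action assign-task
    :parameters (?w - worker ?t - task ?p - personality)
    :precondition (and (posted ?t)
                       (registered ?w)
                       (has-personality ?w ?p)
                       (suits ?p ?t))
    :effect (and (assigned ?w ?t)
                 (not (posted ?t)))))
```

Given a problem file listing posted tasks, registered workers, and their personality types, a classical planner would then search for a sequence of `assign-task` actions that achieves the platform's assignment goals.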

Keywords


Fig. 1. Mapping of the CSD elements.

Table 1. MBTI personality numbers and types


Table 2. Task predicates


Table 3. CSD developer and personality predicates


Table 4. Plan predicates


Table 5. Basic elements in CSD competitive environment


Table 6. Symbols representing the predicates


References

  1. K. Mao, Y. Yang, Q. Wang, Y. Jia, and M. Harman, "Developer recommendation for crowdsourced software development tasks," in Proceedings of the 9th IEEE Symposium on Service-Oriented System Engineering, San Francisco Bay, CA, 2015, pp. 347-356.
  2. K. Mao, L. Capra, M. Harman, and Y. Jia, "A survey of the use of crowdsourcing in software engineering," Journal of Systems and Software, vol. 126, pp. 57-84, 2017. https://doi.org/10.1016/j.jss.2016.09.015
  3. Y. Fu, H. Chen, and F. Song, "STWM: a solution to self-adaptive task-worker matching in software crowdsourcing," in Algorithms and Architectures for Parallel Processing. Cham: Springer International Publishing, 2015, pp. 383-398.
  4. L. B. Chilton, J. J. Horton, R. C. Miller, and S. Azenkot, "Task search in a human computation market," in Proceedings of the ACM SIGKDD Workshop on Human Computation, Washington, DC, 2010, pp. 1-9.
  5. E. Aldhahri, V. Shandilya, and S. Shiva, "Towards an effective crowdsourcing recommendation system: a survey of the state-of-the-art," in Proceedings of IEEE Symposium on Service-Oriented System Engineering, San Francisco Bay, CA, 2015, pp. 372-377.
  6. L. B. Chilton, G. Little, D. Edge, D. S. Weld, and J. A. Landay, "Cascade: crowdsourcing taxonomy creation," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 2013, pp. 1999-2008.
  7. T. D. LaToza and A. Van Der Hoek, "A vision of crowd development," in Proceedings of the 37th IEEE International Conference on Software Engineering, Florence, Italy, 2015, pp. 563-566.
  8. L. Machado, R. Prikladnicki, F. Meneguzzi, C. R. de Souza, and E. Carmel, "Task allocation for crowdsourcing using AI planning," in Proceedings of the 3rd International Workshop on Crowdsourcing in Software Engineering, Austin, TX, 2016, pp. 36-40.
  9. A. R. Gilal, J. Jaafar, M. Omar, S. Basri, and A. Waqas, "A rule-based model for software development team composition: team leader role with personality types and gender classification," Information and Software Technology, vol. 74, pp. 105-113, 2016. https://doi.org/10.1016/j.infsof.2016.02.007
  10. P. K. Murukannaiah, N. Ajmeri, and M. P. Singh, "Acquiring creative requirements from the crowd: understanding the influences of personality and creative potential in crowd RE," in Proceedings of IEEE 24th International Requirements Engineering Conference (RE), Beijing, China, 2016, pp. 176-185.
  11. I. Lykourentzou, D. J. Vergados, K. Papadaki, and Y. Naudet, "Guided crowdsourcing for collective work coordination in corporate environments," in Computational Collective Intelligence: Technologies and Applications. Heidelberg: Springer, 2013, pp. 90-99.
  12. K. Talamadupula, S. Kambhampati, Y. Hu, T. A. Nguyen, and H. H. Zhuo, "Herding the crowd: automated planning for crowdsourced planning," in Proceedings of the 1st AAAI Conference on Human Computation and Crowdsourcing, Palm Springs, CA, 2013.
  13. R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng, "Cheap and fast-but is it good?: evaluating non-expert annotations for natural language tasks," in Proceedings of the Conference on Empirical Methods in Natural Language Processing, Honolulu, HI, 2008, pp. 254-263.
  14. V. Ambati, S. Vogel, and J. G. Carbonell, "Towards task recommendation in micro-task markets," in Human Computation: Papers from the 2011 AAAI Workshop. Menlo Park, CA: AAAI Press, 2011, pp. 80-83.
  15. M. C. Yuen, I. King, and K. S. Leung, "Task matching in crowdsourcing," in Proceedings of 2011 International Conference on Internet of Things and 4th International Conference on Cyber, Physical and Social Computing (iThings/CPSCom), Dalian, China, 2011, pp. 409-412.
  16. L. F. Capretz and F. Ahmed, "Making sense of software development and personality types," IT Professional, vol. 12, no. 1, pp. 6-13, 2010. https://doi.org/10.1109/MITP.2010.33
  17. L. F. Capretz, D. Varona, and A. Raza, "Influence of personality types in software tasks choices," Computers in Human Behavior, vol. 52, pp. 373-378, 2015. https://doi.org/10.1016/j.chb.2015.05.050
  18. D. Geiger and M. Schader, "Personalized task recommendation in crowdsourcing information systems: current state of the art," Decision Support Systems, vol. 65, pp. 3-16, 2014. https://doi.org/10.1016/j.dss.2014.05.007
  19. S. Cruz, F. Q. da Silva, and L. F. Capretz, "Forty years of research on personality in software engineering: a mapping study," Computers in Human Behavior, vol. 46, pp. 94-113, 2015. https://doi.org/10.1016/j.chb.2014.12.008
  20. R. Valencia-Garcia, F. Garcia-Sanchez, D. Castellanos-Nieves, J. T. Fernandez-Breis, and A. Toval, "Exploitation of social semantic technology for software development team configuration," IET Software, vol. 4, no. 6, pp. 373-385, 2010. https://doi.org/10.1049/iet-sen.2010.0043
  21. N. R. Mead, "Software engineering education: How far we've come and how far we have to go," Journal of Systems and Software, vol. 82, no. 4, pp. 571-575, 2009. https://doi.org/10.1016/j.jss.2008.12.038