
The Impact of Fairness and Bias of Automated Decision-making Systems on Acceptance: Focusing on the Mediating Effect of Trustworthiness and the Moderated Mediating Effect of Transparency

  • Jun-hyuk Lee (Barun ICT Research Center, Yonsei University) ;
  • Chae-won Kim (Graduate School of Information, Yonsei University) ;
  • Hyeonjeong Kim (Barun ICT Research Center, Yonsei University)
  • Received : 2024.11.26
  • Reviewed : 2025.01.22
  • Published : 2025.02.28

Abstract


The rapid advancement of AI technologies and algorithms has significantly amplified expectations of, and interest in, automated decision-making systems (ADMS). However, growing concerns about ethical issues such as fairness, bias, and transparency have undermined user trust and acceptance. This study aims to develop strategies for enhancing user acceptance of ADMS by investigating how users' perceptions of fairness, bias, transparency, and trust shape acceptance. Based on the Model of AI Trust and the Intention to Use AI Systems (ATIAS) and Fairness Theory, it hypothesizes that fairness and bias influence trust, which in turn affects acceptance. It further hypothesizes that transparency moderates the relationship between trust and acceptance, and thereby moderates the indirect effects of fairness and bias on acceptance via trust. To test these hypotheses, an online survey was conducted with 500 South Korean adults experienced in using ADMS. The collected data were analyzed using the PLS-SEM and PLSPredict modules in SmartPLS, along with the PROCESS Macro in SPSS. The findings are as follows. First, fairness positively affected trust. Second, bias did not significantly affect trust. Third, trust positively affected acceptance. Fourth, trust mediated the relationship between fairness and acceptance. Fifth, the effect of trust on acceptance was moderated by transparency. Sixth, transparency moderated the indirect effect of fairness on acceptance via trust. These findings offer practical strategies for bolstering user trust in and acceptance of ADMS, along with directions for developing socially trustworthy ADMS technologies.
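The moderated-mediation structure the abstract describes (fairness → trust → acceptance, with transparency moderating the trust → acceptance path) can be sketched with two regressions on simulated data. This is a minimal illustration only: the variable names, effect sizes, and plain-OLS estimation below are assumptions for the sketch, not the study's PLS-SEM/PROCESS procedure or its estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(X, y):
    """Ordinary least squares with an intercept; returns [b0, b1, ...]."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Simulated data: the "true" effects are illustrative assumptions.
n = 5000
fairness = rng.normal(size=n)
transparency = rng.normal(size=n)
trust = 0.5 * fairness + rng.normal(scale=0.5, size=n)       # path a = 0.5
acceptance = (0.4 * trust                                    # path b = 0.4
              + 0.3 * transparency
              + 0.25 * trust * transparency                  # moderation = 0.25
              + rng.normal(scale=0.5, size=n))

# Mediator model: trust ~ fairness
a = ols(fairness.reshape(-1, 1), trust)[1]

# Outcome model: acceptance ~ trust + transparency + trust*transparency
_, b, _, b3 = ols(
    np.column_stack([trust, transparency, trust * transparency]), acceptance)

# Index of moderated mediation: change in the indirect effect a*b per unit
# of the moderator; conditional indirect effects at +/-1 SD of transparency.
index = a * b3
ind_hi, ind_lo = a * (b + b3), a * (b - b3)
print(f"index={index:.3f}, indirect(+1SD)={ind_hi:.3f}, indirect(-1SD)={ind_lo:.3f}")
```

Under these simulated effects the index should land near a·b3 = 0.5 × 0.25. In practice (as in PROCESS), significance of such indirect effects is judged with bootstrap confidence intervals rather than point estimates.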

Keywords

References

  1. 정인영, "The U.S. controversy over racial discrimination in auto insurance premiums", Korea Insurance Research Institute, May 2017, Available at https://www.kiri.or.kr/report/downloadFile.do?docId=2838.
  2. Angwin, J., J. Larson, S. Mattu, and L. Kirchner, "Machine bias", In Ethics of data and analytics, Auerbach Publications, 2022, pp. 254-264.
  3. Autor, D. H., "Why are there still so many jobs? The history and future of workplace automation", Journal of Economic Perspectives, Vol.29, No.3, 2015, pp. 3-30. https://doi.org/10.1257/jep.29.3.3
  4. Barocas, S., M. Hardt, and A. Narayanan, Fairness and machine learning, MIT Press, 2023, Available at https://fairmlbook.org.
  5. Barrett, I., Human vs. machine: An empirical study of HR professionals' perceptions of bias and fairness issues in AI-driven evaluations (Master's thesis), The American College of Greece, 2024.
  6. Bartlett, R., A. Morse, R. Stanton, and N. Wallace, "Consumer-lending discrimination in the FinTech era", Journal of Financial Economics, Vol.143, No.1, 2022, pp. 30-56. https://doi.org/10.1016/j.jfineco.2021.05.047
  7. Bellamy, R. K. E., K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, and Y. Zhang, "AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias", IBM Journal of Research and Development, Vol.63, No.4/5, 2019, pp. 4:1-4:15. https://doi.org/10.1147/JRD.2019.2942287
  8. Binns, R., "Fairness in machine learning: Lessons from political philosophy", In Conference on Fairness, Accountability and Transparency, PMLR, 2018, pp. 149-159.
  9. Chen, I., F. D. Johansson, and D. Sontag, "Why is my classifier discriminatory?", Advances in Neural Information Processing Systems, Vol.31, 2018.
  10. Chiu, C. M., H. Y. Lin, S. Y. Sun, and M. H. Hsu, "Understanding customers' loyalty intentions towards online shopping: An integration of technology acceptance model and fairness theory", Behaviour & Information Technology, Vol.28, No.4, 2009, pp. 347-360. https://doi.org/10.1080/01449290801892492
  11. Colquitt, J. A., "On the dimensionality of organizational justice: A construct validation of a measure", Journal of Applied Psychology, Vol.86, No.3, 2001, pp. 386-400. https://doi.org/10.1037/0021-9010.86.3.386
  12. Das, S., R. Stanton, and N. Wallace, "Algorithmic fairness", Annual Review of Financial Economics, Vol.15, No.1, 2023, pp. 565-593. https://doi.org/10.1146/annurev-financial-110921-125930
  13. Dastin, J., "Amazon scraps secret AI recruiting tool that showed bias against women", In Ethics of Data and Analytics, Auerbach Publications, 2018, pp. 296-299.
  14. Datta, A., S. Sen, and Y. Zick, "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems", 2016 IEEE Symposium on Security and Privacy (SP), 2016, pp. 598-617.
  15. Davis, F. D., "Perceived usefulness, perceived ease of use, and user acceptance of information technology", MIS Quarterly, Vol.13, No.3, 1989, pp. 319-340. https://doi.org/10.2307/249008
  16. Dhagarra, D., M. Goswami, and G. Kumar, "Impact of trust and privacy concerns on technology acceptance in healthcare: An Indian perspective", International Journal of Medical Informatics, Vol.141, 2020, 104164. https://doi.org/10.1016/j.ijmedinf.2020.104164
  17. Doshi-Velez, F. and B. Kim, "Towards a rigorous science of interpretable machine learning", arXiv: Machine Learning, 2017, Available at https://doi.org/10.48550/arXiv.1702.08608.
  18. Enqvist, L., "Rule-based versus AI-driven benefits allocation: GDPR and AIA legal implications and challenges for automation in public social security administration", Information & Communications Technology Law, 2024, pp. 1-25. https://doi.org/10.1080/13600834.2024.2349835
  19. Eubanks, V., Automating inequality: How high-tech tools profile, police, and punish the poor, St. Martin's Press, 2018.
  20. Faruqe, F., L. Medsker, and R. Watkins, "ATIAS: A model for understanding intentions to use AI technology", In K. Daimi, A. Alsadoon, and L. Coelho (eds.), Cutting Edge Applications of Computational Intelligence Tools and Techniques, Cham: Springer, 2023, pp. 85-112.
  21. Friedler, S. A., C. Scheidegger, and S. Venkatasubramanian, "The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making", Communications of the ACM, Vol.64, No.4, 2021, pp. 136-143. https://doi.org/10.1145/3433949
  22. Gardner, R. G., T. B. Harris, N. Li, B. L. Kirkman, and J. E. Mathieu, "Understanding "it depends" in organizational research: A theory-based taxonomy, review, and future research agenda concerning interactive and quadratic relationships", Organizational Research Methods, Vol.20, No.4, 2017, pp. 610-638. https://doi.org/10.1177/1094428117708856
  23. Gasser, U. and V. A. F. Almeida, "A layered model for AI governance", IEEE Internet Computing, Vol.21, No.6, 2017, pp. 58-62. https://doi.org/10.1109/MIC.2017.4180835
  24. Gilpin, L. H., D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, "Explaining explanations: An overview of interpretability of machine learning", 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 2018, pp. 80-89.
  25. Grimmelikhuijsen, S., G. Porumbescu, B. Hong, and T. Im, "The effect of transparency on trust in government: A cross‐national comparative experiment", Public Administration Review, Vol.73, No.4, 2013, pp. 575-586. https://doi.org/10.1111/puar.12047
  26. Hair, J. F., C. M. Ringle, and M. Sarstedt, "PLS-SEM: Indeed a silver bullet", Journal of Marketing Theory and Practice, Vol.19, No.2, 2011, pp. 139-152. https://doi.org/10.2753/MTP1069-6679190202
  27. Hair, J. F., J. J. Risher, M. Sarstedt, and C. M. Ringle, "When to use and how to report the results of PLS-SEM", European Business Review, Vol.31, No.1, 2019, pp. 2-24. https://doi.org/10.1108/EBR-11-2018-0203
  28. Hardt, M., E. Price, and N. Srebro, "Equality of opportunity in supervised learning", Advances in Neural Information Processing Systems, Vol.29, 2016, pp. 3315-3323.
  29. Hayes, A. F., Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd Ed.), The Guilford Press, 2022.
  30. Henseler, J., C. M. Ringle, and R. R. Sinkovics, "The use of partial least squares path modeling in international marketing", In New Challenges to International Marketing, Vol. 20, 2009, pp. 277-319. https://doi.org/10.1108/S1474-7979(2009)0000020014
  31. Howard, J., "Algorithms and the future of work", American Journal of Industrial Medicine, Vol.65, No.12, 2022, pp. 943-952. https://doi.org/10.1002/ajim.23429
  32. Hu, L. T. and P. M. Bentler, "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives", Structural Equation Modeling: A Multidisciplinary Journal, Vol.6, No.1, 1999, pp. 1-55. https://doi.org/10.1080/10705519909540118
  33. Hu, P., Y. Zeng, D. Wang, and H. Teng, "Too much light blinds: The transparency-resistance paradox in algorithmic management", Computers in Human Behavior, Vol.161, 2024, 108403. https://doi.org/10.1016/j.chb.2024.108403
  34. Lee, S., J. Oh, and W. K. Moon, "Adopting voice assistants in online shopping: Examining the role of social presence, performance risk, and machine heuristic", International Journal of Human–Computer Interaction, Vol.39, No.14, 2023, pp. 2978-2992. https://doi.org/10.1080/10447318.2022.2089813
  35. Leventhal, G. S., "What should be done with equity theory? New approaches to the study of fairness in social relationships", In K. J. Gergen, M. S. Greenberg, and R. H. Willis (eds.), Social Exchange: Advances in Theory and Research, New York: Plenum Press, 1980, pp. 27-55.
  36. Lipton, Z. C., "The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery", Queue, Vol.16, No.3, 2018, pp. 31-57. https://doi.org/10.1145/3236386.3241340
  37. Liu, Y. and X. Sun, "Towards more legitimate algorithms: A model of algorithmic ethical perception, legitimacy, and continuous usage intentions of e-commerce platforms", Computers in Human Behavior, Vol.150, 2024, 108006. https://doi.org/10.1016/j.chb.2023.108006
  38. Lukács, A. and S. Váradi, "GDPR-compliant AI-based automated decision-making in the world of work", Computer Law & Security Review, Vol.50, 2023, 105848. https://doi.org/10.1016/j.clsr.2023.105848
  39. Marangunić, N. and A. Granić, "Technology acceptance model: A literature review from 1986 to 2013", Universal Access in the Information Society, Vol.14, 2015, pp. 81-95. https://doi.org/10.1007/s10209-014-0348-1
  40. Mayer, R. C., J. H. Davis, and F. D. Schoorman, "An integrative model of organizational trust", Academy of Management Review, Vol.20, No.3, 1995, pp. 709-734. https://doi.org/10.2307/258792
  41. McAfee, A. and E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future, WW Norton & Company, 2017.
  42. McBride, M., L. Carter, and C. Ntuen, "Human-machine trust, bias and automated decision aid acceptance", Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol.57, No.1, 2013, pp. 349-353.
  43. Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, "A survey on bias and fairness in machine learning", ACM Computing Surveys, Vol.54, No.6, 2021, pp. 1-35. https://doi.org/10.1145/3457607
  44. Mitchell, M., S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, and T. Gebru, "Model cards for model reporting", Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, pp. 220-229.
  45. Morris, J. H. and D. J. Moberg, "Work organizations as contexts for trust and betrayal", In T. R. Sarbin, R. M. Carney, & C. Eoyang (Eds.), Citizen espionage: Studies in trust and betrayal, Praeger Publishers/Greenwood Publishing Group, 1994, pp. 163-187.
  46. O'Neil, C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing Group, 2016.
  47. Parasuraman, R. and V. Riley, "Humans and automation: Use, misuse, disuse, abuse", Human Factors, Vol.39, No.2, 1997, pp. 230-253. https://doi.org/10.1518/001872097778543886
  48. Raghavan, M., S. Barocas, J. Kleinberg, and K. Levy, "Mitigating bias in algorithmic hiring: Evaluating claims and practices", Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 469-481.
  49. Raji, I. D. and J. Buolamwini, "Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products", Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 429-435.
  50. Ribeiro, M.T., S. Singh, and C. Guestrin, "Why should I trust you?: Explaining the predictions of any classifier", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135-1144.
  51. Ringle, C. M., M. Sarstedt, N. Sinkovics, and R. R. Sinkovics, "A perspective on using partial least squares structural equation modelling in data articles", Data in Brief, Vol.48, 2023, 109074. https://doi.org/10.1016/j.dib.2023.109074
  52. Sarstedt, M., J. F. Hair, M. Pick, B. D. Liengaard, L. Radomir, and C. M. Ringle, "Progress in partial least squares structural equation modeling use in marketing research in the last decade", Psychology & Marketing, Vol.39, No.5, 2022, pp. 1035-1064. https://doi.org/10.1002/mar.21640
  53. Selbst, A. D. and J. Powles, "Meaningful information and the right to explanation", International Data Privacy Law, Vol.7, No.4, 2017, pp. 233-242. https://doi.org/10.1093/idpl/ipx022
  54. Shin, D., "The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI", International Journal of Human-Computer Studies, Vol.146, 2021, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
  55. Spiller, S. A., G. J. Fitzsimons, J. G. Lynch, and G. H. McClelland, "Spotlights, floodlights, and the magic number zero: Simple effects tests in moderated regression", Journal of Marketing Research, Vol.50, No.2, 2013, pp. 277-288. https://doi.org/10.1509/jmr.12.0420
  56. Sundar, S. S. and J. Kim, "Machine heuristic: When we trust computers more than humans with our personal information", Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp. 1-9.
  57. Sundar, S. S., "The MAIN model: A heuristic approach to understanding technology effects on credibility", In M. J. Metzger and A. J. Flanagin (eds.), Digital media, youth, and credibility, Cambridge, MA: The MIT Press, 2008, pp.72-100.
  58. Suresh, H. and J. V. Guttag, "A framework for understanding unintended consequences of machine learning", arXiv preprint arXiv:1901.10002, 2019.
  59. Topol, E., Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Hachette UK, 2019.
  60. Venkatesh, V., M. G. Morris, G. B. Davis, and F. D. Davis, "User acceptance of information technology: Toward a unified view", MIS Quarterly, Vol.27, No.3, 2003, pp. 425-478. https://doi.org/10.2307/30036540
  61. Vorm, E. and D. Combs, "Integrating transparency, trust, and acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM)", International Journal of Human-Computer Interaction, Vol.38, 2022, pp. 1828-1845, Available at https://doi.org/10.1080/10447318.2022.2070107.
  62. Zhang, Y., Y. Weng, and J. Lund, "Applications of explainable artificial intelligence in diagnosis and surgery", Diagnostics, Vol.12, No.2, 2022, 237. https://doi.org/10.3390/diagnostics12020237
  63. Zliobaite, I., "Measuring discrimination in algorithmic decision making", Data Mining and Knowledge Discovery, Vol. 31, No. 4, 2017, pp. 1060-1089. https://doi.org/10.1007/s10618-017-0506-1