References
- 정인영, "Controversy over racial discrimination in U.S. auto insurance premiums", Korea Insurance Research Institute (KIRI), May 2017, Available at https://www.kiri.or.kr/report/downloadFile.do?docId=2838.
- Angwin, J., J. Larson, S. Mattu, and L. Kirchner, "Machine bias", In Ethics of Data and Analytics, Auerbach Publications, 2022, pp. 254-264.
- Autor, D. H., "Why are there still so many jobs? The history and future of workplace automation", Journal of Economic Perspectives, Vol.29, No.3, 2015, pp. 3-30. https://doi.org/10.1257/jep.29.3.3
- Barocas, S., M. Hardt, and A. Narayanan, Fairness and machine learning, MIT Press, 2023, Available at https://fairmlbook.org.
- Barrett, I., Human vs. machine: An empirical study of HR professionals' perceptions of bias and fairness issues in AI-driven evaluations (Master's thesis), The American College of Greece, 2024.
- Bartlett, R., A. Morse, R. Stanton, and N. Wallace, "Consumer-lending discrimination in the FinTech era", Journal of Financial Economics, Vol.143, No.1, 2022, pp. 30-56. https://doi.org/10.1016/j.jfineco.2021.05.047
- Bellamy, R. K. E., K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, and Y. Zhang, "AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias", IBM Journal of Research and Development, Vol.63, No.4/5, 2019, pp. 4:1-4:15. https://doi.org/10.1147/JRD.2019.2942287
- Binns, R., "Fairness in machine learning: Lessons from political philosophy", In Conference on Fairness, Accountability and Transparency, PMLR, 2018, pp. 149-159.
- Chen, I., F. D. Johansson, and D. Sontag, "Why is my classifier discriminatory?", Advances in Neural Information Processing Systems, Vol.31, 2018.
- Chiu, C. M., H. Y. Lin, S. Y. Sun, and M. H. Hsu, "Understanding customers' loyalty intentions towards online shopping: An integration of technology acceptance model and fairness theory", Behaviour & Information Technology, Vol.28, No.4, 2009, pp. 347-360. https://doi.org/10.1080/01449290801892492
- Colquitt, J. A., "On the dimensionality of organizational justice: A construct validation of a measure", Journal of Applied Psychology, Vol.86, No.3, 2001, pp. 386-400. https://doi.org/10.1037/0021-9010.86.3.386
- Das, S., R. Stanton, and N. Wallace, "Algorithmic fairness", Annual Review of Financial Economics, Vol.15, No.1, 2023, pp. 565-593. https://doi.org/10.1146/annurev-financial-110921-125930
- Dastin, J., "Amazon scraps secret AI recruiting tool that showed bias against women", In Ethics of Data and Analytics, Auerbach Publications, 2018, pp. 296-299.
- Datta, A., S. Sen, and Y. Zick, "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems", 2017 IEEE Symposium on Security and Privacy (SP), 2017, pp. 598-617.
- Davis, F. D., "Perceived usefulness, perceived ease of use, and user acceptance of information technology", MIS Quarterly, Vol.13, No.3, 1989, pp. 319-340. https://doi.org/10.2307/249008
- Dhagarra, D., M. Goswami, and G. Kumar, "Impact of trust and privacy concerns on technology acceptance in healthcare: An Indian perspective", International Journal of Medical Informatics, Vol.141, 2020, 104164. https://doi.org/10.1016/j.ijmedinf.2020.104164
- Doshi-Velez, F. and B. Kim, "Towards a rigorous science of interpretable machine learning", arXiv preprint arXiv:1702.08608, 2017. https://doi.org/10.48550/arXiv.1702.08608
- Enqvist, L., "Rule-based versus AI-driven benefits allocation: GDPR and AIA legal implications and challenges for automation in public social security administration", Information & Communications Technology Law, 2024, pp. 1-25. https://doi.org/10.1080/13600834.2024.2349835
- Eubanks, V., Automating inequality: How high-tech tools profile, police, and punish the poor, St. Martin's Press, 2018.
- Faruqe, F., L. Medsker, and R. Watkins, "ATIAS: A model for understanding intentions to use AI technology", In K. Daimi, A. Alsadoon, and L. Coelho (eds.), Cutting Edge Applications of Computational Intelligence Tools and Techniques, Cham: Springer, 2023, pp. 85-112.
- Friedler, S. A., C. Scheidegger, and S. Venkatasubramanian, "The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making", Communications of the ACM, Vol.64, No.4, 2021, pp. 136-143. https://doi.org/10.1145/3433949
- Gardner, R. G., T. B. Harris, N. Li, B. L. Kirkman, and J. E. Mathieu, "Understanding "it depends" in organizational research: A theory-based taxonomy, review, and future research agenda concerning interactive and quadratic relationships", Organizational Research Methods, Vol.20, No.4, 2017, pp. 610-638. https://doi.org/10.1177/1094428117708856
- Gasser, U. and V. A. F. Almeida, "A layered model for AI governance", IEEE Internet Computing, Vol.21, No.6, 2017, pp. 58-62. https://doi.org/10.1109/MIC.2017.4180835
- Gilpin, L. H., D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, "Explaining explanations: An overview of interpretability of machine learning", 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 2018, pp. 80-89.
- Grimmelikhuijsen, S., G. Porumbescu, B. Hong, and T. Im, "The effect of transparency on trust in government: A cross-national comparative experiment", Public Administration Review, Vol.73, No.4, 2013, pp. 575-586. https://doi.org/10.1111/puar.12047
- Hair, J. F., C. M. Ringle, and M. Sarstedt, "PLS-SEM: Indeed a silver bullet", Journal of Marketing Theory and Practice, Vol.19, No.2, 2011, pp. 139-152. https://doi.org/10.2753/MTP1069-6679190202
- Hair, J. F., J. J. Risher, M. Sarstedt, and C. M. Ringle, "When to use and how to report the results of PLS-SEM", European Business Review, Vol.31, No.1, 2019, pp. 2-24. https://doi.org/10.1108/EBR-11-2018-0203
- Hardt, M., E. Price, and N. Srebro, "Equality of opportunity in supervised learning", Advances in Neural Information Processing Systems, Vol.29, 2016, pp. 3315-3323.
- Hayes, A. F., Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd Ed.), The Guilford Press, 2022.
- Henseler, J., C. M. Ringle, and R. R. Sinkovics, "The use of partial least squares path modeling in international marketing", In New Challenges to International Marketing, Vol. 20, 2009, pp. 277-319. https://doi.org/10.1108/S1474-7979(2009)0000020014
- Howard, J., "Algorithms and the future of work", American Journal of Industrial Medicine, Vol.65, No.12, 2022, pp. 943-952. https://doi.org/10.1002/ajim.23429
- Hu, L. T. and P. M. Bentler, "Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives", Structural Equation Modeling: A Multidisciplinary Journal, Vol.6, No.1, 1999, pp. 1-55. https://doi.org/10.1080/10705519909540118
- Hu, P., Y. Zeng, D. Wang, and H. Teng, "Too much light blinds: The transparency-resistance paradox in algorithmic management", Computers in Human Behavior, Vol.161, 2024, 108403. https://doi.org/10.1016/j.chb.2024.108403
- Lee, S., J. Oh, and W. K. Moon, "Adopting voice assistants in online shopping: Examining the role of social presence, performance risk, and machine heuristic", International Journal of Human–Computer Interaction, Vol.39, No.14, 2023, pp. 2978-2992. https://doi.org/10.1080/10447318.2022.2089813
- Leventhal, G. S., "What should be done with equity theory? New approaches to the study of fairness in social relationships", In K. J. Gergen, M. S. Greenberg, and R. H. Willis (eds.), Social Exchange: Advances in Theory and Research, New York: Plenum Press, 1980, pp. 27-55.
- Lipton, Z. C., "The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery", Queue, Vol.16, No.3, 2018, pp. 31-57. https://doi.org/10.1145/3236386.3241340
- Liu, Y. and X. Sun, "Towards more legitimate algorithms: A model of algorithmic ethical perception, legitimacy, and continuous usage intentions of e-commerce platforms", Computers in Human Behavior, Vol.150, 2024, 108006. https://doi.org/10.1016/j.chb.2023.108006
- Lukács, A. and S. Váradi, "GDPR-compliant AI-based automated decision-making in the world of work", Computer Law & Security Review, Vol.50, 2023, 105848. https://doi.org/10.1016/j.clsr.2023.105848
- Marangunić, N. and A. Granić, "Technology acceptance model: A literature review from 1986 to 2013", Universal Access in the Information Society, Vol.14, 2015, pp. 81-95. https://doi.org/10.1007/s10209-014-0348-1
- Mayer, R. C., J. H. Davis, and F. D. Schoorman, "An integrative model of organizational trust", Academy of Management Review, Vol.20, No.3, 1995, pp. 709-734. https://doi.org/10.2307/258792
- McAfee, A. and E. Brynjolfsson, Machine, platform, crowd: Harnessing our digital future, WW Norton & Company, 2017.
- McBride, M., L. Carter, and C. Ntuen, "Human-machine trust, bias and automated decision aid acceptance", Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol.57, No.1, 2013, pp. 349-353.
- Mehrabi, N., F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, "A survey on bias and fairness in machine learning", ACM Computing Surveys, Vol.54, No.6, 2021, pp. 1-35. https://doi.org/10.1145/3457607
- Mitchell, M., S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, and T. Gebru, "Model cards for model reporting", Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, pp. 220-229.
- Morris, J. H. and D. J. Moberg, "Work organizations as contexts for trust and betrayal", In T. R. Sarbin, R. M. Carney, & C. Eoyang (Eds.), Citizen espionage: Studies in trust and betrayal, Praeger Publishers/Greenwood Publishing Group, 1994, pp. 163-187.
- O'Neil, C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown Publishing Group, 2016.
- Parasuraman, R. and V. Riley, "Humans and automation: Use, misuse, disuse, abuse", Human Factors, Vol.39, No.2, 1997, pp. 230-253. https://doi.org/10.1518/001872097778543886
- Raghavan, M., S. Barocas, J. Kleinberg, and K. Levy, "Mitigating bias in algorithmic hiring: Evaluating claims and practices", Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 469-481.
- Raji, I. D. and J. Buolamwini, "Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products", Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 429-435.
- Ribeiro, M. T., S. Singh, and C. Guestrin, "Why should I trust you?: Explaining the predictions of any classifier", Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135-1144.
- Ringle, C. M., M. Sarstedt, N. Sinkovics, and R. R. Sinkovics, "A perspective on using partial least squares structural equation modelling in data articles", Data in Brief, Vol.48, 2023, 109074. https://doi.org/10.1016/j.dib.2023.109074
- Sarstedt, M., J. F. Hair, M. Pick, B. D. Liengaard, L. Radomir, and C. M. Ringle, "Progress in partial least squares structural equation modeling use in marketing research in the last decade", Psychology & Marketing, Vol.39, No.5, 2022, pp. 1035-1064. https://doi.org/10.1002/mar.21640
- Selbst, A. D. and J. Powles, "Meaningful information and the right to explanation", International Data Privacy Law, Vol.7, No.4, 2017, pp. 233-242. https://doi.org/10.1093/idpl/ipx022
- Shin, D., "The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI", International Journal of Human-Computer Studies, Vol.146, 2021, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
- Spiller, S. A., G. J. Fitzsimons, J. G. Lynch, and G. H. McClelland, "Spotlights, floodlights, and the magic number zero: Simple effects tests in moderated regression", Journal of Marketing Research, Vol.50, No.2, 2013, pp. 277-288. https://doi.org/10.1509/jmr.12.0420
- Sundar, S. S. and J. Kim, "Machine heuristic: When we trust computers more than humans with our personal information", Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp. 1-9.
- Sundar, S. S., "The MAIN model: A heuristic approach to understanding technology effects on credibility", In M. J. Metzger and A. J. Flanagin (eds.), Digital Media, Youth, and Credibility, Cambridge, MA: The MIT Press, 2008, pp. 72-100.
- Suresh, H. and J. V. Guttag, "A framework for understanding unintended consequences of machine learning", arXiv preprint arXiv:1901.10002, 2019.
- Topol, E., Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Hachette UK, 2019.
- Venkatesh, V., M. G. Morris, G. B. Davis, and F. D. Davis, "User acceptance of information technology: Toward a unified view", MIS Quarterly, Vol.27, No.3, 2003, pp. 425-478. https://doi.org/10.2307/30036540
- Vorm, E. and D. Combs, "Integrating transparency, trust, and acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM)", International Journal of Human-Computer Interaction, Vol.38, 2022, pp. 1828-1845. https://doi.org/10.1080/10447318.2022.2070107
- Zhang, Y., Y. Weng, and J. Lund, "Applications of explainable artificial intelligence in diagnosis and surgery", Diagnostics, Vol.12, No.2, 2022, p. 237. https://doi.org/10.3390/diagnostics12020237
- Žliobaitė, I., "Measuring discrimination in algorithmic decision making", Data Mining and Knowledge Discovery, Vol.31, No.4, 2017, pp. 1060-1089. https://doi.org/10.1007/s10618-017-0506-1