• Search keyword: eXplainable Artificial Intelligence

Trend in eXplainable Machine Learning for Intelligent Self-organizing Networks (지능형 Self-Organizing Network를 위한 설명 가능한 기계학습 연구 동향)

  • D.S. Kwon;J.H. Na
    • Electronics and Telecommunications Trends / v.38 no.6 / pp.95-106 / 2023
  • As artificial intelligence has become commonplace in various fields, the transparency of AI in its development and implementation has become an important issue. In safety-critical areas, the explainability and understandability of artificial intelligence are being actively studied. Machine learning has been applied to make self-organizing networks (SONs) intelligent, but transparency in this application has been neglected despite the critical decisions made in the operation of mobile communication systems. We describe the concepts of eXplainable machine learning (ML), along with research trends, major issues, and research directions. After summarizing ML research on SON, we analyze the research directions for the explainable ML required in the intelligent SON of beyond-5G and 6G communication.

A Methodology for Bankruptcy Prediction in Imbalanced Datasets using eXplainable AI (데이터 불균형을 고려한 설명 가능한 인공지능 기반 기업부도예측 방법론 연구)

  • Heo, Sun-Woo;Baek, Dong Hyun
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.2 / pp.65-76 / 2022
  • Recently, not only traditional statistical techniques but also machine learning algorithms have been used to make more accurate bankruptcy predictions. However, the insolvency rate of companies dealing with financial institutions is very low, resulting in a data imbalance problem. Because data imbalance negatively affects the performance of artificial intelligence models, the imbalance must be addressed first. In addition, as artificial intelligence algorithms advance toward precise decision-making, regulatory pressure to secure the transparency of artificial intelligence models is gradually increasing, for example by mandating explanation functions for such models. Therefore, this study presents guidelines for an eXplainable Artificial Intelligence-based corporate bankruptcy prediction methodology that applies the SMOTE technique and the LIME algorithm to solve the data imbalance and model transparency problems in predicting corporate bankruptcy. The implications of this study are as follows. First, it was confirmed that SMOTE can effectively resolve the data imbalance issue, a problem easily overlooked in predicting corporate bankruptcy. Second, through the LIME algorithm, the basis for the machine learning model's bankruptcy predictions was visualized, and improvement priorities were derived for the financial variables that increase a company's possibility of bankruptcy. Third, the possibility of using SMOTE and LIME together was confirmed through a case application, expanding the scope of application of these algorithms in future research.
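
As a concrete illustration of the SMOTE-then-LIME pipeline the abstract describes, the following minimal Python sketch rebalances an imbalanced training set and explains a single prediction. The synthetic data, feature names, and the random-forest classifier are assumptions for the example, not the authors' setup.

```python
# Minimal sketch of the SMOTE + LIME pipeline described above.
# Dataset, features, and the random-forest model are illustrative assumptions.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# X: financial ratios per company, y: 1 = bankrupt, 0 = solvent (hypothetical data)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(int)  # ~5% insolvency rate: imbalanced

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# 1) Rebalance the training set with SMOTE
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# 2) Fit a classifier on the rebalanced data
model = RandomForestClassifier(random_state=0).fit(X_res, y_res)

# 3) Explain an individual prediction with LIME
feature_names = [f"ratio_{i}" for i in range(X.shape[1])]
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["solvent", "bankrupt"], mode="classification")
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())  # per-feature contributions to this prediction
```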

Injection Process Yield Improvement Methodology Based on eXplainable Artificial Intelligence (XAI) Algorithm (XAI(eXplainable Artificial Intelligence) 알고리즘 기반 사출 공정 수율 개선 방법론)

  • Ji-Soo Hong;Yong-Min Hong;Seung-Yong Oh;Tae-Ho Kang;Hyeon-Jeong Lee;Sung-Woo Kang
    • Journal of Korean Society for Quality Management / v.51 no.1 / pp.55-65 / 2023
  • Purpose: The purpose of this study is to propose an optimization process that improves product yield using process data. Recently, research on low-cost, high-efficiency production in manufacturing processes using machine learning or deep learning has continued. Accordingly, this study derives the major variables that affect product defects in the manufacturing process using an eXplainable Artificial Intelligence (XAI) method and then presents the optimal range of those variables to propose a methodology for improving product yield. Methods: This study uses the injection molding machine AI dataset released on the Korea AI Manufacturing Platform (KAMP) organized by KAIST. Using the XAI-based SHAP method, the major variables affecting product defects are extracted from the process data. XGBoost and LightGBM were used as learning algorithms, and five to six variables were extracted as the main process variables for the injection process. Subsequently, the optimal control range of each process variable is presented using the ICE (Individual Conditional Expectation) method. Finally, the product yield improvement methodology of this study is validated using test data. Results: In the injection process data, XGBoost achieved an improved defect rate of 0.21% and LightGBM an improved defect rate of 0.29%, improvements of 0.79%p and 0.71%p, respectively, over the existing defect rate of 1.00%. Conclusion: This study is a case study. A research methodology was proposed for the injection process, and the improvement in product yield was confirmed through verification.
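
The SHAP feature-ranking step and the ICE range-inspection step can be sketched as follows. The synthetic data and process-variable names are illustrative stand-ins for the KAMP dataset, and scikit-learn's individual-curve partial dependence display is used here as one way to draw ICE plots.

```python
# Sketch of the SHAP feature-ranking + ICE range-inspection steps described above.
# The synthetic data and parameter names are illustrative, not the KAMP dataset.
import numpy as np
import shap
import xgboost as xgb
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
feature_names = ["melt_temp", "injection_pressure", "cooling_time", "screw_speed"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0.8).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# 1) Rank process variables by mean |SHAP value|
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
ranked = sorted(zip(feature_names, importance), key=lambda t: -t[1])
print(ranked)  # top variables drive the defect prediction

# 2) Inspect ICE curves for a top variable to find a favorable control range
PartialDependenceDisplay.from_estimator(
    model, X, features=[0], feature_names=feature_names, kind="individual")
```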

A Study on the Educational Meaning of eXplainable Artificial Intelligence for Elementary Artificial Intelligence Education (초등 인공지능 교육을 위한 설명 가능한 인공지능의 교육적 의미 연구)

  • Park, Dabin;Shin, Seungki
    • Journal of The Korean Association of Information Education / v.25 no.5 / pp.803-812 / 2021
  • This study explored the concept of explainable artificial intelligence and its problem-solving process through a literature review, and on that basis presented the educational meaning and application plan of explainable artificial intelligence. XAI education is human-centered artificial intelligence education that deals with artificial intelligence problems involving humans, through which students can cultivate problem-solving skills. In addition, through education on algorithms, students can understand the principles of artificial intelligence, explain artificial intelligence models related to real-life problem situations, and extend their learning to fields where artificial intelligence is applied. For such XAI education to be applied in elementary schools, examples related to the real world must be used, and algorithms that are themselves interpretable are recommended. Furthermore, various teaching and learning methods and tools should be used so that understanding progresses toward explanation. Ahead of the introduction of artificial intelligence in the 2022 revised curriculum, we hope this study will serve as a meaningful basis for actual classes.

Performance improvement of artificial neural network based water quality prediction model using explainable artificial intelligence technology (설명가능한 인공지능 기술을 이용한 인공신경망 기반 수질예측 모델의 성능향상)

  • Lee, Won Jin;Lee, Eui Hoon
    • Journal of Korea Water Resources Association / v.56 no.11 / pp.801-813 / 2023
  • Recently, as studies on Artificial Neural Networks (ANNs) have progressed actively, studies predicting river water quality using ANNs have been conducted. However, because an ANN is a black box, it is difficult to analyze its internal operation. Although eXplainable Artificial Intelligence (XAI) is used to analyze the computational process of ANNs, research using XAI technology in the field of water resources is insufficient. This study analyzed a Multi-Layer Perceptron (MLP) that predicts Water Temperature (WT), Dissolved Oxygen (DO), hydrogen ion concentration (pH), and Chlorophyll-a (Chl-a) at the Dasan water quality observatory on the Nakdong River, using Layer-wise Relevance Propagation (LRP) among XAI technologies. The MLP trained on water quality data was analyzed using LRP to select the optimal input data for predicting water quality, and the prediction results of the MLP trained on the optimal input data were analyzed. As a result of the selection, the MLP trained on the input data excluding daily precipitation in the surrounding area showed the highest prediction accuracy. In the analysis of the MLP's DO predictions, pH and DO had a large influence at the highest point, and WT had a large effect at the lowest point.
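
LRP redistributes a network's output backwards, layer by layer, so that each input feature receives a relevance score. A minimal numpy sketch of the epsilon rule on a tiny ReLU MLP is shown below; the random weights and four stand-in inputs are assumptions, not the paper's trained model.

```python
# Minimal numpy sketch of epsilon-rule Layer-wise Relevance Propagation (LRP)
# on a tiny ReLU MLP; weights and inputs are random stand-ins, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # hidden -> output

def forward(x):
    a1 = np.maximum(0.0, W1 @ x + b1)           # hidden activations (ReLU)
    out = W2 @ a1 + b2                          # regression output
    return a1, out

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from a linear layer's outputs to its inputs."""
    z = W @ x + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer
    s = R_out / z                               # normalized relevance per output
    return x * (W.T @ s)                        # relevance per input

x = rng.normal(size=4)                          # e.g., WT, pH, DO, Chl-a inputs
a1, out = forward(x)

R2 = out                                        # start: relevance = output value
R1 = lrp_linear(a1, W2, b2, R2)                 # hidden-layer relevance
R0 = lrp_linear(x, W1, b1, R1)                  # input-feature relevance
print(R0, R0.sum(), out)                        # relevances approximately sum to the output
```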

Explainable Artificial Intelligence Applied in Deep Learning for Review Helpfulness Prediction (XAI 기법을 이용한 리뷰 유용성 예측 결과 설명에 관한 연구)

  • Dongyeop Ryu;Xinzhe Li;Jaekyeong Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.35-56 / 2023
  • With the development of information and communication technology, numerous reviews are continuously posted on websites, causing information overload. Users therefore have difficulty exploring reviews for their decision-making. To solve this problem, many studies on review helpfulness prediction have been conducted to provide users with helpful and reliable reviews. Existing studies predict review helpfulness mainly based on the features included in the review; however, they cannot provide the reason why the predicted reviews are helpful. Therefore, this study proposes a methodology that applies eXplainable Artificial Intelligence (XAI) techniques to review helpfulness prediction to address this limitation. The study uses restaurant reviews collected from Yelp.com to compare the prediction performance of six models widely used in previous studies, and then builds an explainable review helpfulness prediction model by applying an XAI technique to the model with the best prediction performance. The proposed methodology can therefore recommend helpful reviews in the user's purchase decision-making process and provide an interpretation of why the predicted reviews are helpful.
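
The abstract does not name the specific XAI technique applied to the best-performing model; as one plausible sketch, SHAP values on a gradient-boosting regressor over simple hand-crafted review features could produce the per-review interpretation described. Everything in the snippet (features, data, model) is an assumption for illustration.

```python
# Hedged sketch of the pipeline above. The abstract does not name the XAI technique;
# SHAP on a gradient-boosting model over review features is one plausible choice.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical per-review features: length, star rating, reviewer review count, photo count
feature_names = ["review_length", "star_rating", "reviewer_count", "photo_count"]
X = rng.normal(size=(400, 4))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=400)  # helpfulness votes (proxy)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Explain why one review is predicted to be helpful
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
for name, sv in zip(feature_names, shap_values[0]):
    print(f"{name}: {sv:+.3f}")  # signed contribution to predicted helpfulness
```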

A Study on the Defect Detection of Fabrics using Deep Learning (딥러닝을 이용한 직물의 결함 검출에 관한 연구)

  • Eun Su Nam;Yoon Sung Choi;Choong Kwon Lee
    • Smart Media Journal / v.11 no.11 / pp.92-98 / 2022
  • Identifying defects in textiles is a key procedure for quality control. This study attempted to create a model that detects defects by analyzing images of fabrics. The models used in the study were the deep-learning-based VGGNet and ResNet, and the defect detection performance of the two models was compared and evaluated. The accuracy of the VGGNet and ResNet models was 0.859 and 0.893, respectively, showing the higher accuracy of ResNet. In addition, the model's region of attention was derived using the Grad-CAM algorithm, an eXplainable Artificial Intelligence (XAI) technique, to find the locations the deep learning model recognized as defects in the fabric images. As a result, it was confirmed that the regions the model recognized as defects were actually defective even to the naked eye. The results of this study are expected to reduce the time and cost incurred in the fabric production process by utilizing deep-learning-based artificial intelligence for defect detection in the textile industry.
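
Grad-CAM weights the last convolutional feature maps by their pooled gradients to localize what drove a prediction. A minimal PyTorch sketch on a ResNet-18 is shown below; the untrained network and random input stand in for the paper's trained model and fabric images.

```python
# Minimal PyTorch sketch of Grad-CAM on a ResNet, as used above for fabric images.
# The untrained ResNet-18 and random input stand in for the paper's model and data.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]

layer = model.layer4[-1]                       # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                # stand-in fabric image
logits = model(x)
logits[0, logits.argmax()].backward()          # gradient of the predicted class

# Grad-CAM: channel weights = global-average-pooled gradients
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input image
```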

A reliable intelligent diagnostic assistant for nuclear power plants using explainable artificial intelligence of GRU-AE, LightGBM and SHAP

  • Park, Ji Hun;Jo, Hye Seon;Lee, Sang Hyun;Oh, Sang Won;Na, Man Gyun
    • Nuclear Engineering and Technology / v.54 no.4 / pp.1271-1287 / 2022
  • When abnormal operating conditions occur in nuclear power plants, operators must identify the cause and implement the necessary mitigation measures. Accordingly, the operator must rapidly and accurately analyze the symptom requirements of more than 200 abnormal scenarios from the trends of many variables to perform diagnostic tasks and implement mitigation actions. However, the probability of human error increases owing to the characteristics of the diagnostic tasks performed by the operator. Research on diagnostic tasks based on Artificial Intelligence (AI) has been conducted recently to reduce the likelihood of human error; however, reliability issues due to the black-box characteristics of AI have been pointed out. Hence, the application of eXplainable Artificial Intelligence (XAI), which can provide operators with the evidence behind AI diagnoses, is considered. In conclusion, XAI is incorporated into the AI-based diagnostic algorithm to solve its reliability problem. A reliable intelligent diagnostic assistant based on a merged diagnostic algorithm, in the form of an operator support system, is developed and includes an interface to inform operators efficiently.
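
Of the components named in the title, the GRU autoencoder (GRU-AE) can be sketched as a sequence reconstructor whose error flags abnormal sensor trends. The layer sizes, sequence length, and variable count below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a GRU autoencoder (GRU-AE) for flagging abnormal sensor sequences,
# one component of the merged algorithm named in the title. Sizes are illustrative.
import torch
import torch.nn as nn

class GRUAutoencoder(nn.Module):
    def __init__(self, n_vars=20, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_vars, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_vars)

    def forward(self, x):                     # x: (batch, time, n_vars)
        _, h = self.encoder(x)                # h: (1, batch, hidden) summary state
        z = h.transpose(0, 1).repeat(1, x.size(1), 1)  # repeat code over time
        dec, _ = self.decoder(z)
        return self.out(dec)                  # reconstruction of the sequence

model = GRUAutoencoder()
x = torch.randn(8, 50, 20)                    # 8 sequences of 50 steps, 20 plant variables
recon = model(x)
# High reconstruction error signals an abnormal operating condition
error = ((recon - x) ** 2).mean(dim=(1, 2))
print(error)
```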

A Personal Credit Rating Using Convolutional Neural Networks with Transformation of Credit Data to Imaged Data and eXplainable Artificial Intelligence(XAI) (신용 데이터의 이미지 변환을 활용한 합성곱 신경망과 설명 가능한 인공지능(XAI)을 이용한 개인신용평가)

  • Won, Jong Gwan;Hong, Tae Ho;Bae, Kyoung Il
    • The Journal of Information Systems / v.30 no.4 / pp.203-226 / 2021
  • Purpose The purpose of this study is to enhance the accuracy of personal credit scoring using convolutional neural networks and to secure the transparency of the deep learning model using an eXplainable Artificial Intelligence (XAI) technique. Design/methodology/approach This study built a classification model using convolutional neural networks (CNNs) and applied a methodology that transforms numerical data into imaged data so that a CNN can be applied to personal credit data. Layer-wise relevance propagation (LRP) was then applied to the constructed model to find which variables most influenced the output value. Findings According to the empirical analysis, the accuracy of the CNN model was the highest among the compared models, which used logistic regression, neural networks, and support vector machines. In addition, with LRP, one of the XAI techniques, the variables that had a great influence on the output value could be found for each observation.
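
The numeric-to-image transformation the abstract describes can be sketched as scaling each applicant's feature vector and reshaping it into a small grid that a CNN can read. The grid size, feature count, and the tiny network below are assumptions for illustration.

```python
# Sketch of the numeric-to-image transformation described above: each applicant's
# feature vector is min-max scaled and reshaped into a small grid a CNN can read.
# Grid size, feature count, and the tiny CNN are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 36))                      # 36 credit features per applicant

# Min-max scale each feature to [0, 1], then reshape to a 6x6 single-channel "image"
X01 = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-8)
images = torch.tensor(X01.reshape(-1, 1, 6, 6), dtype=torch.float32)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 6 * 6, 2),                       # good / bad credit logits
)
print(cnn(images).shape)                            # (256, 2)
```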

Trends in XAI Technology in the Information Security Field (정보보호 분야의 XAI 기술 동향)

  • Kim, Hongbi;Lee, Taejin
    • Review of KIISC / v.31 no.5 / pp.21-31 / 2021
  • With the advancement of computer technology, the adoption of Machine Learning (ML) and Artificial Intelligence (AI) is actively progressing, and their use is also increasing in the information security field. However, because these models have black-box characteristics, their decision-making processes are difficult to understand. In particular, in information security environments where the risk of false detection is high, this problem is a significant obstacle to the wide use of AI technology. To solve it, research on eXplainable Artificial Intelligence (XAI) methodologies is attracting attention. XAI emerged to compensate for the difficulty of interpreting AI predictions; it can show the AI's learning process transparently and provide confidence in its predictions. This paper explains the concept of and need for XAI technology and describes cases in which XAI methodologies have been applied in the information security field. It also presents XAI evaluation methods and discusses the results of applying XAI methodologies to a security system. By providing human-centered interpretations of AI judgments, XAI technology is expected to help reduce the analysis and decision-making time of security personnel, who must process large amounts of analysis data with limited manpower.