• Title/Summary/Keyword: machine learning

Search Results: 5,099

Trend of Utilization of Machine Learning Technology for Digital Healthcare Data Analysis (디지털 헬스케어 데이터 분석을 위한 머신 러닝 기술 활용 동향)

  • Woo, Y.C.;Lee, S.Y.;Choi, W.;Ahn, C.W.;Baek, O.K.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.1
    • /
    • pp.98-110
    • /
    • 2019
  • Machine learning has been applied to medical imaging and has shown excellent recognition rates. Recently, there has been much interest in preventive medicine. If data are accessible, machine learning packages can be used easily in digital healthcare fields. However, it is necessary to prepare the data in advance, and model evaluation and tuning are required to construct a reliable model; on average, these processes take more than 80% of the total effort. In this study, we describe the basic concepts of machine learning, pre-processing and visualization of datasets, feature engineering for reliable models, model evaluation and tuning, and the latest trends in popular machine learning frameworks. Finally, we survey an explainable machine learning analysis tool and discuss the future direction of machine learning.
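
A minimal sketch of the prepare-evaluate-tune workflow this abstract describes, using scikit-learn on a synthetic dataset; the dataset, model, and parameter grid are illustrative assumptions, not taken from the paper.

```python
# Preprocess -> train -> tune -> evaluate, chained in one pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Pre-processing and the model are chained so that tuning sees the whole pipeline.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Model tuning: search a small hyper-parameter grid with cross-validation.
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)

# Model evaluation on held-out data.
print("best params:", grid.best_params_,
      "test accuracy:", accuracy_score(y_test, grid.predict(X_test)))
```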

Learning of Adaptive Behavior of artificial Ant Using Classifier System (분류자 시스템을 이용한 인공개미의 적응행동의 학습)

  • 정치선;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.10a
    • /
    • pp.361-367
    • /
    • 1998
  • The two main applications of Genetic Algorithms (GA) are optimization and machine learning. Machine learning has two objectives: to make a complex system learn its environment and to produce the proper output of the system. Machine learning using Genetic Algorithms is called GA machine learning or genetics-based machine learning (GBML). Machine learning differs from optimization in the kind of solution it seeks: in optimization problems, the GA population should converge to the best individual, because the objective is to produce an individual near the optimal solution, whereas machine learning systems need to find a set of cooperative rules. There are two methods in GBML, the Michigan method and the Pittsburgh method. In the former, each rule is expressed as a string; in the latter, the whole set of rules is coded into a string. Holland's classifier system is the representative model of the Michigan method. Classifier systems adjust the strengths of the classifiers in the classifier list using the message list. With this method, real-time processing and on-line learning are possible because the rule set is adjusted on-line. A classifier system has three major components: a performance system, an apportionment-of-credit system, and a rule discovery system. In this paper, we solve a food-search problem through the learning and evolution of an artificial ant using a learning classifier system.
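
A toy Python sketch of the Michigan-style classifier list and strength update referred to above: each classifier is a (condition, action, strength) rule, matching rules bid a fraction of their strength, and credit is passed back bucket-brigade style. The rules, bid ratio, and rewards here are invented for illustration only.

```python
import random

BID_RATIO = 0.1

def matches(condition, message):
    # '#' is a wildcard position, as in Holland's classifier systems.
    return all(c == '#' or c == m for c, m in zip(condition, message))

classifiers = [
    {"cond": "1#0", "action": "left",  "strength": 10.0},
    {"cond": "10#", "action": "right", "strength": 10.0},
    {"cond": "###", "action": "stay",  "strength": 10.0},
]

def step(message, reward, previous_winner=None):
    matched = [c for c in classifiers if matches(c["cond"], message)]
    if not matched:
        return previous_winner
    # Winner-take-all auction weighted by bid (strength * BID_RATIO), with tiny noise to break ties.
    winner = max(matched, key=lambda c: c["strength"] * BID_RATIO + random.random() * 1e-3)
    bid = winner["strength"] * BID_RATIO
    winner["strength"] -= bid
    if previous_winner is not None:
        previous_winner["strength"] += bid      # bucket brigade: pay the rule that set the stage
    winner["strength"] += reward                # environmental payoff (e.g., food found)
    return winner

prev = None
for msg, r in [("100", 0.0), ("110", 0.0), ("100", 5.0)]:
    prev = step(msg, r, prev)
print([(c["cond"], round(c["strength"], 2)) for c in classifiers])
```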


Machine Learning and Deep Learning Models to Predict Income and Employment with Busan's Strategic Industry and Export (머신러닝과 딥러닝 기법을 이용한 부산 전략산업과 수출에 의한 고용과 소득 예측)

  • Chae-Deug Yi
    • Korea Trade Review
    • /
    • v.46 no.1
    • /
    • pp.169-187
    • /
    • 2021
  • This paper analyzes the feasibility of using machine learning and deep learning methods to forecast income and employment from the strategic industries as well as investment, exports, and exchange rates. Decision tree, artificial neural network, support vector machine, and deep learning models were used to forecast income and employment in Busan. The comparison of their predictive abilities yielded the following main findings. First, the decision tree models predict income and employment well, although the forecast values vary somewhat with the depth of the decision trees and with the conditions on the strategic industries, investment, exports, and exchange rates. Second, the artificial neural network models show somewhat low coefficients of determination and somewhat high RMSEs, so they are not well suited to forecasting income and employment. Third, the support vector machine models show high predictive power, with high coefficients of determination and low RMSEs. Fourth, the deep neural network models show even higher predictive power with appropriate epochs and batch sizes. Since the machine learning and deep learning models predict employment well, they should be adopted to forecast income and employment.
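
A hedged sketch of the kind of model comparison the abstract describes: decision tree, support vector, and neural-network regressors scored by RMSE and the coefficient of determination. The data below are synthetic stand-ins; the study's Busan industry and export series are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

X, y = make_regression(n_samples=300, n_features=6, noise=10.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "decision tree": DecisionTreeRegressor(max_depth=5, random_state=1),
    "SVM (RBF)": SVR(C=10.0),
    "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name:15s} RMSE={rmse:8.2f}  R^2={r2_score(y_te, pred):.3f}")
```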

Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn;Seokil Song
    • Journal of Platform Technology
    • /
    • v.11 no.4
    • /
    • pp.71-79
    • /
    • 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computationally intensive task. Performing machine learning workloads in a Kubernetes-based cloud environment in which the computing framework and storage are separated allows resources to be allocated effectively, but delays can occur because I/O must go through network communication. In this paper, we propose a distributed in-memory caching technique to improve the performance of machine learning workloads in such an environment. In particular, we propose a new method of pre-caching the data required by machine learning workloads into the distributed in-memory cache, taking Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool, into account.
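
A storage-agnostic sketch of the pre-caching idea above: before training starts, a pipeline step warms a distributed in-memory cache with the objects the workload will read, so training I/O hits the cache instead of remote storage. The CacheClient and RemoteStore interfaces are hypothetical placeholders, not the paper's or Kubeflow's actual APIs.

```python
from typing import List, Optional, Protocol

class RemoteStore(Protocol):
    def get(self, key: str) -> bytes: ...

class CacheClient(Protocol):
    def put(self, key: str, value: bytes) -> None: ...
    def get(self, key: str) -> Optional[bytes]: ...

def precache(keys: List[str], store: RemoteStore, cache: CacheClient) -> None:
    """Pre-caching step: run before the training step of the pipeline."""
    for key in keys:
        if cache.get(key) is None:          # skip objects already cached
            cache.put(key, store.get(key))  # one remote read, many cached reads later

def read_for_training(key: str, store: RemoteStore, cache: CacheClient) -> bytes:
    """Training-time read path: cache first, remote storage as fallback."""
    cached = cache.get(key)
    return cached if cached is not None else store.get(key)
```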


Development of Medical Cost Prediction Model Based on the Machine Learning Algorithm (머신러닝 알고리즘 기반의 의료비 예측 모델 개발)

  • Han Bi KIM;Dong Hoon HAN
    • Journal of Korea Artificial Intelligence Association
    • /
    • v.1 no.1
    • /
    • pp.11-16
    • /
    • 2023
  • Accurate hospital case modeling and prediction are crucial for efficient healthcare. In this study, we demonstrate the implementation of regression analysis methods in machine learning systems utilizing mathematical statistics and machine learning techniques. The developed machine learning models include Bayesian linear, artificial neural network, decision tree, decision forest, and linear regression models. Through the application of these algorithms, the corresponding regression models were constructed and analyzed. The results suggest the potential of leveraging machine learning systems for medical research. The experiment aimed to create an Azure Machine Learning Studio tool for the speedy evaluation of multiple regression models. The tool facilitates the comparison of five types of regression models in a unified experiment and presents assessment results with performance metrics. Evaluation of the regression models highlighted the advantages of boosted decision tree regression and decision forest regression for hospital case prediction. These findings could lay the groundwork for new directions in medical data processing and decision making. Potential avenues for future research include clustering, classification, and anomaly detection in healthcare systems.
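
An illustrative sketch, not the paper's Azure Machine Learning Studio pipeline: a boosted-tree regressor and a forest regressor are compared with cross-validated RMSE on synthetic stand-in data for a cost-style target.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=400, n_features=8, noise=15.0, random_state=7)

for name, model in [("boosted decision tree", GradientBoostingRegressor(random_state=7)),
                    ("decision forest", RandomForestRegressor(n_estimators=200, random_state=7))]:
    # Negated scorer -> positive RMSE values per fold.
    rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name:22s} mean RMSE={rmse.mean():.2f} (+/- {rmse.std():.2f})")
```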

Feasibility Study of Google's Teachable Machine in Diagnosis of Tooth-Marked Tongue

  • Jeong, Hyunja
    • Journal of dental hygiene science
    • /
    • v.20 no.4
    • /
    • pp.206-212
    • /
    • 2020
  • Background: Teachable Machine is a web-based machine learning tool for the general public. In this paper, the feasibility of Google's Teachable Machine (ver. 2.0) was studied for the diagnosis of the tooth-marked tongue. Methods: For machine learning of tooth-marked tongue diagnosis, a total of 1,250 tongue images from Kaggle's web site were used. Ninety percent of the images were used for the training data set, and the remaining 10% were used for the test data set. Using Google's Teachable Machine (ver. 2.0), machine learning was performed on the separated images. To optimize the machine learning parameters, I measured the diagnostic accuracy for different values of the epoch, batch size, and learning rate. After hyper-parameter tuning, ROC (receiver operating characteristic) analysis was used to determine the sensitivity (true positive rate, TPR) and the false positive rate (FPR) of the machine learning model in diagnosing the tooth-marked tongue. Results: To evaluate the usefulness of the Teachable Machine for clinical application, I used 634 tooth-marked tongue images and 491 non-tooth-marked tongue images for machine learning. The diagnostic accuracy was best when the epoch, learning rate, and batch size were 75, 0.0001, and 128, respectively. The accuracies for the tooth-marked tongue and the non-tooth-marked tongue were 92.1% and 72.6%, respectively, and the sensitivity (TPR) and false positive rate (FPR) were 0.92 and 0.28, respectively. Conclusion: These results are more accurate than Li's experimental results obtained with a convolutional neural network. Google's Teachable Machine shows good performance after hyper-parameter tuning in the diagnosis of the tooth-marked tongue. We confirmed that the tool is useful for several clinical applications.
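
A small sketch of the sensitivity and false-positive-rate computation used to evaluate such a binary classifier; the labels and scores below are made up for illustration, not taken from the study's 1,250 images.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # 1 = tooth-marked, 0 = not marked
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.2, 0.6, 0.95, 0.1, 0.65, 0.3])
y_pred = (y_score >= 0.5).astype(int)                 # threshold the model's scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true positive rate (TPR)
fpr = fp / (fp + tn)                  # false positive rate = 1 - specificity
print(f"TPR={sensitivity:.2f}  FPR={fpr:.2f}  AUC={roc_auc_score(y_true, y_score):.2f}")
```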

Research Trends in Wi-Fi Performance Improvement in Coexistence Networks with Machine Learning (기계학습을 활용한 이종망에서의 Wi-Fi 성능 개선 연구 동향 분석)

  • Kang, Young-myoung
    • Journal of Platform Technology
    • /
    • v.10 no.3
    • /
    • pp.51-59
    • /
    • 2022
  • Machine learning, which has recently seen rapid and innovative development, has become an important technology for solving various optimization problems. In this paper, we introduce the latest research papers that solve the channel-sharing problem in heterogeneous coexistence networks using machine learning, analyze the characteristics of the mainstream approaches, and present a guide to future research directions. Existing studies have generally adopted Q-learning because it supports fast learning in both online and offline environments. However, these studies have either not considered various coexistence scenarios or have not adequately considered the placement of the machine learning controller, which can have a significant impact on network performance. One powerful way to overcome these shortcomings is to select the machine learning algorithm according to changes in the network environment, based on the logical network architecture for machine learning proposed by the ITU.
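
A toy Q-learning sketch in the spirit of the surveyed channel-sharing work: an agent chooses between two channel-access actions and learns from a made-up reward signal. The environment is a deliberately simplified stand-in, not any specific coexistence scenario from the surveyed papers.

```python
import random

ACTIONS = ["transmit", "defer"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = {(busy, a): 0.0 for busy in (0, 1) for a in ACTIONS}   # state = channel busy or idle

def reward(busy, action):
    if action == "transmit":
        return -1.0 if busy else 1.0     # collision penalty vs. successful send
    return 0.0                           # deferring is neutral

state = random.randint(0, 1)
for _ in range(5000):
    # Epsilon-greedy action selection.
    action = random.choice(ACTIONS) if random.random() < EPSILON \
             else max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.randint(0, 1)    # channel occupancy evolves randomly in this toy model
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```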

A study on the standardization strategy for building of learning data set for machine learning applications (기계학습 활용을 위한 학습 데이터세트 구축 표준화 방안에 관한 연구)

  • Choi, JungYul
    • Journal of Digital Convergence
    • /
    • v.16 no.10
    • /
    • pp.205-212
    • /
    • 2018
  • With the development of high-performance CPUs/GPUs, artificial intelligence algorithms such as deep neural networks, and large amounts of data, machine learning has been extended to various applications. In particular, the large volumes of data collected from the Internet of Things, social network services, web pages, and public data are accelerating the use of machine learning. Learning data sets for machine learning exist in various formats according to application field and data type, which makes it difficult to process the data effectively and apply them to machine learning. Therefore, this paper studies a method for building learning data sets for machine learning in accordance with standardized procedures. The paper first analyzes the requirements of learning data sets according to problem type and data type. Based on this analysis, it presents a reference model for building learning data sets for machine learning applications, together with the target standardization organizations and a standards development strategy.
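
A hedged sketch of what a standardized learning-data-set description could look like in code; the fields below are illustrative guesses at the kind of metadata such a reference model would cover, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LearningDataSet:
    name: str
    problem_type: str            # e.g. "classification", "regression", "clustering"
    data_type: str               # e.g. "image", "text", "tabular", "time-series"
    source: str                  # where the raw data came from (IoT, SNS, public data, ...)
    license: str
    label_schema: dict = field(default_factory=dict)   # label name -> description
    splits: dict = field(default_factory=dict)         # e.g. {"train": 0.8, "test": 0.2}

example = LearningDataSet(
    name="traffic-sign-sample",
    problem_type="classification",
    data_type="image",
    source="public data portal",
    license="CC BY 4.0",
    label_schema={"stop": "stop sign", "yield": "yield sign"},
    splits={"train": 0.8, "validation": 0.1, "test": 0.1},
)
print(example)
```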

COMPARATIVE ANALYSIS ON MACHINE LEARNING MODELS FOR PREDICTING KOSPI200 INDEX RETURNS

  • Gu, Bonsang;Song, Joonhyuk
    • The Pure and Applied Mathematics
    • /
    • v.24 no.4
    • /
    • pp.211-226
    • /
    • 2017
  • In this paper, machine learning models employed in various fields are discussed and applied to forecasting KOSPI200 stock index returns. The results of a hyperparameter analysis of the machine learning models are also reported, and practical methods for each model are presented. The analysis shows that the Support Vector Machine and Artificial Neural Network performed better than k-Nearest Neighbor and Random Forest.
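
A sketch of this kind of model comparison on synthetic return data: k-NN, random forest, SVM, and a small neural network classify next-day return direction from lagged returns. The data, features, and settings are stand-ins, not the paper's KOSPI200 series or its hyperparameter grids.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 1200)                      # synthetic daily returns
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)                     # next-day direction

split = int(0.8 * len(X))                                # chronological split, no shuffling
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("k-NN", KNeighborsClassifier(5)),
                    ("random forest", RandomForestClassifier(200, random_state=0)),
                    ("SVM", SVC(C=1.0)),
                    ("neural net", MLPClassifier((16,), max_iter=2000, random_state=0))]:
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:14s} directional accuracy = {acc:.3f}")
```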

Design of Fuzzy Pattern Classifier based on Extreme Learning Machine (Extreme Learning Machine 기반 퍼지 패턴 분류기 설계)

  • Ahn, Tae-Chon;Roh, Sok-Beom;Hwang, Kuk-Yeon;Wang, Jihong;Kim, Yong Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.5
    • /
    • pp.509-514
    • /
    • 2015
  • In this paper, we introduce a new pattern classifier based on the learning algorithm of the Extreme Learning Machine, a type of artificial neural network, combined with fuzzy set theory, which is well known to be robust to noise. The learning algorithm used in the Extreme Learning Machine is faster than that of conventional artificial neural networks, and its key advantage is its generalization ability for both regression and classification problems. To evaluate the classification ability of the proposed pattern classifier, we conduct experiments on several machine learning data sets.
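
A compact numpy sketch of the Extreme Learning Machine idea the classifier builds on: hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares. The fuzzy part of the paper's classifier is not reproduced here; this is the ELM core only, on made-up two-class data.

```python
import numpy as np

def elm_fit(X, y_onehot, n_hidden=50, rng=np.random.default_rng(0)):
    W = rng.normal(size=(X.shape[1], n_hidden))           # random input weights (never trained)
    b = rng.normal(size=n_hidden)                         # random hidden biases
    H = np.tanh(X @ W + b)                                # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)   # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Tiny usage example on random two-class data (illustrative only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.repeat([0, 1], 50)
Y = np.eye(2)[y]                                          # one-hot targets
W, b, beta = elm_fit(X, Y)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```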