• Title/Summary/Keyword: machine learning

Search Results: 5,156

ACCELERATION OF MACHINE LEARNING ALGORITHMS BY TCHEBYCHEV ITERATION TECHNIQUE

  • Levin, Mikhail P.
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.22 no.1 / pp.15-28 / 2018
  • Machine Learning algorithms are now widely used to process Big Data in various applications, and many of these applications run in real time, so the speed of Machine Learning algorithms is a critical issue. However, most modern iterative Machine Learning algorithms use a successive iteration technique well known in Numerical Linear Algebra. This technique converges very slowly, requires many iterations to solve the problems under consideration, and therefore takes a long time even on modern multi-core computers and clusters. The Tchebychev iteration technique, also well known in Numerical Linear Algebra, is an attractive candidate for decreasing the number of iterations in iterative Machine Learning algorithms and thus their running time, which is especially important in real-time applications. In this paper we consider the use of Tchebychev iterations to accelerate the well-known K-Means and SVM (Support Vector Machine) clustering algorithms in Machine Learning. Examples of our approach on modern multi-core computers under the Apache Spark framework are considered and discussed.
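The Tchebychev (Chebyshev) iteration the abstract refers to is a standard Numerical Linear Algebra technique. As a minimal sketch of the underlying idea only (not the paper's K-Means/SVM application), here is Chebyshev iteration for a symmetric positive-definite linear system, assuming bounds lmin and lmax on the eigenvalues are known:

```python
import numpy as np

def chebyshev_iteration(A, b, lmin, lmax, iters=50):
    """Chebyshev iteration for SPD Ax = b with spectrum in [lmin, lmax].

    Compared with plain successive (Richardson) iteration, the step sizes
    follow a Chebyshev recurrence, which greatly reduces the number of
    iterations needed for a given accuracy.
    """
    d = (lmax + lmin) / 2.0  # center of the spectrum
    c = (lmax - lmin) / 2.0  # half-width of the spectrum
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = np.zeros_like(b, dtype=float)
    alpha = 0.0
    for i in range(iters):
        if i == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = 0.5 * (c * alpha) ** 2 if i == 1 else (c * alpha / 2.0) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = b - A @ x
    return x
```

The same acceleration idea carries over to fixed-point iterations with linear error propagation, which is what the paper exploits for K-Means and SVM under Spark.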

Trend Analysis of Korea Papers in the Fields of 'Artificial Intelligence', 'Machine Learning' and 'Deep Learning' ('인공지능', '기계학습', '딥 러닝' 분야의 국내 논문 동향 분석)

  • Park, Hong-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.4 / pp.283-292 / 2020
  • Artificial intelligence, one of the representative technologies of the 4th industrial revolution, has received wide recognition since 2016. This paper analyzes domestic paper trends for 'Artificial Intelligence', 'Machine Learning', and 'Deep Learning' among the domestic papers provided by the Korea Academic Education and Information Service. Approximately 10,000 papers were retrieved, and word-count analysis, topic modeling, and semantic network analysis were used to analyze the trends. Compared to 2015, the number of papers in 2016 increased by 600% in artificial intelligence, 176% in machine learning, and 316% in deep learning. In machine learning, support vector machine models have been actively studied, and in deep learning, convolutional neural networks implemented with TensorFlow are widely used. This paper can help set future research directions in the fields of 'artificial intelligence', 'machine learning', and 'deep learning'.

Research Trend on Machine Learning Healthcare Based on Keyword Frequency and Centrality Analysis : Focusing on the United States, the United Kingdom, Korea (키워드 빈도 및 중심성 분석 기반의 머신러닝 헬스케어 연구 동향 : 미국·영국·한국을 중심으로)

  • Lee, Taekkyeun
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.3 / pp.149-163 / 2023
  • In this study we analyze research trends in machine learning healthcare based on papers from the United States, the United Kingdom, and Korea. From Elsevier's Scopus, we collected 3,425 papers related to machine learning healthcare published from 2018 to 2022. Keyword frequency and centrality analyses were conducted on the abstracts of the collected papers. We identified frequently appearing keywords by calculating keyword frequency and found central research keywords through centrality analysis by country. The results show that research related to machine learning, deep learning, healthcare, and the COVID virus was the most central and highly mediating research in each country. As an implication, studies related to electronic health information-based treatment, natural language processing, and privacy have lower degree centrality and betweenness centrality in Korea than in the United States and the United Kingdom, so various convergence research applying machine learning is needed in these fields.
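The keyword frequency and centrality computations described above can be sketched in a few lines of plain Python. The abstracts and keywords below are invented for illustration, and degree centrality is taken in its usual normalized form: the fraction of other keywords a keyword co-occurs with.

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets extracted from three abstracts.
abstracts = [
    {"machine learning", "healthcare", "deep learning"},
    {"machine learning", "covid", "healthcare"},
    {"deep learning", "privacy"},
]

# Keyword frequency: in how many abstracts each keyword appears.
freq = Counter(kw for doc in abstracts for kw in doc)

# Co-occurrence network: an edge links two keywords that share an abstract.
edges = set()
for doc in abstracts:
    edges.update(frozenset(pair) for pair in combinations(doc, 2))

# Degree centrality: number of neighbors / number of other nodes.
nodes = set(freq)
degree = {n: sum(1 for e in edges if n in e) / (len(nodes) - 1) for n in nodes}
```

A low centrality for a keyword such as "privacy" in this toy network mirrors the kind of gap the study reports for Korean papers.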

Recent advances in deep learning-based side-channel analysis

  • Jin, Sunghyun;Kim, Suhri;Kim, HeeSeok;Hong, Seokhie
    • ETRI Journal / v.42 no.2 / pp.292-304 / 2020
  • As side-channel analysis and machine learning algorithms share the same objective of classifying data, numerous studies have been proposed for adapting machine learning to side-channel analysis. However, a drawback of machine learning algorithms is that their performance depends on human engineering. Therefore, recent studies in the field focus on exploiting deep learning algorithms, which can extract features automatically from data. In this study, we survey recent advances in deep learning-based side-channel analysis. In particular, we outline how deep learning is applied to side-channel analysis, based on deep learning architectures and application methods. Furthermore, we describe its properties when using different architectures and application methods. Finally, we discuss our perspective on future research directions in this field.

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.25-32 / 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process for obtaining context information. However, the rules become complicated as user requirements for the service diversify and data usage increases, so there are technical limitations in maintaining rule-based models and processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. To utilize such machine learning models in a context-aware system, a management process that periodically injects training data is required. A previous study on machine learning-based context-aware systems considered a series of management processes, such as the generation and provision of training data for operating several machine learning models, but the method was limited to the system it was applied to. In this paper, we propose a training data generation method for machine learning models that extends machine learning-based context-aware systems. The proposed method defines a training data generation model that can reflect the requirements of the machine learning models and generates training data for each of them. In the experiment, the training data generation model is defined based on the training data generation schema of a cardiac status analysis model for the elderly in a health status notification service, and training data is generated by applying the model in a real software environment. We also compare the accuracy of machine learning models trained on the generated data to verify its validity.

Load Balancing Scheme for Machine Learning Distributed Environment (기계학습 분산 환경을 위한 부하 분산 기법)

  • Kim, Younggwan;Lee, Jusuk;Kim, Ajung;Hong, Jiman
    • Smart Media Journal / v.10 no.1 / pp.25-31 / 2021
  • As machine learning becomes more common, the development of applications using machine learning is increasing rapidly, as is research on machine learning platforms to support such development. However, despite this growth, research on load balancing suitable for machine learning platforms is insufficient. Therefore, in this paper, we propose a load balancing scheme that can be applied to machine learning distributed environments. The proposed scheme organizes the distributed servers in a level hash table structure and assigns machine learning tasks to servers in consideration of each server's performance. We implemented the distributed servers, ran experiments, and compared the performance with an existing hashing scheme. Compared with the existing scheme, the proposed scheme showed an average speed improvement of 26% and reduced the number of tasks waiting to be assigned to a server by more than 38%.
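The abstract does not detail the level hash table itself, so the following is a hedged sketch of only the performance-aware assignment idea: each incoming task goes to the server with the smallest load relative to its performance score, tracked with a heap. All names and numbers are illustrative, not from the paper.

```python
import heapq

class LoadBalancer:
    """Toy performance-weighted task assignment (illustrative only;
    the paper's level hash table structure is not reproduced here)."""

    def __init__(self, performances):
        # performances: server id -> relative performance score
        self.perf = dict(performances)
        self.load = {sid: 0.0 for sid in self.perf}
        # heap of (load / performance, server id): smallest ratio first
        self.heap = [(0.0, sid) for sid in sorted(self.perf)]
        heapq.heapify(self.heap)

    def assign(self, task_cost=1.0):
        """Assign one task to the least relatively loaded server."""
        _, sid = heapq.heappop(self.heap)
        self.load[sid] += task_cost
        heapq.heappush(self.heap, (self.load[sid] / self.perf[sid], sid))
        return sid
```

With servers `{"s1": 2.0, "s2": 1.0}`, three unit tasks land as two on s1 and one on s2, since s1 absorbs load at half the relative cost.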

Income prediction of apple and pear farmers in Chungnam area by automatic machine learning with H2O.AI

  • Jang, Hyundong;Kim, Sounghun
    • Korean Journal of Agricultural Science / v.49 no.3 / pp.619-627 / 2022
  • In Korea, apples and pears are among the most important agricultural products for farm income. Farmers make decisions at various stages to maximize their income, but they do not always know which option will be the best one. Many previous studies have addressed this problem by predicting farmers' income structure, but researchers are still exploring better approaches. Machine learning is currently gaining attention as one such approach: it is a methodology using algorithms that can learn from data, and its performance keeps improving as computing technology develops. The purpose of this study is to predict the income structure of apple and pear farms using the automatic machine learning solution H2O.AI and to present implications for apple and pear farmers. H2O.AI can save time and effort compared to conventional machine learning toolkits such as scikit-learn, because it searches for the best model automatically. This research yields the following findings. First, apple farmers should increase their gross income to maximize their income, rather than reducing the cost of growing apples; in particular, they mainly have to increase production to obtain more gross income, and as a second-best option, they should decrease labor and other costs. Second, pear farmers should also increase their gross income to maximize their income, but by increasing the price of pears rather than their production; as a second-best option, pear farmers can decrease labor and other costs.

An Introduction of Machine Learning Theory to Business Decisions

  • Kim, Hyun-Soo
    • Journal of the Korean Operations Research and Management Science Society / v.19 no.2 / pp.153-176 / 1994
  • In this paper we introduce machine learning theory to business domains for business decisions. First, we review machine learning in general. We give a new look at a previous framework, the version space approach, and introduce the PAC (probably approximately correct) learning paradigm, which has been developed recently. We illustrate major results of PAC learning with business examples. We then give a theoretical analysis of decision tree induction algorithms within the framework of PAC learning. Finally, we discuss the implications of learning theory for business domains.
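For context, the core PAC sample-complexity result that such analyses build on (a standard bound from the PAC literature, not specific to this paper) states that for a finite hypothesis class H, any learner that outputs a hypothesis consistent with the training data achieves error at most ε with probability at least 1 − δ, once the number of training examples m satisfies

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```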


Machine Learning Approaches to Corn Yield Estimation Using Satellite Images and Climate Data: A Case of Iowa State

  • Kim, Nari;Lee, Yang-Won
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.4 / pp.383-390 / 2016
  • Remote sensing data has been widely used to estimate crop yields with statistical methods such as regression models. Machine learning, an efficient empirical method for classification and prediction, is another approach to crop yield estimation. This paper describes corn yield estimation in Iowa State using four machine learning approaches: SVM (Support Vector Machine), RF (Random Forest), ERT (Extremely Randomized Trees), and DL (Deep Learning), and compares their validation statistics. To examine the seasonal sensitivity of corn yields, three period groups were set up: (1) MJJAS (May to September), (2) JA (July and August), and (3) OC (optimal combination of months). Overall, the DL method showed the highest accuracies in terms of the correlation coefficient for all three period groups. The accuracies were relatively favorable in the OC group, which indicates that an optimal combination of months can be significant in statistical modeling of crop yields. The differences between our predictions and USDA (United States Department of Agriculture) statistics were about 6-8%, which shows that machine learning approaches can be a viable option for crop yield modeling. In particular, DL showed more stable results by overcoming the overfitting problem of generic machine learning methods.
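The validation statistic used above, the correlation coefficient between predicted and observed yields, is Pearson's r; a minimal NumPy sketch (the values in the test are made up, not the paper's data):

```python
import numpy as np

def pearson_r(y_obs, y_pred):
    """Pearson correlation coefficient between observed and predicted yields."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    d_obs = y_obs - y_obs.mean()    # deviations from the mean
    d_pred = y_pred - y_pred.mean()
    return float((d_obs @ d_pred) / np.sqrt((d_obs @ d_obs) * (d_pred @ d_pred)))
```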

A Feasibility Study on the Improvement of Diagnostic Accuracy for Energy-selective Digital Mammography using Machine Learning (머신러닝을 이용한 에너지 선택적 유방촬영의 진단 정확도 향상에 관한 연구)

  • Eom, Jisoo;Lee, Seungwan;Kim, Burnyoung
    • Journal of radiological science and technology / v.42 no.1 / pp.9-17 / 2019
  • Although digital mammography is a representative method for breast cancer detection, it has limitations in detecting and classifying breast tumors due to superimposed structures. Machine learning, a branch of artificial intelligence, is a method for analyzing large amounts of data with complex algorithms, recognizing patterns, and making predictions. In this study, we proposed a technique to improve the diagnostic accuracy of energy-selective mammography by training a machine learning algorithm on dual-energy measurements. Dual-energy images obtained from a photon-counting detector were used as the input data of the machine learning algorithms, and we analyzed the accuracy of the predicted tumor thickness to verify the algorithms. The results showed that the classification accuracy of tumor thickness was above 95% and improved with an increase in input data. Therefore, we expect that the diagnostic accuracy of energy-selective mammography can be improved by using machine learning.