• Title/Summary/Keyword: neural networks

Search Results: 4,731

Boosting neural networks with an application to bankruptcy prediction (부스팅 인공신경망을 활용한 부실예측모형의 성과개선)

  • Kim, Myoung-Jong;Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.872-875 / 2009
  • In a bankruptcy prediction model, accuracy is one of the crucial performance measures because of its significant economic impact. Ensembling is a widely used method for improving the performance of classification and prediction models. Two popular ensemble methods, bagging and boosting, have been applied with great success to various machine learning problems, mostly using decision trees as base classifiers. In this paper, we analyze the performance of boosted neural networks for improving on traditional neural networks in bankruptcy prediction tasks. Experimental results on Korean firms indicate that the boosted neural networks outperform traditional neural networks.

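Below is a minimal sketch of the boosting idea the abstract describes: an AdaBoost-style loop with small MLPs as base classifiers. The weighted-resampling workaround, the hidden-layer size, and the placeholder firm data are illustrative assumptions, not the paper's exact boosting variant or its Korean-firm data set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def boost_mlp(X, y, n_rounds=10, hidden=(16,), seed=0):
    """AdaBoost-style boosting with small MLPs as base classifiers.
    y is expected to hold labels -1 (healthy) / +1 (bankrupt)."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                  # per-firm sample weights
    models, alphas = [], []
    for _ in range(n_rounds):
        # MLPClassifier.fit has no sample_weight argument, so emulate the
        # weighting by resampling the training set according to w.
        idx = rng.choice(n, size=n, p=w)
        clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                            random_state=seed).fit(X[idx], y[idx])
        pred = clf.predict(X)
        err = np.sum(w[pred != y])
        if err >= 0.5:                       # no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)       # up-weight misclassified firms
        w /= w.sum()
        models.append(clf)
        alphas.append(alpha)

    def predict(X_new):
        votes = sum(a * m.predict(np.asarray(X_new))
                    for a, m in zip(alphas, models))
        return np.sign(votes)                # weighted majority vote
    return predict
```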

Study on Image Compression Algorithm with Deep Learning (딥 러닝 기반의 이미지 압축 알고리즘에 관한 연구)

  • Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.21 no.4 / pp.156-162 / 2022
  • Image compression plays an important role in encoding and improving various forms of images in the digital era. Recent research has focused on deep learning, one of the most exciting machine learning methods, and has shown that it is a good scheme for analyzing, classifying, and compressing images. Various neural networks can be adapted for image compression, such as deep neural networks, artificial neural networks, recurrent neural networks, and convolutional neural networks. In this review paper, we discuss how to apply deep learning to obtain better image compression with high accuracy, low loss, and high visual quality. Achieving this performance requires applying deep learning methods in a well-justified manner with careful analysis.
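
As a concrete illustration of one architecture family surveyed above, here is a minimal convolutional autoencoder sketch in PyTorch; the channel counts, the 8-channel bottleneck, and the dummy 64x64 batch are arbitrary choices, not a model taken from the review. Dedicated learned-compression methods add quantization and a rate-distortion objective; the plain reconstruction loss here only illustrates the basic encode-decode structure.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the image into a compact latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # H/2
            nn.ReLU(),
            nn.Conv2d(32, 8, kernel_size=4, stride=2, padding=1),   # H/4, 8 channels
        )
        # Decoder: reconstruct the image from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One reconstruction-loss training step on a dummy batch of 64x64 RGB images.
model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(4, 3, 64, 64)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(batch), batch)
loss.backward()
optimizer.step()
```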

Using Neural Networks to Predict the Sense of Touch of Polyurethane Coated Fabrics (신경망이론을 이용한 폴리우레탄 코팅포 촉감의 예측)

  • 이정순;신혜원
    • Journal of the Korean Society of Clothing and Textiles / v.26 no.1 / pp.152-159 / 2002
  • Neural networks are used to predict the sense of touch of polyurethane-coated fabrics. In this study, we used multilayer perceptron (MLP) neural networks in Neural Connection, trained with the back-propagation algorithm. We used 29 polyurethane-coated fabrics to train the networks and 4 samples to test them. The input variables are 17 mechanical properties measured with the KES-FB system, and the output variable is the sense of touch of the polyurethane-coated fabrics. The influence of the MLP activation function, the number of hidden layers, and the number of hidden nodes on the prediction accuracy is investigated. All three factors have some influence on the prediction accuracy; in this work, the tangent function with a double-hidden-layer architecture of 24 and 12 hidden nodes gives the best prediction accuracy with the lowest RMS error. Using neural networks to predict the sense of touch of polyurethane-coated fabrics gives better prediction accuracy than the regression approach used in our previous study.
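
A brief sketch of the best configuration reported above (two hidden layers of 24 and 12 nodes, tanh activation, back-propagation training) using scikit-learn's MLPRegressor; the random arrays below merely stand in for the 29 training and 4 test fabrics, since the actual KES-FB measurements are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X_train, y_train = rng.random((29, 17)), rng.random(29)   # 29 coated fabrics, 17 KES-FB properties
X_test,  y_test  = rng.random((4, 17)),  rng.random(4)    # 4 held-out fabrics

# Two hidden layers (24 and 12 nodes), tanh activation, trained by back-propagation.
mlp = MLPRegressor(hidden_layer_sizes=(24, 12), activation='tanh',
                   max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)

rmse = mean_squared_error(y_test, mlp.predict(X_test)) ** 0.5
print(f"RMS error on the test fabrics: {rmse:.3f}")
```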

A NEW ALGORITHM OF EVOLVING ARTIFICIAL NEURAL NETWORKS VIA GENE EXPRESSION PROGRAMMING

  • Li, Kangshun;Li, Yuanxiang;Mo, Haifang;Chen, Zhangxin
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.9 no.2 / pp.83-89 / 2005
  • In this paper, a new algorithm for learning and evolving artificial neural networks using gene expression programming (GEP) is presented. Compared with traditional algorithms, the new algorithm has advantages in self-learning and self-organization and can find optimal solutions for artificial neural networks more efficiently and elegantly. Simulation experiments show that evolving the weights or thresholds readily finds a good architecture for the artificial neural network and clearly improves on previous evolutionary methods. This is because the GEP algorithm imitates the evolution of natural neural systems, using biological genotype schemes to cross over and mutate genes or chromosomes and generate the next generation, until an optimal architecture of the artificial neural network with evolved weights or thresholds is finally achieved.

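A simplified sketch of the evolve-the-weights idea on a toy regression target, using a plain mutation-and-selection loop. The paper's GEP gene/chromosome encoding is considerably richer; this only illustrates the general evolutionary search, and the 1-8-1 architecture, population size, and mutation scale are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])                          # toy nonlinear target

def forward(w, X):
    """1-8-1 tanh network; w is a flat genome of 8 + 8 + 8 + 1 = 25 weights."""
    W1, b1 = w[:8].reshape(1, 8), w[8:16]
    W2, b2 = w[16:24].reshape(8, 1), w[24]
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)    # negative MSE: larger is fitter

pop = rng.normal(0, 1, (50, 25))                 # population of weight genomes
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the 10 fittest genomes
    children = np.repeat(parents, 5, axis=0)
    children += rng.normal(0, 0.1, children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children[:40]])        # elitism + offspring

best = pop[np.argmax([fitness(w) for w in pop])]
print("best MSE found by evolution:", -fitness(best))
```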

Tuning Learning Rate in Neural Network Using Fuzzy Model (퍼지 모델을 이용한 신경망의 학습률 조정)

  • 라혁주;서재용;김성주;전홍태
    • Proceedings of the IEEK Conference / 2003.07d / pp.1239-1242 / 2003
  • Neural networks are a well-known model for learning nonlinear functions and nonlinear systems. Their key idea is that the difference between the actual output and the desired output is used to update the weights, usually by the gradient descent method. During training, if the learning rate is too large, the network can hardly be guaranteed to converge; on the other hand, if the learning rate is too small, training takes a long time. Therefore, one major problem in using neural networks is to decrease the training time while still guaranteeing convergence. In this paper, we apply a fuzzy logic model to neural networks to calibrate the learning rate. This method tunes the learning rate dynamically according to the error and demonstrates the optimization of training.

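An illustrative sketch of dynamic learning-rate tuning from the error signal, with three hand-written rules standing in for the paper's fuzzy model; the thresholds and scaling factors are assumptions, and the quadratic loss is only a stand-in for a neural network's training error.

```python
import numpy as np

def tune_learning_rate(lr, err, prev_err):
    """Crude rule base: cut the step size when error rises (step too large),
    grow it when progress has almost stalled, otherwise leave it alone."""
    if err > prev_err:
        return lr * 0.5                      # diverging -> shrink quickly
    if (prev_err - err) / prev_err < 0.01:
        return lr * 1.1                      # barely improving -> speed up
    return lr                                # healthy progress

# Gradient descent on a toy quadratic loss f(w) = ||Aw - b||^2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
w = np.zeros(5)
lr, prev_err = 0.01, np.inf
for epoch in range(100):
    grad = 2 * A.T @ (A @ w - b)
    w -= lr * grad
    err = np.sum((A @ w - b) ** 2)
    lr = tune_learning_rate(lr, err, prev_err)   # adapt the rate each epoch
    prev_err = err
print("final error:", err, "final learning rate:", lr)
```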

A Study on Cold Forging Design Using Neural Networks (신경망을 이용한 냉간 단조품 설계에 관한 연구)

  • 김영호;서윤수;박종옥
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.04b / pp.178-182 / 1995
  • The technique of neural networks is applied to a cold forging design system. A user can select more desirable plans in cold forging design by being advised with expert opinions captured by the neural networks. The neural networks are trained on the three features that matter most in cold forging design: undercut, narrow hole, and sharp corner. Using the neural networks, the cold forging design system built in this study determines the forming feasibility of various product shapes. Useful results can be obtained with the system.

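A minimal sketch of the kind of feasibility classifier described above, mapping the three design features (undercut, narrow hole, sharp corner) to a formable / not-formable verdict; the numeric feature encoding and the tiny labelled set are hypothetical, not the paper's expert-derived training data.

```python
from sklearn.neural_network import MLPClassifier

# Each row: [undercut severity, narrow-hole severity, sharp-corner severity]
X = [[0.0, 0.1, 0.0], [0.8, 0.0, 0.2], [0.1, 0.9, 0.7], [0.0, 0.2, 0.1]]
y = [1, 0, 0, 1]   # 1 = formable by cold forging, 0 = not formable

clf = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Query the forming feasibility of a new candidate design.
print(clf.predict([[0.2, 0.1, 0.3]]))
```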

On the Identification of a Chaotic System using Chaotic Neural Networks (카오틱 신경망을 이용한 카오틱 시스템의 모사)

  • 장창화;홍수동;김상희
    • Proceedings of the IEEK Conference / 1998.10a / pp.1297-1300 / 1998
  • In this paper, we discuss the identification of a chaotic system using chaotic neural networks. Because of the self-connections within each neuron and the interconnections between neurons, chaotic neural network identifiers show good performance on highly nonlinear dynamics such as chaotic systems. Simulation results are presented to demonstrate the robustness of the chaotic neural network identifier.

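A small sketch of chaotic-system identification framed as one-step-ahead prediction on the logistic map. Note the substitution: the paper uses chaotic neural networks with self-connections, whereas a plain feedforward MLP is used here purely to show the identification setup on a chaotic series.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Chaotic logistic map: x(t+1) = 4 * x(t) * (1 - x(t))
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Identification task: learn the map x(t) -> x(t+1) from observed data.
X, y = x[:-1].reshape(-1, 1), x[1:]
model = MLPRegressor(hidden_layer_sizes=(20,), activation='tanh',
                     max_iter=5000, random_state=0).fit(X[:500], y[:500])

err = np.mean((model.predict(X[500:]) - y[500:]) ** 2)
print("one-step prediction MSE on unseen samples:", err)
```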

Exponential stability of stochastic static neutral neural networks with varying delays

  • Sun, Xiaoqi
    • Computers and Concrete / v.30 no.4 / pp.237-242 / 2022
  • This paper is concerned with exponential stability in mean square for stochastic static neutral neural networks with varying delays. By using the Lyapunov functional method together with stochastic analysis techniques, sufficient conditions guaranteeing mean-square exponential stability of the neural networks are obtained, and some results from the related literature are extended.
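
For orientation, a generic neutral-type stochastic neural network with time-varying delay and the standard definition of mean-square exponential stability are written out below; this is the usual form such results take, not the paper's exact system or its sufficient conditions.

```latex
% Generic neutral-type stochastic neural network with time-varying delay \tau(t):
d\bigl[x(t) - D\,x(t-\tau(t))\bigr]
    = \bigl[-A\,x(t) + f\bigl(W\,x(t-\tau(t))\bigr) + J\bigr]\,dt
    + \sigma\bigl(t,\,x(t),\,x(t-\tau(t))\bigr)\,d\omega(t)

% Mean-square exponential stability: there exist M \ge 1 and \lambda > 0 such that
\mathbb{E}\,\lVert x(t)\rVert^{2}
    \le M\,e^{-\lambda t}\sup_{-\bar{\tau}\le s\le 0}\mathbb{E}\,\lVert\phi(s)\rVert^{2},
    \qquad t \ge 0 .
```

Here \phi is the initial function on [-\bar{\tau}, 0], \omega is a Wiener process, and A, D, W, J, \sigma are the system data; a Lyapunov functional argument produces matrix conditions under which the bound above holds.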

The Use of Artificial Neural Networks in the Monitoring of Spot Weld Quality (인공신경회로망을 이용한 저항 점용접의 품질감시)

  • 임태균;조형석;장희석
    • Journal of Welding and Joining / v.11 no.2 / pp.27-41 / 1993
  • The estimation of nugget sizes was attempted using the artificial neural network method. An artificial neural network is a highly simplified model of the biological nervous system, composed of a large number of elemental processors connected like biological neurons. Although the elemental processors have only simple computational functions, because they are massively connected they can describe any complex functional relationship between an input-output pair in an autonomous manner. The electrode head movement signal, which is a good indicator of the corresponding nugget size, was measured for each test specimen. The sampled electrode movement data and the corresponding nugget sizes were fed into the artificial neural network as input-output pairs to train the network. In the training phase, the artificial neural network constructs a functional relationship between the input-output pairs autonomously by adjusting its set of weights. In the production (estimation) phase, when new inputs are sampled and presented, the artificial neural network produces appropriate outputs (estimates of the nugget size) based upon the transfer characteristics learned during training. Experimental verification of the proposed estimation method was done by actual destructive testing of welds. The results predicted by the artificial neural network were found to be in good agreement with the actual nugget sizes. The results are quite promising in that real-time estimation of the invisible nugget size can be achieved by analyzing the process variables without any conventional destructive testing of welds.

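A minimal sketch of the two-phase workflow described above: train on destructively measured welds, then estimate the nugget size of new welds from the electrode movement signal. The two summary features and the random traces are hypothetical stand-ins for the paper's sampled electrode displacement signals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def movement_features(signal):
    """Summarise one electrode displacement trace (hypothetical feature choice)."""
    return [signal.max(), float(np.argmax(signal)) / len(signal)]

rng = np.random.default_rng(0)
signals = rng.uniform(0.0, 0.5, (40, 200))           # fake displacement traces [mm]
nugget_mm = rng.uniform(3.0, 7.0, 40)                # destructively measured nugget sizes

# Training phase: learn the mapping from movement features to nugget size.
X = np.array([movement_features(s) for s in signals])
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X, nugget_mm)

# Production (estimation) phase: estimate the invisible nugget size of a new weld.
new_trace = rng.uniform(0.0, 0.5, 200)
print("estimated nugget size [mm]:", model.predict([movement_features(new_trace)]))
```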

Neural Network Architecture Optimization and Application

  • Liu, Zhijun;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems Conference Proceedings / 1999.10a / pp.214-217 / 1999
  • In this paper, a genetic algorithm (GA) is implemented to search for the optimal structure of neural networks (i.e., the kind of network and the numbers of inputs and hidden neurons) used to approximate a given nonlinear function. Two kinds of neural networks, multilayer feedforward networks [1] and time-delay neural networks (TDNN) [2], are considered. The synaptic weights of each neural network in each generation are obtained by the associated training algorithms. Simulation results for nonlinear function approximation are given, and some future improvements are outlined.

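A simplified sketch of GA-based structure search on a toy function-approximation task: each genome is just a hidden-layer width and fitness is the validation error of the trained network. The paper's genomes also encode the network kind (feedforward vs. TDNN) and the number of inputs; those extra genes are omitted here for brevity, and the population size and mutation range are arbitrary assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (300, 1))
y = np.sin(X[:, 0]) * np.exp(-X[:, 0] ** 2)          # nonlinear target function
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]

def fitness(hidden):
    """Train a feedforward net of the given width; smaller validation error = fitter."""
    net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)
    return -np.mean((net.predict(X_va) - y_va) ** 2)

pop = list(rng.integers(2, 30, size=6))               # genomes: hidden-layer widths
for gen in range(5):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:3]                               # selection
    children = [max(2, p + int(rng.integers(-4, 5))) for p in parents]  # mutation
    pop = parents + children

best = max(pop, key=fitness)
print("best hidden-layer width found:", best)
```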