• Title/Summary/Keyword: Recursive neural networks


Design of Incremental FCM-based Recursive RBF Neural Networks Pattern Classifier for Big Data Processing (빅 데이터 처리를 위한 증분형 FCM 기반 순환 RBF Neural Networks 패턴 분류기 설계)

  • Lee, Seung-Cheol;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.65 no.6
    • /
    • pp.1070-1079
    • /
    • 2016
  • In this paper, the design of recursive radial basis function (RBF) neural networks based on incremental fuzzy c-means is introduced for big data processing. RBF neural networks consist of condition, conclusion, and inference phases. A Gaussian function is generally used as the activation function of the condition phase, but in this study incremental fuzzy clustering is used instead, which allows the network to process big data effectively. In the conclusion phase, the connection weights of the network are given as a linear function and are then estimated by recursive least squares. In the inference phase, the final output is obtained by a fuzzy inference method. Machine learning datasets are employed to demonstrate the superiority of the proposed classifier, and the results are described in terms of algorithmic complexity and performance indices.
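
The conclusion-phase weight estimation described above can be sketched as a standard recursive least squares (RLS) update. This is not the authors' code; the function and variable names are assumptions, and the basis activations are stood in by a plain feature vector.

```python
import numpy as np

def rls_update(w, P, phi, target, lam=0.99):
    """One RLS step: update weights w and inverse-correlation matrix P.

    phi  : basis-function activation vector for the current sample
    lam  : forgetting factor (1.0 = ordinary growing-window RLS)
    """
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    err = target - (w.T @ phi).item()       # a priori prediction error
    w = w + k * err                         # weight update
    P = (P - k @ phi.T @ P) / lam           # inverse-correlation update
    return w, P

# Toy usage: recover y = 2*phi1 + 3*phi2 from streaming samples.
rng = np.random.default_rng(0)
w = np.zeros((2, 1))
P = np.eye(2) * 1e3                         # large initial P = weak prior
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] + 3.0 * phi[1]
    w, P = rls_update(w, P, phi, y)
```

Because each step only rank-one-updates `P`, the per-sample cost is O(d²) in the number of basis functions, which is what makes RLS attractive for the streaming, incremental setting the abstract targets.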

A Controlled Neural Networks of Nonlinear Modeling with Adaptive Construction in Various Conditions (다변 환경 적응형 비선형 모델링 제어 신경망)

  • Kim, Jong-Man;Sin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2004.07b
    • /
    • pp.1234-1238
    • /
    • 2004
  • A controlled neural network is proposed in order to model nonlinear environments adaptively and in real time. Its structure is similar to that of recurrent neural networks: a delayed output serves as an input, and a delayed error between the output of the plant and the network serves as a bias input. In addition, the desired values of the hidden layer are computed by an optimal method instead of being transferred by backpropagation, and the weights are updated by recursive least squares (RLS). Consequently, the network is not sensitive to the initial weights or the learning rate, and it converges faster than conventional neural networks. This new network is called the Error Estimated Neural Network. The proposed network can estimate and control nonlinear models in real time. Various experiments demonstrate its performance, and the controller proves effective across a range of system environments.
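
The recurrent input scheme described above (delayed output fed back as an input, delayed plant/network error fed back as a bias) can be sketched as follows. This is an illustrative assumption about the architecture, not the authors' implementation; the toy plant, shapes, and names are all invented for the example.

```python
import numpy as np

def net(x, W1, W2):
    """Tiny two-layer tanh network returning a scalar output."""
    return (W2 @ np.tanh(W1 @ x)).item()

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 4))
W2 = rng.normal(scale=0.5, size=(1, 4))

plant = lambda u: 0.5 * np.sin(u)            # toy nonlinear plant (assumed)
y_prev, e_prev = 0.0, 0.0
for t in range(10):
    u = np.sin(0.3 * t)                      # external input
    # input vector: [external input, delayed output, delayed error, bias]
    x = np.array([u, y_prev, e_prev, 1.0])
    y_hat = net(x, W1, W2)
    e_prev = plant(u) - y_hat                # error fed back at the next step
    y_prev = y_hat
```

In the paper's scheme the weights would additionally be updated each step by RLS; here only the signal routing is shown.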


Nonlinear Neural Networks for Vehicle Modeling Control Algorithm based on 7-Depth Sensor Measurements (7자유도 센서차량모델 제어를 위한 비선형신경망)

  • Kim, Jong-Man;Kim, Won-Sop;Sin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2008.06a
    • /
    • pp.525-526
    • /
    • 2008
  • For modeling a nonlinear vehicle with a seven-degree-of-freedom sensor model, a neural network is proposed that operates adaptively and in real time. Its structure is similar to that of recurrent neural networks: a delayed output serves as an input, and a delayed error between the output of the plant and the network serves as a bias input. In addition, the desired values of the hidden layer are computed by an optimal method instead of being transferred by backpropagation, and the weights are updated by recursive least squares (RLS). Consequently, the network is not sensitive to the initial weights or the learning rate, and it converges faster than conventional neural networks. This new network is called the Error Estimated Neural Network. The proposed network can estimate and control nonlinear models in real time.


A Estimated Neural Networks for Adaptive Cognition of Nonlinear Road Situations (굴곡있는 비선형 도로 노면의 최적 인식을 위한 평가 신경망)

  • Kim, Jong-Man;Kim, Young-Min;Hwang, Jong-Sun;Sin, Dong-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 2002.11a
    • /
    • pp.573-577
    • /
    • 2002
  • A new estimated neural network is proposed in order to measure nonlinear road environments in real time. This new network is called the Error Estimated Neural Network. Its structure is similar to that of recurrent neural networks: a delayed output serves as an input, and a delayed error between the output of the plant and the network serves as a bias input. In addition, the desired values of the hidden layer are computed by an optimal method instead of being transferred by backpropagation, and the weights are updated by recursive least squares (RLS). Consequently, the network is not sensitive to the initial weights or the learning rate, and it converges faster than conventional neural networks. The proposed network can estimate and control nonlinear models in real time. To show its performance, we ran a seven-degree-of-freedom simulation; the controller and driver proved effective at driving a car on nonlinear road surfaces.


An Efficient Recursive Total Least Squares Algorithm for Training Multilayer Feedforward Neural Networks

  • Choi Nakjin;Lim Jun-Seok;Sung Koeng-Mo
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.527-530
    • /
    • 2004
  • We present a recursive total least squares (RTLS) algorithm for multilayer feedforward neural networks. Recursive least squares (RLS) has so far been applied successfully to training multilayer feedforward neural networks, but when the input data contain additive noise, the results from RLS can be biased. Such bias can be avoided by using RTLS. The RTLS algorithm described in this paper gives better performance than the RLS algorithm over a wide range of SNRs, at approximately the same computational complexity of $O(N^{2})$.
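
The input-noise bias the abstract refers to can be demonstrated with a batch total least squares solve via the SVD; the recursive (RTLS) variant updates this solution sample by sample, but the batch form already shows the effect. This is a hedged illustration, not the paper's algorithm; the noise levels and true slope are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x_true = rng.normal(size=n)
y = 3.0 * x_true                                  # true relation: y = 3x
x_noisy = x_true + rng.normal(scale=0.5, size=n)  # additive noise on the *input*

# Ordinary least squares: biased toward zero under input noise.
w_ls = (x_noisy @ y) / (x_noisy @ x_noisy)

# Total least squares: the slope comes from the right singular vector
# of the stacked data matrix [X | y] for the smallest singular value.
Z = np.column_stack([x_noisy, y])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                                        # smallest-singular-value vector
w_tls = -v[0] / v[1]
# w_ls underestimates the slope; w_tls lands much closer to 3.
```

With input noise of variance 0.25 on unit-variance data, the LS slope is attenuated by roughly the factor 1/(1 + 0.25), while the TLS estimate largely removes this bias.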


Single Image Super Resolution Reconstruction Based on Recursive Residual Convolutional Neural Network

  • Cao, Shuyi;Wee, Seungwoo;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.98-101
    • /
    • 2019
  • Deep convolutional neural networks have made very important contributions to single-image super-resolution. Through learning, the features of input images are transformed and combined to establish a nonlinear mapping from low-resolution images to high-resolution images. Some previous methods are difficult to train and take up a lot of memory. In this paper, we propose a simple and compact deep recursive residual network that learns features for single-image super-resolution. Global residual learning and local residual learning are used to reduce the difficulty of training deep neural networks, and the recursive structure controls the number of parameters to save memory. Experimental results show that the proposed method improves on the image quality achieved by previous methods.
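
The recursive residual idea above can be sketched with a single residual unit whose weights are reused at every recursion, so effective depth grows without growing the parameter count. A real model would use convolutions; this dense toy (all names and shapes are assumptions) only shows the structural pattern of local plus global residual learning.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))  # one shared weight matrix

def residual_unit(h):
    return h + np.tanh(h @ W)           # local residual learning

def recursive_block(x, depth):
    h = x
    for _ in range(depth):              # the same W is reused `depth` times
        h = residual_unit(h)
    return x + h                        # global residual connection

x = rng.normal(size=(1, 8))
y = recursive_block(x, depth=5)
# Parameter count stays 8*8 regardless of depth.
```

Sharing `W` across recursions is what "controls the number of parameters to save memory" in the abstract: depth becomes a hyperparameter rather than a multiplier on model size.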


Control of Chaos Dynamics in Jordan Recurrent Neural Networks

  • Jin, Sang-Ho;Kenichi, Abe
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.43.1-43
    • /
    • 2001
  • We propose two methods for controlling the Lyapunov exponents of Jordan-type recurrent neural networks. Both methods are formulated as gradient-based learning. The first method is derived strictly from the definition of the Lyapunov exponents as represented by the state transitions of the recurrent network. It can control the complete set of exponents, called the Lyapunov spectrum; however, it is computationally expensive because of the inherently recursive way in which the changes of the network parameters are calculated. This recursive calculation also makes the control unstable when at least one of the exponents is positive, such as the largest Lyapunov exponent in recurrent networks with chaotic dynamics. To improve stability in the chaotic situation, we propose a non-recursive formulation by approximating ...


Identification of suspension systems using error self recurrent neural network and development of sliding mode controller (오차 자기 순환 신경회로망을 이용한 현가시스템 인식과 슬라이딩 모드 제어기 개발)

  • 송광현;이창구;김성중
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1997.10a
    • /
    • pp.625-628
    • /
    • 1997
  • In this paper, a new neural network and a sliding mode suspension controller are proposed. The neural network is an error self-recurrent neural network. For fast on-line learning, this paper uses the recursive least squares method. The new network converges considerably faster than the backpropagation algorithm and has the advantage of being less affected by poor initial weights and learning rates. The controller for the suspension system is designed using the sliding mode technique based on the newly proposed neural network.


The development of semi-active suspension controller based on error self recurrent neural networks (오차 자기순환 신경회로망 기반 반능동 현가시스템 제어기 개발)

  • Lee, Chang-Goo;Song, Kwang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.5 no.8
    • /
    • pp.932-940
    • /
    • 1999
  • In this paper, a new neural network and a neural-network-based sliding mode controller are proposed. The new network is an error self-recurrent neural network, which uses a recursive least squares method for fast on-line learning. The error self-recurrent network converges considerably faster than the backpropagation algorithm and has the advantage of being less affected by poor initial weights and learning rates. The controller for the suspension system is designed using the sliding mode technique based on the proposed network. To apply the sliding mode control method, the distance between the ground and the vehicle body is estimated at each frame, and the controller is designed according to the estimated neural model. The neural-network-based sliding mode controller shows good performance in computer simulations.


Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao;Homma, Noriyasu;Abe, Kenichi
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.2
    • /
    • pp.124-129
    • /
    • 2002
  • This paper demonstrates that the largest Lyapunov exponent $\lambda$ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The essential core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent $\lambda$ as a function of the network parameters, such as the connection weights and the thresholds of the neural activation functions. In a gradient method, directly minimizing the squared error $(\lambda - \lambda^{obj})^2$, where $\lambda^{obj}$ is the desired exponent value, requires collecting gradients through time, which are given by a recursive calculation from past to present values. This collection is computationally expensive and causes unstable control of the exponent for networks with chaotic dynamics because of chaotic instability. The stochastic formulation derived in this paper approximates the gradient collection without the recursive calculation. This approximation realizes not only a faster calculation of the gradient but also stable control of chaotic dynamics. Owing to the non-recursive calculation, independent of the time evolution, the running time of this approximation grows only as $O(N^2)$, compared with $O(N^5 T)$ for the direct calculation method. Simulation studies also show that the approximation is robust with respect to the network size and that the proposed method can control chaotic dynamics in recurrent neural networks efficiently.
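
The quantity being controlled above can be sketched by estimating the largest Lyapunov exponent $\lambda$ of a small recurrent map $x_{t+1} = \tanh(W x_t)$, averaging the log growth of a tangent vector under the map's Jacobian. The paper's contribution is the stochastic gradient rule that drives $\lambda$ toward $\lambda^{obj}$; this sketch only measures $\lambda$, and all names and gain values are assumptions.

```python
import numpy as np

def largest_lyapunov(W, x0, steps=2000):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(W x_t)."""
    x = x0.copy()
    v = np.ones_like(x) / np.sqrt(len(x))  # tangent (perturbation) vector
    total = 0.0
    for _ in range(steps):
        x = np.tanh(W @ x)
        J = (1.0 - x**2)[:, None] * W      # Jacobian: diag(sech^2) @ W
        v = J @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)              # accumulate log growth rate
        v /= norm                          # renormalize to avoid overflow
    return total / steps

rng = np.random.default_rng(0)
n = 10
W_small = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))  # low gain
W_large = rng.normal(scale=3.0 / np.sqrt(n), size=(n, n))  # high gain
x0 = rng.normal(size=n)
lam_small = largest_lyapunov(W_small, x0)  # contractive: negative exponent
lam_large = largest_lyapunov(W_large, x0)  # typically chaotic: larger exponent
```

The renormalize-and-accumulate loop is the standard Benettin-style estimator; a control method such as the paper's would differentiate this quantity with respect to `W`, which is exactly the gradient collection the stochastic approximation avoids.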