• Title/Summary/Keyword: constant learning rate


Self-Organizing Feature Map with Constant Learning Rate and Binary Reinforcement (일정 학습계수와 이진 강화함수를 가진 자기 조직화 형상지도 신경회로망)

  • Jo, Seong-Won;Seok, Jin-Uk
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.1 / pp.180-188 / 1995
  • A modified Kohonen self-organizing feature map (SOFM) algorithm with a binary reinforcement function and a constant learning rate is proposed. In contrast to the time-varying adaptation gain of the original Kohonen SOFM algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of the SOFM caused by the constant learning rate. Since the proposed algorithm does not require complicated multiplications, its digital hardware implementation is much easier than that of the original SOFM.
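
  The abstract does not spell out the binary reinforcement function, but its hardware claim (no complicated multiplication) suggests a sign-based step of fixed size. A minimal sketch under that assumption, with `alpha` and the neighborhood `width` as illustrative values rather than the paper's settings:

  ```python
  import numpy as np

  def sofm_step(weights, x, alpha=0.1, width=1):
      """One update of a 1-D SOFM with a constant learning rate.

      weights: (n_nodes, dim) codebook; x: (dim,) input sample.
      The sign-based step (add or subtract a fixed alpha) is an
      assumed reading of the paper's binary reinforcement; it needs
      only adders, not multipliers, in digital hardware.
      """
      dists = np.linalg.norm(weights - x, axis=1)
      winner = int(np.argmin(dists))  # best-matching unit
      lo, hi = max(0, winner - width), min(len(weights), winner + width + 1)
      # constant-gain, binary-reinforced update over the winner's neighborhood
      weights[lo:hi] += alpha * np.sign(x - weights[lo:hi])
      return weights
  ```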


The dynamics of self-organizing feature map with constant learning rate and binary reinforcement function (시불변 학습계수와 이진 강화 함수를 가진 자기 조직화 형상지도 신경회로망의 동적특성)

  • Seok, Jin-Uk;Jo, Seong-Won
    • Journal of Institute of Control, Robotics and Systems / v.2 no.2 / pp.108-114 / 1996
  • We present proofs of the stability and convergence of a self-organizing feature map (SOFM) neural network with a time-invariant learning rate and a binary reinforcement function. One of the major problems in SOFM neural networks concerns the learning rate, which, like the Kalman filter gain in stochastic control, is a monotone decreasing function converging to 0 so as to satisfy the minimum variance property. In this paper, we show the stability and convergence of the SOFM neural network with a time-invariant learning rate. The analysis of the proposed algorithm shows that stability and convergence are guaranteed, with exponential stability and weak convergence properties as well.
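
  For reference, the stochastic-approximation background the abstract points to: a Kalman-filter-style gain sequence η(t) is chosen to satisfy the classical conditions

  ```latex
  \sum_{t=1}^{\infty} \eta(t) = \infty, \qquad \sum_{t=1}^{\infty} \eta(t)^{2} < \infty,
  ```

  which force η(t) → 0. The paper instead fixes η(t) = η₀ > 0 and shows that stability and (weak) convergence still hold, with an exponential rate.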


Active Random Noise Control using Adaptive Learning Rate Neural Networks

  • Sasaki, Minoru;Kuribayashi, Takumi;Ito, Satoshi
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.941-946 / 2005
  • In this paper, active random noise control using adaptive learning rate neural networks is presented. The adaptive learning rate strategy increases the learning rate by a small constant if the current partial derivative of the objective function with respect to the weight and the exponential average of the previous derivatives have the same sign; otherwise the learning rate is decreased in proportion to its value. The adaptive learning rate attempts to keep the learning step size as large as possible without leading to oscillation, so that the cost function is minimized rapidly and training time is decreased. Numerical simulations and experiments of active random noise control with the transfer function of the error path are performed to validate the convergence properties of the adaptive learning rate neural networks. Control results show that the adaptive learning rate neural network control structure can outperform linear controllers and a conventional neural network controller for active random noise control.
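
  The rule described here is the delta-bar-delta family of per-weight adaptive rates. A minimal sketch, with the constants `kappa`, `phi`, and `theta` as assumed values rather than the paper's settings:

  ```python
  import numpy as np

  def adaptive_lr_step(w, lr, dbar, grad, kappa=1e-3, phi=0.1, theta=0.7):
      """One per-weight adaptive-learning-rate update.

      w, lr, dbar, grad all share one shape; dbar is an exponential
      average of past gradients. The rate grows additively when the
      current gradient agrees in sign with dbar and shrinks by a
      proportion of its value otherwise, as the abstract describes.
      """
      agree = grad * dbar > 0
      lr = np.where(agree, lr + kappa, lr * (1.0 - phi))  # grow or shrink
      w = w - lr * grad                                   # gradient descent step
      dbar = theta * dbar + (1.0 - theta) * grad          # update the average
      return w, lr, dbar
  ```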


Hybrid Neural Networks for Pattern Recognition

  • Kim, Kwang-Baek
    • Journal of information and communication convergence engineering / v.9 no.6 / pp.637-640 / 2011
  • Hybrid neural networks have characteristics such as fast learning times, generality, and simplicity, and are mainly used to classify learning data and to model nonlinear systems. The middle layer of a hybrid neural network clusters the learning vectors by grouping homogeneous vectors in the same cluster. In the clustering procedure, the homogeneity between learning vectors is represented as the distance between the vectors: if the distances between a learning vector and all vectors in a cluster are smaller than a given constant radius, the learning vector is added to the cluster. However, the use of a constant radius in clustering is the primary source of errors and therefore decreases the recognition success rate. To improve the recognition success rate, we propose an enhanced hybrid network that organizes the middle layer effectively by using an enhanced ART1 network that adjusts the vigilance parameter dynamically according to the similarity between patterns. Experiments on a large number of calling card images show that the proposed algorithm greatly improves character extraction and recognition compared with conventional recognition algorithms.
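
  A sketch of the middle-layer clustering decision; the dynamic-vigilance rule shown (widening the radius with the cluster's own spread) is an assumed stand-in for the paper's enhanced ART1 update, not its exact form:

  ```python
  import numpy as np

  def assign_to_cluster(x, clusters, base_radius=0.5):
      """Place learning vector x into the first cluster whose members
      are all within a radius, or open a new cluster.

      clusters: list of lists of vectors. A fixed base_radius mimics
      the constant-radius scheme the paper criticizes; the spread term
      is an assumed dynamic adjustment in the spirit of enhanced ART1.
      """
      for members in clusters:
          centroid = np.mean(members, axis=0)
          spread = np.mean([np.linalg.norm(v - centroid) for v in members])
          radius = base_radius + spread  # assumed dynamic vigilance
          if all(np.linalg.norm(x - v) < radius for v in members):
              members.append(x)
              return clusters
      clusters.append([x])  # no cluster accepted x: start a new one
      return clusters
  ```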

A Case Study on Learning of Fundamental Idea of Calculus in Constant Acceleration Movement (등가속도 운동에서 미적분의 기본 아이디어 학습 과정에 관한 사례연구)

  • Shin Eun-Ju
    • Journal of Educational Research in Mathematics / v.16 no.1 / pp.59-78 / 2006
  • As theoretical background for this research, the literature focusing on the rationale for teaching and learning calculus in connection with science was investigated, and teaching and learning material connecting mathematics and science in calculus was developed. Based on a case study using this material, the research questions were analyzed in depth. Students could understand mean velocity, instantaneous velocity, and acceleration in the process of experimenting with constant acceleration movement. Students could also grasp the fundamental ideas that instantaneous velocity is the slope of the tangent line at a point on the time-displacement graph and that the rate of distance change equals the rate of area change under the time-velocity graph.
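
  In symbols, the two fundamental ideas the students worked with, for constant acceleration a:

  ```latex
  s(t) = \tfrac{1}{2} a t^{2}, \qquad
  v(t) = \frac{ds}{dt} = a t, \qquad
  \int_{0}^{T} v(t)\, dt = \tfrac{1}{2} a T^{2} = s(T),
  ```

  so the instantaneous velocity is the slope of the time-displacement graph, and the displacement is recovered as the area under the time-velocity graph.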


STOCHASTIC GRADIENT METHODS FOR L2-WASSERSTEIN LEAST SQUARES PROBLEM OF GAUSSIAN MEASURES

  • YUN, SANGWOON;SUN, XIANG;CHOI, JUNG-IL
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.25 no.4 / pp.162-172 / 2021
  • This paper proposes stochastic methods to find an approximate solution to the L2-Wasserstein least squares problem of Gaussian measures. The variable of the problem lies in a set of positive definite matrices. The first proposed stochastic method is a classical stochastic gradient method combined with projection, and the second is a variance-reduced method with projection. Their global convergence is analyzed using the framework of proximal stochastic gradient methods. The convergence of the classical stochastic gradient method combined with projection is established using a diminishing learning rate rule, in which the learning rate decreases as the epoch increases, whereas that of the variance-reduced method with projection can be established using a constant learning rate. The numerical results show that the present algorithms with a proper learning rate outperform a gradient projection method.
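
  A minimal sketch of the first method's structure (stochastic gradient plus projection onto positive definite matrices, with the diminishing learning rate rule); `stoch_grad` is a caller-supplied stub for the stochastic gradient of the Wasserstein least squares objective, which the abstract does not spell out:

  ```python
  import numpy as np

  def project_pd(S, eps=1e-8):
      """Project a symmetric matrix onto the positive definite cone by
      eigenvalue clipping (a standard choice, assumed here)."""
      S = (S + S.T) / 2.0
      vals, vecs = np.linalg.eigh(S)
      return vecs @ np.diag(np.maximum(vals, eps)) @ vecs.T

  def projected_sgd(X0, stoch_grad, epochs=100, c=1.0):
      """Projected stochastic gradient with a diminishing rate c/epoch,
      the rule under which the abstract establishes convergence."""
      X = X0
      for epoch in range(1, epochs + 1):
          lr = c / epoch  # learning rate decreases as the epoch increases
          X = project_pd(X - lr * stoch_grad(X))
      return X
  ```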

Maximization of Zero-Error Probability for Adaptive Channel Equalization

  • Kim, Nam-Yong;Jeong, Kyu-Hwa;Yang, Liuqing
    • Journal of Communications and Networks / v.12 no.5 / pp.459-465 / 2010
  • A new blind equalization algorithm based on maximizing the probability that the constant modulus errors concentrate near zero is proposed. The cost function of the proposed algorithm maximizes the probability that the equalizer output power equals the constant modulus of the transmitted symbols. Two blind information-theoretic learning (ITL) algorithms based on constant modulus error signals are also introduced: one minimizing the Euclidean probability density function distance and the other minimizing the constant modulus error entropy. The relations between the algorithms and their characteristics are investigated, and their performance is compared and analyzed through simulations in multi-path channel environments. The proposed algorithm has lower computational complexity and faster convergence than the other ITL algorithms based on a constant modulus error. The error samples of the proposed blind algorithm exhibit more concentrated density functions and superior error rate performance in severe multi-path channel environments compared with the other algorithms.
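
  A sketch of one stochastic tap update in the spirit of the proposed cost: the constant modulus error is weighted by a Gaussian kernel centered at zero, so samples whose error is already near zero dominate the adaptation. Real-valued signals are assumed for brevity, and the exact cost in the paper may differ:

  ```python
  import numpy as np

  def zep_step(w, x, R2, mu=1e-3, sigma=1.0):
      """One blind equalizer update maximizing a kernel density of the
      constant modulus error at zero (a zero-error-probability sketch).

      w: equalizer taps; x: received samples in the tap delay line;
      R2: constant modulus of the transmitted symbols.
      """
      y = np.dot(w, x)                        # equalizer output
      e = R2 - y ** 2                         # constant modulus error (real case)
      k = np.exp(-e ** 2 / (2 * sigma ** 2))  # kernel weight near e = 0
      # ascent direction of the kernelized cost; constants folded into mu
      return w + mu * k * e * y * x
  ```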

Implementation of Speed Sensorless Induction Motor drives by Fast Learning Neural Network using RLS Approach

  • Kim, Yoon-Ho;Kook, Yoon-Sang
    • Proceedings of the KIPE Conference / 1998.10a / pp.293-297 / 1998
  • This paper presents a newly developed speed sensorless drive using an RLS-based neural network training algorithm. The proposed algorithm has a time-varying learning rate, whereas the well-known back-propagation algorithm based on gradient descent has a constant learning rate. The number of iterations the new algorithm requires to converge is less than that of the back-propagation algorithm. The theoretical analysis and experimental results verifying the effectiveness of the proposed control strategy are described.
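
  For context, the RLS recursion that supplies the time-varying gain; a linear-in-parameters model is used here for brevity, whereas the paper applies the idea to neural network training:

  ```python
  import numpy as np

  def rls_step(w, P, x, d, lam=0.99):
      """One recursive least squares update for a model w @ x ~ d.

      The gain vector k shrinks automatically as P shrinks, acting as
      the time-varying learning rate that replaces back-propagation's
      constant rate. lam is the forgetting factor.
      """
      Px = P @ x
      k = Px / (lam + x @ Px)           # time-varying gain
      e = d - w @ x                     # a priori error
      w = w + k * e                     # weight update
      P = (P - np.outer(k, Px)) / lam   # inverse correlation matrix update
      return w, P
  ```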


Improved Error Backpropagation by Elastic Learning Rate and Online Update (가변학습율과 온라인모드를 이용한 개선된 EBP 알고리즘)

  • Lee, Tae-Seung;Park, Ho-Jin
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.568-570 / 2004
  • The error-backpropagation (EBP) algorithm for training multilayer perceptrons (MLPs) is known for its robustness and economical efficiency. However, the algorithm has difficulty selecting an optimal constant learning rate, which results in non-optimal learning speed and inflexible operation on working data. To remedy this non-optimality, this paper introduces into the original EBP algorithm an elastic learning rate that guarantees convergence of learning, realized locally by online update of the MLP parameters. The results of experiments on a speaker verification system with a Korean speech database are presented and discussed to demonstrate the improvement of the proposed method over the original EBP algorithm in terms of learning speed and flexibility on working data.
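
  A minimal sketch of online-mode training with an elastic rate; the grow/shrink rule shown (a bold-driver-style adjustment keyed to the loss trend) is an assumed form of the paper's elastic learning rate, and `grad_fn`/`loss_fn` are caller-supplied stubs:

  ```python
  def online_epoch(w, samples, lr, grad_fn, loss_fn, up=1.05, down=0.5):
      """One epoch of online (per-sample) EBP with an elastic rate.

      Weights are updated after every sample rather than per batch;
      the rate grows while the epoch loss keeps falling and shrinks
      when it rises (assumed elastic-rate rule).
      """
      prev = loss_fn(w, samples)
      for xi, yi in samples:
          w = w - lr * grad_fn(w, xi, yi)  # online update per sample
      cur = loss_fn(w, samples)
      lr = lr * up if cur < prev else lr * down  # elastic adjustment
      return w, lr
  ```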


Study on semi-supervised local constant regression estimation

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.23 no.3 / pp.579-585 / 2012
  • Many different semi-supervised learning algorithms have been proposed for use with unlabeled data, but most of them focus on classification problems. In this paper we propose a semi-supervised regression algorithm called the semi-supervised local constant estimator (SSLCE), based on the local constant estimator (LCE), and reveal the asymptotic properties of the SSLCE. We also show that the SSLCE has a faster convergence rate than the LCE when a well-chosen weighting factor is employed. Our experiment with synthetic data shows that the SSLCE can improve performance with unlabeled data, and we recommend its use with a proper amount of unlabeled data.
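
  A sketch of the estimator pair: the LCE is the Nadaraya-Watson local constant fit, and the semi-supervised variant below blends in pseudo-responses for unlabeled points with a weighting factor w; this blending scheme is an assumed stand-in for the paper's exact SSLCE:

  ```python
  import numpy as np

  def lce(x0, X, y, h=0.5):
      """Local constant (Nadaraya-Watson) estimate at x0 with a
      Gaussian kernel of bandwidth h."""
      k = np.exp(-((X - x0) ** 2) / (2 * h ** 2))
      return np.sum(k * y) / np.sum(k)

  def sslce(x0, Xl, yl, Xu, h=0.5, w=0.3):
      """Semi-supervised local constant estimate (sketch).

      Unlabeled inputs Xu receive pseudo-responses from the labeled
      LCE fit and enter the final fit down-weighted by w, the
      weighting factor whose choice drives the faster rate.
      """
      yu = np.array([lce(x, Xl, yl, h) for x in Xu])  # pseudo-responses
      X = np.concatenate([Xl, Xu])
      y = np.concatenate([yl, yu])
      k = np.exp(-((X - x0) ** 2) / (2 * h ** 2))
      k[len(Xl):] *= w  # down-weight the unlabeled contribution
      return np.sum(k * y) / np.sum(k)
  ```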