Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao (Dept. of Electric and Communication Eng., Graduate School of Eng., Tohoku University)
  • Honma, Noriyasu (Dept. of Radiological Tech., College of Medical Sciences, Tohoku University)
  • Abe, Kenichi (Dept. of Electric and Communication Eng., Graduate School of Eng., Tohoku University)
  • Published: 2000.10.01

Abstract

This paper demonstrates that the largest Lyapunov exponent $\lambda$ of recurrent neural networks can be controlled by a gradient method. The method minimizes the squared error $e_{\lambda}=(\lambda-\lambda^{obj})^2$, where $\lambda^{obj}$ is the desired exponent. The exponent $\lambda$ can be expressed as a function of the network parameters $P$, such as the connection weights and the thresholds of the neurons' activation functions. Changes of the parameters that minimize the error are then obtained by calculating the gradients $\partial\lambda/\partial P$. In a previous paper, we derived a control method for $\lambda$ via a direct calculation of $\partial\lambda/\partial P$ with a gradient collection through time. This method, however, is computationally expensive for large-scale recurrent networks, and the control is unstable for recurrent networks with chaotic dynamics. The new method proposed in this paper is based on a stochastic relation between the complexity $\lambda$ and the parameters $P$ of the network configuration under a restriction. The new method thus allows us to approximate the gradient collection without time evolution. This approximation requires only $O(N^2)$ run time, while our previous method needs $O(N^{5}T)$ run time for networks with $N$ neurons and $T$ evolution steps. Simulation results show that the new method realizes a "stable" control for large-scale networks with chaotic dynamics.
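The abstract gives the control objective but not the network equations or the stochastic approximation itself. The sketch below is a minimal illustration of the underlying idea only, under assumptions: a discrete-time recurrent network $x_{t+1}=\tanh(Wx_t)$ stands in for the networks studied in the paper, $\lambda$ is estimated from the log-growth of a tangent vector along the orbit, and a crude finite-difference estimate of $\partial\lambda/\partial W$ stands in for both the paper's direct gradient collection and its $O(N^2)$ approximation. The names `largest_lyapunov`, `control_step`, and `lam_obj` are illustrative, not from the paper.

```python
import numpy as np

def largest_lyapunov(W, x0, T=1000, burn=100):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(W x_t)
    by averaging the log-growth of a normalized tangent vector."""
    x = x0.copy()
    v = np.random.default_rng(0).standard_normal(len(x0))
    v /= np.linalg.norm(v)
    lam = 0.0
    for t in range(burn + T):
        # Jacobian of the map at the current state: diag(1 - tanh(Wx)^2) @ W
        J = (1.0 - np.tanh(W @ x) ** 2)[:, None] * W
        x = np.tanh(W @ x)
        v = J @ v
        nv = np.linalg.norm(v)
        v /= nv
        if t >= burn:          # discard transient before averaging
            lam += np.log(nv)
    return lam / T

def control_step(W, x0, lam_obj, lr=0.05, eps=1e-4):
    """One gradient step on e = (lambda - lambda_obj)^2, using a
    finite-difference estimate of d(lambda)/dW (illustration only;
    not the paper's time-evolution-free approximation)."""
    lam = largest_lyapunov(W, x0)
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (largest_lyapunov(Wp, x0) - lam) / eps
    # Chain rule: de/dW = 2 (lambda - lambda_obj) dlambda/dW
    return W - lr * 2.0 * (lam - lam_obj) * grad, lam

# Hypothetical usage: drive a small chaotic network toward lambda_obj = 0.
rng = np.random.default_rng(1)
N = 10
W = rng.standard_normal((N, N)) / np.sqrt(N)
x0 = rng.standard_normal(N)
for step in range(20):
    W, lam = control_step(W, x0, lam_obj=0.0)
    print(f"step {step}: lambda = {lam:.4f}")
```

Note that the finite-difference loop above evaluates $\lambda$ once per weight and so costs on the order of $O(N^4 T)$, which is why a direct gradient collection through time scales poorly; the paper's contribution is precisely to replace this with an $O(N^2)$ approximation that needs no time evolution.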
