 Title & Authors
Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison
Devi, Swagatika; Jagadev, Alok Kumar; Patnaik, Srikanta
 Abstract
Training neural networks is a complex task of great importance in supervised learning. During training, a set of input-output patterns is presented repeatedly to an artificial neural network (ANN), and the weights of the interconnections between neurons are adjusted until each input yields the desired output. In this paper, a new hybrid algorithm is proposed for the global optimization of connection weights in an ANN. Dynamic swarms converge rapidly during the initial stages of a global search, but the search slows considerably near the global optimum. In contrast, the gradient descent method converges quickly in the neighborhood of the global optimum and attains relatively high accuracy there. The proposed hybrid algorithm therefore combines dynamic particle swarm optimization (DPSO) with backpropagation (BP), yielding the DPSO-BP algorithm, to train the weights of an ANN. We aim to show the superiority, in both running time and solution quality, of DPSO-BP over more standard neural network training algorithms. The algorithms are compared on two different datasets, and the results are simulated.
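The two-phase idea in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: a basic PSO with a decreasing inertia weight stands in for the paper's dynamic swarm, and plain gradient-descent backpropagation then refines the best particle. The network size (2-2-1), the XOR task, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 sigmoid MLP on XOR; all weights flattened into one 9-d vector.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

def unpack(w):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8:9]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):  # mean squared error over the four patterns
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return np.mean((out - y) ** 2)

# Phase 1: PSO global search over the weight vector.
n, dim, iters = 30, 9, 150
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for t in range(iters):
    inertia = 0.9 - 0.5 * t / iters  # decreasing inertia: a crude "dynamic" stand-in
    r1, r2 = rng.random((2, n, dim))
    vel = np.clip(inertia * vel + 1.5 * r1 * (pbest - pos)
                  + 1.5 * r2 * (gbest - pos), -2, 2)
    pos = pos + vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

# Phase 2: backpropagation (gradient descent) fine-tuning from the PSO solution.
w, lr = gbest.copy(), 2.0
for _ in range(2000):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # dL/d(output pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)                # dL/d(hidden pre-activation)
    grad = np.concatenate([(X.T @ d_h).ravel(), d_h.sum(0),
                           (h.T @ d_out).ravel(), d_out.sum(0)])
    w -= lr * grad

print("PSO loss:", float(loss(gbest)), "-> after BP refinement:", float(loss(w)))
```

The split mirrors the abstract's argument: the swarm explores the weight space globally, and gradient descent supplies the fast, accurate local convergence the swarm lacks near the optimum.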
 Keywords
ANN; BP algorithm; DPSO; Global optimization; Gradient descent technique
 Language
English
 References
1.
J. Salerno, “Using the particle swarm optimization technique to train a recurrent neural model,” in Proceedings of IEEE 9th International Conference on Tools with Artificial Intelligence, Newport Beach, CA, pp. 45-49, 1997.

2.
O. L. Mangasarian, “Mathematical programming in neural networks,” ORSA Journal on Computing, vol. 5, no. 4, pp. 349-360, 1993.

3.
C. M. Kuan and K. Hornik, “Convergence of learning algorithms with constant learning rates,” IEEE Transactions on Neural Networks, vol. 2, no. 5, pp. 484-489, 1991.

4.
S. Ergezinger and E. Thomsen, “An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer,” IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 31-42, 1995.

5.
P. J. Angeline, G. M. Saunders, and J. B. Pollack, “An evolutionary algorithm that constructs recurrent neural networks,” IEEE Transactions on Neural Networks, vol. 5, no. 1, pp. 54-65, 1994.

6.
J. Kennedy and R. C. Eberhart, Swarm Intelligence. San Francisco, CA: Morgan Kaufmann, 2001.

7.
M. Gori and A. Tesi, “On the problem of local minima in backpropagation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 1, pp. 76-86, 1992.

8.
M. K. Weir, “A method for self-determination of adaptive learning rates in back propagation,” Neural Networks, vol. 4, no. 3, pp. 371-379, 1991.

9.
F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225-239, 2004.

10.
Y. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” in Proceedings of IEEE World Congress on Computational Intelligence, Anchorage, AK, pp. 69-73, 1998.

11.
J. Kennedy and R. C. Eberhart, “A discrete binary version of the particle swarm algorithm,” in Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, pp. 4104-4108, 1997.

12.
A. Abraham and B. Nath, “ALEC: an adaptive learning framework for optimizing artificial neural networks,” in Computational Science-ICCS 2001. Heidelberg: Springer, pp. 171-180, 2001.

13.
H. V. Gupta, K. L. Hsu, and S. Sorooshian, “Superior training of artificial neural networks using weight-space partitioning,” in Proceedings of International Conference on Neural Networks, Houston, TX, pp. 1919-1923, 1997.

14.
K. S. Tang, C. Y. Chan, K. F. Man, and S. Kwong, “Genetic structure for NN topology and weights optimization,” in Proceedings of the 1st International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA), Sheffield, UK, pp. 250-255, 1995.