Traffic-based reinforcement learning with neural network algorithm in fog computing environment

  • Jung, Tae-Won (Graduate School of Smart Convergence, Kwangwoon University) ;
  • Lee, Jong-Yong (Ingenium College of Liberal Arts, Kwangwoon University) ;
  • Jung, Kye-Dong (Ingenium College of Liberal Arts, Kwangwoon University)
  • Received : 2020.02.02
  • Accepted : 2020.02.14
  • Published : 2020.02.29


Reinforcement learning is a technique that can produce successful and creative solutions in many areas. In this work, reinforcement learning is used to deploy containers from cloud servers to fog servers, learning to maximize the reward obtained by reducing network traffic. The goal is to predict traffic in the network and to optimize a traffic-based fog computing environment spanning cloud, fog, and clients. The reinforcement learning system collects network traffic data from the fog servers and IoT devices. Using the collected traffic data as input, the reinforcement learning neural network can be built from Long Short-Term Memory (LSTM) networks, which learn the time-series traffic data in a network environment that supports fog computing and predict optimized traffic. This paper describes the input and output values of the traffic-based reinforcement learning LSTM neural network, the composition of its nodes, the activation and error functions of the hidden layers, the method used to prevent overfitting, and the optimization algorithm.
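The abstract describes an LSTM that takes collected traffic measurements as input and predicts future traffic. As a minimal illustration of the LSTM recurrence that such a predictor is built on, the sketch below implements a single-unit LSTM cell (the standard forget/input/output gating) stepped over a toy normalized traffic series; the weights, the series values, and all names here are illustrative assumptions, not the paper's actual network, which has multiple nodes and is trained within the reinforcement learning loop.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Single-unit LSTM cell with scalar input (illustrative, untrained)."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # One (input weight, recurrent weight, bias) triple per gate:
        # f = forget, i = input, o = output, g = candidate cell update.
        self.w = {gate: (rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5), 0.0)
                  for gate in ("f", "i", "o", "g")}

    def step(self, x, h, c):
        wf, wi, wo, wg = self.w["f"], self.w["i"], self.w["o"], self.w["g"]
        f = sigmoid(wf[0] * x + wf[1] * h + wf[2])   # how much old state to keep
        i = sigmoid(wi[0] * x + wi[1] * h + wi[2])   # how much new input to admit
        o = sigmoid(wo[0] * x + wo[1] * h + wo[2])   # how much state to expose
        g = math.tanh(wg[0] * x + wg[1] * h + wg[2]) # candidate update
        c = f * c + i * g            # cell state carries the long-term traffic trend
        h = o * math.tanh(c)         # hidden state: the one-step prediction signal
        return h, c

# Toy traffic series (requests per interval, normalized to [0, 1]) -- hypothetical data.
traffic = [0.2, 0.3, 0.5, 0.8, 0.6, 0.4, 0.3]
cell, h, c = LSTMCell(), 0.0, 0.0
for x in traffic:
    h, c = cell.step(x, h, c)
print(round(h, 4))  # untrained output; always lies in (-1, 1) by construction
```

In a trained predictor the gate weights would be fit to minimize the prediction error on past traffic windows; here the cell is left untrained purely to show how the gates combine the current measurement with the carried state.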


