A Routing Algorithm based on Deep Reinforcement Learning in SDN

  • 이성근 (Department of Multimedia Engineering, Sunchon National University)
  • Received : 2021.10.22
  • Accepted : 2021.12.17
  • Published : 2021.12.31

Abstract

This paper proposes a routing algorithm that determines the optimal path in software-defined networks using deep reinforcement learning. The deep reinforcement learning model is based on DQN; its inputs are the current network state and the source and destination nodes, and its output is the route list from the source to the destination. The routing task is defined as a discrete control problem, and delay, bandwidth, and loss rate are considered as the quality-of-service parameters for routing. The routing agent classifies each flow into the appropriate service class according to the user's quality-of-service profile, and derives the service class that each link can provide from the current network state collected from the SDN. Based on this converted information, the agent learns to select a route from the source to the destination that satisfies the required service level. The simulation results indicate that, after a certain number of episodes, the proposed algorithm selects the correct path and learning is performed successfully.
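As a rough illustration of the kind of agent described above, the following is a minimal DQN routing sketch on a toy topology. Everything in it is an assumption for illustration only: the topology in LINKS, the service_class thresholds, REQUIRED_CLASS, the reward values, and the hyperparameters are not taken from the paper, and the action is simplified to a single next-hop choice, whereas the paper's model returns a complete route list and uses richer state collected by the SDN controller.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Toy topology: directed links with per-link QoS (delay in ms, bandwidth in Mbps, loss in %).
LINKS = {
    (0, 1): (10, 100, 0.1), (1, 0): (10, 100, 0.1),
    (1, 2): (5, 50, 0.5),   (2, 1): (5, 50, 0.5),
    (0, 3): (20, 200, 0.05), (3, 0): (20, 200, 0.05),
    (3, 2): (8, 150, 0.2),  (2, 3): (8, 150, 0.2),
}
NUM_NODES = 4
REQUIRED_CLASS = 1          # required service level of the flow (assumed)
GAMMA, EPS, BATCH = 0.9, 0.2, 32


def service_class(delay, bw, loss):
    """Map raw link QoS to a discrete service class (0 = best); thresholds are assumed."""
    if delay <= 10 and bw >= 100 and loss <= 0.1:
        return 0
    if delay <= 20 and bw >= 50 and loss <= 0.5:
        return 1
    return 2


def encode_state(node, dst):
    """State = one-hot of the current node concatenated with one-hot of the destination."""
    s = torch.zeros(2 * NUM_NODES)
    s[node] = 1.0
    s[NUM_NODES + dst] = 1.0
    return s


def step(node, action, dst):
    """Environment step: penalize invalid hops and links below the required class."""
    if (node, action) not in LINKS:
        return node, -5.0, False                    # not an adjacent node
    cls = service_class(*LINKS[(node, action)])
    reward = -1.0 - (2.0 if cls > REQUIRED_CLASS else 0.0)
    if action == dst:
        reward += 10.0                              # reached the destination
    return action, reward, action == dst


# Q-network: maps the state to one Q-value per candidate next hop.
qnet = nn.Sequential(nn.Linear(2 * NUM_NODES, 64), nn.ReLU(), nn.Linear(64, NUM_NODES))
opt = optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=5000)                         # experience replay buffer

for episode in range(500):
    node, dst = 0, 2                                # fixed source/destination for the sketch
    for _ in range(8):                              # cap the path length per episode
        state = encode_state(node, dst)
        if random.random() < EPS:                   # epsilon-greedy exploration
            action = random.randrange(NUM_NODES)
        else:
            with torch.no_grad():
                action = int(qnet(state).argmax())
        nxt, reward, done = step(node, action, dst)
        replay.append((state, action, reward, encode_state(nxt, dst), done))
        node = nxt

        if len(replay) >= BATCH:                    # one DQN update per environment step
            batch = random.sample(replay, BATCH)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            d = torch.tensor([float(b[4]) for b in batch])
            q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = r + GAMMA * (1.0 - d) * qnet(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        if done:
            break
```

Keeping the action space fixed at the number of nodes and penalizing non-adjacent next hops is only one way to frame the discrete control problem; the paper's agent instead outputs the whole route list, so its output layer and reward shaping would differ from this sketch.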

Acknowledgement

This work was carried out as a basic research project supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (Ministry of Education) in 2019 (No. 2019R1I1A3A0106291).
