Communication Failure Resilient Improvement of Distributed Neural Network Partitioning and Inference Accuracy

A Communication-Failure-Resilient Distributed Neural Network Partitioning and Inference Accuracy Improvement Technique

  • Received : 2020.12.18
  • Accepted : 2021.02.10
  • Published : 2021.02.28


Recently, there is a growing need to run high-end neural network applications with huge computational overhead on resource-constrained embedded systems such as wearable devices. While this computational overhead can be alleviated by distributed neural networks running on multiple separate devices, existing distributed neural network techniques generate heavy traffic between the devices and are therefore very vulnerable to communication failures. These drawbacks make existing techniques inapplicable to wearable devices, which are connected to each other through unstable, low-data-rate communication media such as human body communication. In this paper, we therefore propose a distributed neural network partitioning technique that is resilient to communication failures. Furthermore, we show that the proposed technique also improves inference accuracy even when no communication failure occurs, thanks to the improved network partitioning. Through comparative experiments with a real-life neural network application, we verify that the proposed technique outperforms the existing state-of-the-art distributed neural network technique in terms of both accuracy and resilience to communication failures.
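As background for the abstract above, the general idea of distributed inference — partitioning a layer's computation across devices, aggregating the partial results, and degrading gracefully when a transmission is lost — can be sketched as follows. This is a minimal illustrative example, not the paper's actual partitioning method; the channel split and the failure-fallback rescaling heuristic are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: y = W @ x
x = rng.standard_normal(8)
W = rng.standard_normal((4, 8))

# Partition the layer's input channels across two devices.
# Each device holds only its own slice of W and x, computes a
# partial output vector, and transmits it for aggregation.
split = 4
partial_a = W[:, :split] @ x[:split]   # computed on device A
partial_b = W[:, split:] @ x[split:]   # computed on device B

# When both transmissions arrive, summing the partials recovers
# the exact layer output.
y_full = partial_a + partial_b

# If device B's transmission is lost, a simple fallback (an
# illustrative heuristic, not the proposed technique) rescales the
# surviving partial sum so inference can continue with reduced
# accuracy instead of failing outright.
y_degraded = partial_a * (x.size / split)
```

The less traffic a partitioning requires per inference, and the less any single transmitted partial result contributes to the final output, the smaller the accuracy drop when a transmission fails — which is the trade-off the proposed partitioning technique targets.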
