Compression of CNN Using Low-Rank Approximation and CP Decomposition Methods

  • Moon, HyeonCheol (Korea Aerospace University, School of Electronics and Information Engineering) ;
  • Moon, Gihwa (Korea Aerospace University, School of Electronics and Information Engineering) ;
  • Kim, Jae-Gon (Korea Aerospace University, School of Electronics and Information Engineering)
  • Received : 2021.01.08
  • Accepted : 2021.03.03
  • Published : 2021.03.30

Abstract

In recent years, Convolutional Neural Networks (CNNs) have achieved outstanding performance in computer vision tasks such as image classification, object detection, and visual quality enhancement. However, because CNN models require a large amount of computation and memory, their application to low-power environments such as mobile or IoT devices is limited. Accordingly, there is a growing need for neural network compression that reduces model size while preserving task performance as much as possible. In this paper, we propose a method to compress CNN models by combining two matrix decomposition methods: LR (Low-Rank) approximation and CP (Canonical Polyadic) decomposition. Unlike conventional methods that apply a single decomposition method to a model, the proposed method selectively applies the two decomposition methods depending on the layer type of the CNN to enhance the compression performance. To evaluate the proposed method, we compress image classification models such as VGG-16, ResNet50, and MobileNetV2. The experimental results show that, at the same compression ratios of 1.5 to 12.1 times, the proposed method yields improved classification performance compared with applying only the LR approximation.
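As a rough illustration of the two decomposition methods named above, the sketch below applies a truncated-SVD low-rank approximation to a fully-connected weight matrix and a CP decomposition to a 4-D convolution kernel. This is not the authors' implementation: the layer shapes, the ranks, and the use of NumPy and the TensorLy library (for the `parafac` CP solver) are illustrative assumptions only.

```python
# Minimal sketch of the two decomposition steps (illustrative shapes and ranks;
# assumes NumPy and a recent TensorLy release -- not the paper's code).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)

# --- Low-rank (truncated SVD) approximation of a fully-connected layer ---
W = rng.standard_normal((4096, 1000))      # e.g. a classifier weight matrix
rank = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W1 = U[:, :rank] * s[:rank]                # 4096 x rank factor
W2 = Vt[:rank, :]                          # rank x 1000 factor
print(f"FC layer: {W.size} -> {W1.size + W2.size} params, "
      f"relative error {np.linalg.norm(W - W1 @ W2) / np.linalg.norm(W):.3f}")

# --- CP decomposition of a convolutional kernel ---
# The 4-D kernel (out_ch, in_ch, kH, kW) is expressed as a sum of `cp_rank`
# rank-1 terms; the resulting factors can replace the layer as a chain of
# smaller convolutions (pointwise, spatial, pointwise).
K = rng.standard_normal((256, 128, 3, 3))
cp_rank = 32
cp = parafac(tl.tensor(K), rank=cp_rank, n_iter_max=100)
K_cp = tl.cp_to_tensor(cp)                 # reconstruct only to check the error
cp_params = sum(f.size for f in cp.factors)
print(f"Conv layer: {K.size} -> {cp_params} params, "
      f"relative error {np.linalg.norm(K - K_cp) / np.linalg.norm(K):.3f}")
```

In an actual compression pipeline, the factors would replace the original layer as a sequence of smaller layers (two fully-connected layers in the SVD case, a chain of pointwise and spatial convolutions in the CP case), typically followed by fine-tuning to recover accuracy.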
