Analysis of privacy issues and countermeasures in neural network learning

  • Hong, Eun-Ju (Dept. of Convergence Science, Kongju National University) ;
  • Lee, Su-Jin (Dept. of Mathematics, Kongju National University) ;
  • Hong, Do-won (Dept. of Applied Mathematics, Kongju National University) ;
  • Seo, Chang-Ho (Dept. of Applied Mathematics, Kongju National University)
  • Received : 2019.04.11
  • Reviewed : 2019.07.20
  • Published : 2019.07.28

Abstract

With the popularization of PCs, SNS, and IoT devices, vast amounts of data are generated, and the volume is increasing exponentially. Artificial neural network learning, which makes use of these huge amounts of data, has recently attracted attention in many fields. It has shown tremendous potential in speech and image recognition and is now widely applied to complex domains such as medical diagnosis, game-playing artificial intelligence, and face recognition, where its accuracy can even surpass human performance. Despite these advantages, privacy problems remain in artificial neural network learning. Training data often contains sensitive personal information, which a malicious attacker can expose. Privacy risks arise both when an attacker interferes with and degrades the training process and when an attacker targets a model whose training has been completed. In this paper, we analyze recently proposed attacks on neural network models and the corresponding privacy protection methods.
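
As an illustration of one family of protection methods covered in this paper, the sketch below shows differentially private training in the style of DP-SGD (Abadi et al. [19]): each example's gradient is clipped to a fixed norm and Gaussian noise is added before the parameter update. The toy linear model, data, and hyperparameters are assumptions made only for illustration, not the setup of any cited system.

    # Minimal DP-SGD-style update sketch (assumed toy setting, not a cited system).
    import numpy as np

    rng = np.random.default_rng(0)

    def clip_gradient(grad, clip_norm):
        """Scale a per-example gradient so its L2 norm is at most clip_norm."""
        norm = np.linalg.norm(grad)
        return grad * min(1.0, clip_norm / (norm + 1e-12))

    def dp_sgd_step(weights, per_example_grads, clip_norm=1.0,
                    noise_multiplier=1.1, lr=0.1):
        """One update: clip each gradient, sum, add Gaussian noise, then average."""
        clipped = [clip_gradient(g, clip_norm) for g in per_example_grads]
        noisy_sum = np.sum(clipped, axis=0) + rng.normal(
            0.0, noise_multiplier * clip_norm, size=weights.shape)
        return weights - lr * noisy_sum / len(per_example_grads)

    # Toy linear regression: per-example squared-error gradients on random data.
    w = np.zeros(3)
    X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
    grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]
    w = dp_sgd_step(w, grads)
    print(w)

The clipping bound limits any single record's influence on the update, and the added noise is what yields the differential privacy guarantee; the privacy budget accounting described in [19] is omitted here.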

Fig. 1. Neural Network
Fig. 2. Differential Privacy [7]
Fig. 3. Centralized and Distributed training model
Fig. 4. GAN attack
Fig. 5. Model Extraction Attack
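
To make the model extraction attack of Fig. 5 (Tramer et al. [14]) concrete, the following minimal sketch queries a black-box prediction API and fits a local substitute from the returned confidence scores. The logistic-regression "victim" service and its parameters are hypothetical, assumed only for illustration.

    # Minimal model extraction sketch against an assumed black-box service.
    import numpy as np

    rng = np.random.default_rng(1)

    # Hidden victim model the attacker cannot inspect directly (assumed).
    true_w, true_b = np.array([1.5, -2.0]), 0.3
    def prediction_api(x):
        """Black-box API returning a confidence score for input x."""
        return 1.0 / (1.0 + np.exp(-(x @ true_w + true_b)))

    # Attacker: query chosen inputs, convert confidences back to logits,
    # and solve for a substitute model by least squares.
    queries = rng.normal(size=(200, 2))
    confidences = np.array([prediction_api(x) for x in queries])
    logits = np.log(confidences / (1.0 - confidences))
    design = np.hstack([queries, np.ones((len(queries), 1))])
    stolen, *_ = np.linalg.lstsq(design, logits, rcond=None)
    print("recovered [w1, w2, b]:", stolen)  # close to [1.5, -2.0, 0.3]

Because the API returns exact confidences, the attacker recovers parameters nearly equivalent to the victim's; countermeasures such as rounding confidences or returning labels only make this harder, as discussed in [14].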

References

  1. M. Ribeiro, K. Grolinger & M. A. M. Capretz. (2015). MLaaS: Machine Learning as a Service. In IEEE International Conference on Machine Learning and Applications (ICMLA), (pp. 896-902).
  2. M. Fredrikson, S. Jha & T. Ristenpart. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. In Proc. ACM CCS, (pp. 1322-1333). USA : ACM.
  3. A. Krizhevsky, I. Sutskever & G. E. Hinton. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, (pp. 1097-1105).
  4. S. Hochreiter & J. Schmidhuber. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
  5. B. Hitaj, G. Ateniese & F. Perez-Cruz. (2017). Deep Models under the GAN: Information Leakage from Collaborative Deep Learning. In Proc. ACM CCS, (pp. 603-618).
  6. C. Dwork & A. Roth. (2013). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.
  7. K. Ligett. (2017). Introduction to differential privacy, randomized response, basic properties. The 7th BIU Winter School on Cryptography, BIU.
  8. C. Gentry. (2009). A fully homomorphic encryption scheme. PhD thesis, Stanford University, California.
  9. P. Martins, L. Sousa & A. Mariano. (2018). A survey on fully homomorphic encryption: An engineering perspective. ACM Computing Surveys (CSUR), 50(6), 83.
  10. Y. Lindell & B. Pinkas. (2008). Secure multiparty computation for privacy-preserving data mining. IACR Cryptology ePrint Archive, 197.
  11. H. Bae, J. Jang, D. Jung, H. Jang, H. Ha & S. Yoon. (2018). Security and Privacy Issues in Deep Learning. ACM Computing Surveys.
  12. S. Chang & C. Li. (2018). Privacy in Neural Network Learning: Threats and Countermeasures. IEEE Network, 32(4), 61-67. https://doi.org/10.1109/mnet.2018.1700447
  13. R. Shokri, M. Stronati, C. Song & V. Shmatikov. (2017). Membership Inference Attacks against Machine Learning Models. In IEEE Symposium on Security and Privacy (SP), (pp. 3-18).
  14. F. Tramer, F. Zhang, A. Juels, M. K. Reiter & T. Ristenpart. (2016). Stealing Machine Learning Models via Prediction APIs. In USENIX Security Symposium, (pp. 601-618). Vancouver : USENIX.
  15. P. Mohassel & Y. Zhang. (2017). SecureML: A System for Scalable Privacy-Preserving Machine Learning. In IEEE Symposium on Security and Privacy (SP), (pp. 19-38).
  16. L. Xie, K. Lin, S. Wang, F. Wang & J. Zhou. (2018). Differentially Private Generative Adversarial Network. arXiv preprint arXiv:1802.06739.
  17. J. Yuan & S. Yu. (2014). Privacy Preserving Back-Propagation Neural Network Learning Made Practical with Cloud Computing. IEEE Transactions on Parallel and Distributed Systems, 212-221.
  18. P. Li et al. (2017). Multi-Key Privacy-Preserving Deep Learning in Cloud Computing. Future Generation Computer Systems, 74, 76-85. https://doi.org/10.1016/j.future.2017.02.006
  19. M. Abadi et al. (2016). Deep Learning with Differential Privacy. In Proc. ACM CCS, (pp. 308-318). Vienna : ACM.
  20. G. Acs, L. Melis, C. Castelluccia & E. De Cristofaro. (2017). Differentially private mixture of generative neural networks. IEEE Transactions on Knowledge and Data Engineering, 31(6), 1109-1121. https://doi.org/10.1109/tkde.2018.2855136
  21. C. Dwork & G. N. Rothblum. (2016). Concentrated differential privacy. CoRR, abs/1603.01887.
  22. L. Yu, L. Liu, C. Pu, M. E. Gursoy & S. Truex. (2019). Differentially Private Model Publishing for Deep Learning. IEEE.
  23. X. Zhang, S. Ji, H. Wang & T. Wang. (2017). Private, Yet Practical, Multiparty Deep Learning. In IEEE ICDCS, (pp. 1442-1452).
  24. K. Bonawitz et al. (2017). Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proc. ACM CCS, (pp. 1175-1191). ACM.