• Title/Summary/Keyword: Model Inversion Attack

Membership Inference Attack against Text-to-Image Model Based on Generating Adversarial Prompt Using Textual Inversion (Textual Inversion을 활용한 Adversarial Prompt 생성 기반 Text-to-Image 모델에 대한 멤버십 추론 공격)

  • Yoonju Oh;Sohee Park;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1111-1123 / 2023
  • In recent years, as generative models have advanced, research on attacks that threaten them has also been active. We propose a new membership inference attack against text-to-image models. Existing membership inference attacks on text-to-image models generate a single image from the caption of each query image. In contrast, this paper personalizes an embedding for each query image through Textual Inversion and proposes a membership inference attack that effectively generates multiple images by using that embedding to construct an adversarial prompt. In addition, the membership inference attack is tested for the first time on the Stable Diffusion model, which is attracting attention among text-to-image models, and achieves an accuracy of up to 1.00.
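The pipeline described above (tune a personalized embedding, build an adversarial prompt, generate several images, compare them to the query) ultimately reduces to a membership decision over image-similarity scores. A minimal, hypothetical sketch of that final decision step (the function name, threshold, and scores are illustrative, not the authors'):

```python
def infer_membership(similarity_scores, threshold=0.8):
    """Decide membership from similarities between a query image and
    multiple images generated from its adversarially tuned prompt.
    Training members tend to yield consistently high similarity."""
    avg = sum(similarity_scores) / len(similarity_scores)
    return avg >= threshold

# A member-like query: generations closely match the query image.
print(infer_membership([0.91, 0.88, 0.95]))  # True
# A non-member-like query: generations diverge from the query image.
print(infer_membership([0.40, 0.55, 0.35]))  # False
```

Averaging over multiple generations is what the multi-image attack buys: a single unlucky generation no longer decides the outcome.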

Differential Privacy Technology Resistant to the Model Inversion Attack in AI Environments (AI 환경에서 모델 전도 공격에 안전한 차분 프라이버시 기술)

  • Park, Cheollhee;Hong, Dowon
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.3 / pp.589-598 / 2019
  • The amount of digital data is growing explosively, and these data have great potential value. Countries and companies are creating added value from vast amounts of data and investing heavily in data analysis techniques. The privacy problems that arise in data analysis are a major obstacle to data utilization. As privacy-violating attacks on neural network models have recently been proposed, research on privacy-preserving artificial neural network technology is required. Accordingly, various privacy-preserving neural network techniques have been studied in the field of differential privacy, which guarantees strict privacy. However, these approaches struggle to strike an appropriate balance between the accuracy of the neural network model and the privacy budget. In this paper, we study a differential privacy technique that preserves the performance of a model within a given privacy budget and is resistant to model inversion attacks. We also analyze resistance to model inversion attacks according to the strength of privacy preservation.
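The accuracy-versus-privacy-budget tension in the abstract comes directly from how differentially private mechanisms are calibrated: noise scales inversely with the budget ε. A sketch of the standard Laplace mechanism (not the paper's specific technique):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.
    A smaller epsilon (stricter privacy budget) means more noise and
    lower accuracy -- the trade-off the paper tries to balance."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of Laplace(0, scale).
    return true_value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```

With sensitivity 1 and ε = 0.5, the noise scale is 2; halving ε doubles the expected error of every released value.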

Trajectory Guidance and Control for a Small UAV

  • Sato, Yoichi;Yamasaki, Takeshi;Takano, Hiroyuki;Baba, Yoriaki
    • International Journal of Aeronautical and Space Sciences / v.7 no.2 / pp.137-144 / 2006
  • The objective of this paper is to present a trajectory guidance and control system with dynamic inversion for a small unmanned aerial vehicle (UAV). The UAV model is expressed by fixed-mass rigid-body six-degree-of-freedom equations of motion, which include detailed aerodynamic coefficients, an engine model, and actuator models with lags and limits. A trajectory is generated from the given waypoints using cubic spline functions of flight distance. The commanded values of angle of attack, sideslip angle, bank angle, and thrust are calculated from guidance forces to trace the flight trajectory. To accommodate various waypoint locations, proportional navigation is combined with the guidance system, and a decision logic selects the appropriate guidance law. The flight control system to achieve the commands is designed using a dynamic inversion approach. The dynamic inversion controller uses the two-timescale assumption, which separates the fast dynamics, involving the angular rates of the aircraft, from the slow dynamics, which include angle of attack, sideslip angle, and bank angle. Numerical simulations demonstrate the performance of the proposed guidance and control system.
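The paper builds the reference path from waypoints with cubic splines of flight distance. As a stand-in, the sketch below interpolates one waypoint coordinate with a Catmull-Rom cubic, a common C1 spline through waypoints (an assumption for illustration, not the authors' exact formulation):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between waypoints p1 and p2 (0 <= t <= 1),
    with neighbors p0 and p3 shaping the end tangents so consecutive
    segments join smoothly. Apply per axis (x, y, z) for a 3-D path."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

# The segment starts exactly at p1 and ends exactly at p2.
print(catmull_rom(0.0, 1.0, 2.0, 3.0, 0.0))  # 1.0
print(catmull_rom(0.0, 1.0, 2.0, 3.0, 1.0))  # 2.0
```

Evenly spaced collinear waypoints reproduce the straight line, so the spline only bends where the waypoints do.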

Model Inversion Attack: Analysis under Gray-box Scenario on Deep Learning based Face Recognition System

  • Khosravy, Mahdi;Nakamura, Kazuaki;Hirose, Yuki;Nitta, Naoko;Babaguchi, Noboru
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.1100-1118 / 2021
  • In a wide range of ML applications, the training data contains privacy-sensitive information that should be kept secure. Training an ML system on privacy-sensitive data ties the model to that data: because the model's parameters have been fine-tuned by the training data, the model can be abused to estimate that data in a reverse process called a model inversion attack (MIA). Although MIA has been applied to shallow neural network recognizers in the literature and its threat to privacy has been demonstrated, its efficiency against deep learning (DL) models remained in question because of the complexity of a DL model's structure, the large number of model parameters, the huge size of the training data, and the large number of registered users and hence class labels. This work first analyzes the feasibility of MIA on a deep learning model of a recognition system, namely a face recognizer. Second, in contrast to the conventional MIA under the white-box scenario, which assumes partial access to users' non-sensitive information in addition to the model structure, the MIA is implemented on a deep face recognition system given only the model structure and parameters, with no user information; in this respect it is a semi-white-box, or gray-box, scenario. Experimental results targeting five registered users of a CNN-based face recognition system confirm that users' face images can be regenerated even from a deep model by MIA under a gray-box scenario. For some images the recognition score of the reconstruction is low and the generated images are not easily recognizable, but for others the score is high and facial features of the targeted identities are observable. The objective and subjective evaluations demonstrate that a privacy cyber-attack by MIA on a deep recognition system is not only feasible but also a serious and growing threat, as there is considerable potential to integrate more advanced ML techniques into MIA.
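The gray-box setting above (model structure and parameters known, no user data) is typically attacked by gradient ascent on the input to maximize the target user's class score. A toy sketch on a linear-softmax "recognizer" (the real target is a CNN; the model, step size, and step count here are hypothetical):

```python
import numpy as np

def invert_class(weights, target_class, steps=200, lr=0.5):
    """Starting from a blank input, follow the gradient of the target
    class's log-probability to synthesize an input the model strongly
    associates with that class -- the core of a model inversion attack."""
    x = np.zeros(weights.shape[1])
    for _ in range(steps):
        logits = weights @ x
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # d log p[target] / dx for a linear model: w_target - sum_k p_k * w_k
        x += lr * (weights[target_class] - p @ weights)
    return x
```

On a two-class toy model the reconstruction drives the target class's probability toward 1; on a deep face recognizer the same loop, run over image pixels, regenerates face-like inputs for the targeted identity.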

Aircraft CAS Design with Input Saturation Using Dynamic Model Inversion

  • Sangsoo Lim;Kim, Byoung-Soo
    • International Journal of Control, Automation, and Systems / v.1 no.3 / pp.315-320 / 2003
  • This paper presents a control augmentation system (CAS) based on the dynamic model inversion (DMI) architecture for a highly maneuverable aircraft. When DMI is applied without treating actuator dynamics, significant instabilities arise from limitations on the aircraft inputs, such as actuator time delay and actuator displacement limits. Actuator input saturation usually occurs during high angle-of-attack maneuvering in low dynamic pressure conditions. The pseudo-control hedging (PCH) algorithm is applied to prevent or delay instability of the CAS caused by a slow actuator or by actuator saturation. The performance of the proposed CAS with the PCH architecture is demonstrated through a nonlinear flight simulation.
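Pseudo-control hedging works by measuring how much of the commanded pseudo-control the limited actuator can actually deliver and feeding the deficit back to slow the reference model. A scalar sketch under a simple saturation model (the limit and effectiveness values are illustrative, not the paper's aircraft data):

```python
def pch_step(nu_cmd, actuator_limit, control_effectiveness=1.0):
    """Return (achievable pseudo-control, hedge signal). The hedge is
    the part of the command a saturated actuator cannot deliver; it is
    subtracted from the reference-model dynamics so the inversion loop
    does not wind up against the actuator limit."""
    delta_cmd = nu_cmd / control_effectiveness            # inversion step
    delta_act = max(-actuator_limit, min(actuator_limit, delta_cmd))
    nu_achieved = control_effectiveness * delta_act       # what the surface gives
    return nu_achieved, nu_cmd - nu_achieved              # hedge = deficit

print(pch_step(2.0, 1.0))  # (1.0, 1.0): half the command is hedged
print(pch_step(0.5, 1.0))  # (0.5, 0.0): no saturation, no hedge
```

When the hedge is zero the loop behaves as plain DMI; a nonzero hedge tells the reference model to demand less until the actuator catches up.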

A Study of Split Learning Model to Protect Privacy (프라이버시 침해에 대응하는 분할 학습 모델 연구)

  • Ryu, Jihyeon;Won, Dongho;Lee, Youngsook
    • Convergence Security Journal / v.21 no.3 / pp.49-56 / 2021
  • Recently, artificial intelligence has come to be regarded as an essential technology in our society, and the invasion of privacy through artificial intelligence has become a serious problem. Split learning, proposed at MIT in 2019 for privacy protection, is a type of federated learning technique that does not share any raw data. In this study, we investigate a safe and accurate split learning model that applies well-known differential privacy to manage data securely. We train SVHN and GTSRB on split learning models to which 15 different types of differential privacy are applied and check whether learning remains stable. By conducting a training-data extraction attack, we quantitatively derive, via MSE, the differential privacy budget that prevents the attack.
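In split learning, the client computes only up to the cut layer and sends the "smashed" activations onward; the study above additionally applies differential privacy. A minimal numpy sketch of the two halves (Gaussian noise stands in for a properly calibrated DP mechanism; all shapes and names are illustrative):

```python
import numpy as np

def client_forward(x, w_client, noise_scale, rng):
    """Client half: activations up to the cut layer, perturbed before
    leaving the device. The raw input x is never transmitted."""
    smashed = np.tanh(x @ w_client)
    return smashed + rng.normal(0.0, noise_scale, smashed.shape)

def server_forward(smashed, w_server):
    """Server half: continues the forward pass from the smashed data."""
    return smashed @ w_server
```

Raising `noise_scale` increases the MSE of any reconstruction from the smashed data, which is exactly the quantity the study uses to find the budget that defeats the extraction attack.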

TrapMI: Protecting Training Data to Evade Model Inversion Attack on Split Learning (TrapMI: 분할 학습에서 모델 전도 공격을 회피할 수 있는 훈련 데이터 보호 방법)

  • Hyun-Sik Na;Dae-Seon Choi
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.234-236 / 2023
  • Split learning, one method for training DNNs in Edge AI environments, can expose the privacy of input data through model inversion attacks. This paper proposes TrapMI, a technique that evades the limitations of existing defenses against model inversion attacks in split learning: it minimizes the possibility of image reconstruction by moving input images from the domain of the original dataset into a specific target image domain. In addition, to work around the constraint that information about the target image is unavailable during testing, we build an AutoGenerator and verify the protection of the original data through experiments.

Study of the Incremental Dynamic Inversion Control to Prevent the Over-G in the Transonic Flight Region (천음속 비행영역에서 하중제한 초과 방지를 위한 증분형 동적 모델역변환 제어 연구)

  • Jin, Tae-beom;Kim, Chong-sup;Koh, Gi-Oak;Kim, Byoung-Soo
    • Journal of Aerospace System Engineering / v.15 no.5 / pp.33-42 / 2021
  • Modern fighter aircraft improve maneuverability and performance through the relaxed static stability (RSS) concept, and are therefore susceptible to abrupt pitch-up in the transonic, moderate angle-of-attack (AoA) flight region, where a shock wave forms and the mean aerodynamic center moves forward during deceleration. Modeling an aircraft in this flight region is very difficult because of the complex flow field and unpredictable dynamic characteristics, and model-based control design techniques do not fully address the problem. In this paper, we analyze the performance of transonic pitching moment compensation (TPMC) control, based on model-based incremental dynamic inversion (IDI), and of a hybrid IDI based on both model- and sensor-based IDI, during a slow-down turn (SDT) in the transonic region. As a result, the hybrid IDI had a quicker response and the same maximum-g suppression performance, and provided more predictable flying qualities than the TPMC control. The hybrid IDI improved the performance of the over-G protection controller in the transonic, moderate-AoA region.
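The incremental formulation is what makes IDI robust in the poorly modeled transonic region: instead of inverting a full aerodynamic model, it commands an increment from the measured acceleration, so only the control effectiveness must be known. A scalar sketch (symbols and the effectiveness value are illustrative):

```python
def indi_update(delta_prev, nu_cmd, accel_measured, control_effectiveness):
    """One incremental dynamic inversion step: add a deflection increment
    proportional to the gap between the desired pseudo-control (nu_cmd)
    and the currently measured acceleration. Modeling errors in the rest
    of the dynamics drop out because they appear in the measurement."""
    return delta_prev + (nu_cmd - accel_measured) / control_effectiveness

# Commanded 2.0, measuring 1.5, effectiveness 5.0: increment of 0.1.
print(indi_update(0.1, 2.0, 1.5, 5.0))  # 0.2
```

When the measured acceleration already matches the command, the increment is zero and the deflection holds, which is the behavior that bounds the g response near the load limit.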