Differential Privacy

A Differential Privacy Approach to Preserve GWAS Data Sharing Based on a Game Theoretic Perspective

  • Yan, Jun; Han, Ziwei; Zhou, Yihui; Lu, Laifeng
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.3, pp.1028-1046, 2022
  • Genome-wide association studies (GWAS) aim to find the genetic variants significantly associated with common complex diseases. However, genotype data carries private information, such as disease status and identity, which makes data sharing and research difficult. Differential privacy is widely used for privacy protection in data sharing, but current differential privacy approaches in GWAS attend to statistical data rather than raw data and do not achieve an equilibrium between utility and privacy, so data sharing is hindered and the development of genomics is hampered. To share data more securely, we propose a differential privacy preserving approach to data sharing for GWAS that achieves an equilibrium between privacy and data utility. First, a reasonable perturbation interval for the genotype is calculated based on the expected utility. Second, based on this interval, we obtain the Nash equilibrium point between utility and privacy. Finally, based on the equilibrium point, the original genotype matrix is perturbed with differential privacy, yielding the corresponding random genotype matrix. We show theoretically and experimentally that the method satisfies the expected privacy protection and utility, providing engineering guidance for protecting GWAS data privacy.
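
The final perturbation step can be illustrated with k-ary randomized response over genotype values; this is a minimal sketch under our own parameter names, and the paper's utility-based interval calculation and Nash-equilibrium tuning are not reproduced here:

```python
import numpy as np

def perturb_genotypes(genotypes: np.ndarray, epsilon: float) -> np.ndarray:
    """Perturb a genotype matrix (entries in {0, 1, 2}) with k-ary
    randomized response, so each entry satisfies epsilon-differential
    privacy. A generic mechanism, not the paper's equilibrium-tuned one."""
    k = 3  # genotype alphabet: 0, 1, or 2 copies of the minor allele
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    rng = np.random.default_rng()
    keep = rng.random(genotypes.shape) < p_keep
    # For flipped entries, draw uniformly from the two other values.
    offsets = rng.integers(1, k, size=genotypes.shape)
    flipped = (genotypes + offsets) % k
    return np.where(keep, genotypes, flipped)

# Example: 5 individuals x 4 SNPs
original = np.random.default_rng(0).integers(0, 3, size=(5, 4))
noisy = perturb_genotypes(original, epsilon=1.0)
print(original, noisy, sep="\n")
```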

Case Study on Local Differential Privacy in Practice: Privacy Preserving Survey

  • Jeong, Sooyong; Hong, Dowon; Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.1, pp.141-156, 2020
  • Differential privacy, which is used to collect and analyze data while preserving its privacy, has been widely applied in privacy-preserving data applications. In local differential privacy, the local model of differential privacy, each user adds noise to his or her own data via randomized response before releasing it. The user's privacy is thus preserved, while the data analyst can still derive useful statistics from the many collected records. Local differential privacy has been deployed by global companies such as Google, Apple, and Microsoft to collect and analyze user data. In this paper, we compare and analyze the local differential privacy methods used in practice, and then study the applicability of local differential privacy to practical survey and opinion-poll scenarios.
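
The flavor of these local mechanisms is captured by Warner's classic randomized response for a binary survey question, sketched below with our own parameter choices rather than any specific deployed design (e.g., Google's RAPPOR):

```python
import numpy as np

def randomized_response(answers: np.ndarray, epsilon: float):
    """Each respondent reports the truth with probability
    p = e^eps / (e^eps + 1) and lies otherwise, which satisfies
    epsilon-local differential privacy for binary answers."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1)
    rng = np.random.default_rng()
    truthful = rng.random(answers.shape) < p
    reports = np.where(truthful, answers, 1 - answers)
    # Unbiased estimate of the true "yes" rate from the noisy reports:
    # E[report] = (2p - 1) * rate + (1 - p), solved for rate.
    estimate = (reports.mean() - (1 - p)) / (2 * p - 1)
    return reports, estimate

true_answers = np.random.default_rng(1).integers(0, 2, size=10_000)
_, est = randomized_response(true_answers, epsilon=1.0)
print(f"true rate {true_answers.mean():.3f}, estimated {est:.3f}")
```

With many respondents the debiased estimate concentrates around the true rate even though no individual report can be trusted, which is exactly the utility/privacy trade the survey scenario relies on.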

Differential Privacy in Practice

  • Nguyen, Hiep H.; Kim, Jong; Kim, Yoonho
    • Journal of Computing Science and Engineering, v.7 no.3, pp.177-186, 2013
  • We briefly review the problem of statistical disclosure control under the differential privacy model, which entails a formal, ad omnia privacy guarantee separating the utility of the database from the risk due to individual participation. It has borne fruitful results over the past ten years, both in theoretical connections to other fields and in practical applications to real-life datasets. The promises of differential privacy help relieve concerns about privacy loss, which hinder the release of community-valuable data. This paper covers the main ideas behind differential privacy, its interactive versus non-interactive settings, perturbation mechanisms, and typical applications found in recent research.
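
The canonical perturbation mechanism covered in such surveys is the Laplace mechanism; here is a minimal sketch for a counting query, with our own illustrative data:

```python
import numpy as np

def laplace_count(data, predicate, epsilon: float) -> float:
    """Answer a counting query under epsilon-differential privacy.
    A count has L1 sensitivity 1 (adding or removing one individual
    changes it by at most 1), so noise is drawn from Laplace(1/eps)."""
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.default_rng().laplace(scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 57, 62, 38]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))
```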

A Study on an Efficient and Robust Differential Privacy Scheme Using a Tag Field in Medical Environment

  • Kim, Soon-Seok
    • Journal of the Korea Society of Computer and Information, v.24 no.11, pp.109-117, 2019
  • Recently, privacy invasion in medical information has become an issue, following interest in the secondary use of mass medical information. Mass medical information is very useful and can be applied in various fields such as disease research and prevention. However, because of privacy regulations such as the Privacy Act and the Medical Law, this information, which includes the personal information of patients and health professionals, is difficult to put to secondary use. To address this problem, various methods that can be utilized while protecting privacy, such as k-anonymity, l-diversity, and differential privacy, have been developed and applied in this field. In this paper, we discuss the differential privacy processing of the various methods studied so far, and examine the problems of differential privacy using Laplace noise and of previously proposed differential privacy schemes. Finally, we propose a new scheme that solves the existing problem by adding a 1-bit status field to the last column of a given data set to confirm the responses to analysts' queries.
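
One weakness of plain Laplace noise that motivates schemes like this is that repeated identical queries let an analyst average the noise away; the sketch below is our own illustration of that effect, not the paper's tag-field construction:

```python
import numpy as np

rng = np.random.default_rng(42)
true_count, epsilon = 100, 1.0

# If the same count query is answered independently n times,
# averaging the noisy answers converges back to the true value,
# and the privacy budget actually spent grows to n * epsilon.
for n in (1, 10, 1000):
    answers = true_count + rng.laplace(scale=1.0 / epsilon, size=n)
    print(f"{n:5d} repeats -> averaged answer {answers.mean():7.2f}")
```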

An Uncertain Graph Method Based on Node Random Response to Preserve Link Privacy of Social Networks

  • Yan, Jun; Chen, Jiawang; Zhou, Yihui; Wu, Zhenqiang; Lu, Laifeng
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.1, pp.147-169, 2024
  • With the rapid development of network technology, social networks have become pervasive in our lives. However, because social networks retain a large amount of users' sensitive information, the openness of this information makes them vulnerable to malicious attackers. To preserve the link privacy of individuals in social networks, we devise an uncertain graph method based on node random response that satisfies differential privacy while maintaining the expected data utility. To achieve privacy preservation, random response is applied to nodes to modify the edges of the original graph, and node differential privacy is introduced to inject uncertainty into the edges. At the same time, to preserve data utility, a divide-and-conquer strategy decomposes the original graph into many sub-graphs, each of which is handled separately. In particular, only some of the larger sub-graphs, selected by the exponential mechanism, are modified, which further reduces the perturbation of the original graph. The presented method is proven to satisfy differential privacy, and experiments demonstrate that the uncertain graph method effectively provides a strict privacy guarantee while maintaining data utility.
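
Edge modification by random response can be sketched on an adjacency matrix; the generic edge-flip mechanism below uses our own parameters and omits the paper's sub-graph decomposition and exponential-mechanism selection:

```python
import numpy as np

def perturb_edges(adj: np.ndarray, epsilon: float) -> np.ndarray:
    """Flip each potential edge of an undirected graph with
    probability 1 / (e^eps + 1), i.e. keep it with probability
    e^eps / (e^eps + 1), satisfying epsilon-differential privacy
    per edge entry."""
    n = adj.shape[0]
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1)
    rng = np.random.default_rng()
    # Perturb the upper triangle only, then mirror to stay symmetric.
    flips = rng.random((n, n)) >= p_keep
    upper = np.triu(np.where(flips, 1 - adj, adj), k=1)
    return upper + upper.T

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
print(perturb_edges(adj, epsilon=2.0))
```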

A Study on Synthetic Data Generation Based on a Safe Differentially Private GAN

  • Kang, Junyoung; Jeong, Sooyong; Hong, Dowon; Seo, Changho
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.5, pp.945-956, 2020
  • Publishing data is essential for many applications to provide high-quality services. However, if the original data is published as-is, sensitive information (political orientation, disease, etc.) may be revealed. Much research has therefore proposed generating and publishing synthetic data, rather than the original data, to preserve privacy. Yet even synthetic data that is simply generated and published still risks privacy leakage through various attacks (linkage attacks, inference attacks, etc.). In this paper, to prevent the leakage of such sensitive information, we propose a synthetic data generation algorithm that preserves privacy by applying differential privacy, the latest privacy protection technique, to a GAN, which is drawing attention as a generative model for synthetic data. The generative model uses a CGAN for efficient learning of labeled data and applies Rényi differential privacy, a relaxation of differential privacy, in consideration of the utility of the data. The utility of the generated data is then validated and compared across various classifiers.
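
At the core of training a GAN under (Rényi) differential privacy is a DP-SGD-style discriminator update: clip each example's gradient, then add Gaussian noise. The NumPy sketch below is our own simplification; the paper's CGAN architecture and RDP accounting are not shown:

```python
import numpy as np

def dp_gradient_step(per_example_grads: np.ndarray,
                     clip_norm: float,
                     noise_multiplier: float) -> np.ndarray:
    """Aggregate per-example gradients as in DP-SGD: clip each
    example's gradient to L2 norm <= clip_norm, sum, add Gaussian
    noise scaled to the clipping bound, then average."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noise = np.random.default_rng().normal(
        scale=noise_multiplier * clip_norm,
        size=per_example_grads.shape[1])
    batch = per_example_grads.shape[0]
    return (clipped.sum(axis=0) + noise) / batch

grads = np.random.default_rng(0).normal(size=(32, 10))  # 32 examples
print(dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1))
```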

Study on Robust Differential Privacy Using Secret Sharing Scheme

  • Kim, Cheoljung; Yeo, Kwangsoo; Kim, Soonseok
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology, v.7 no.2, pp.311-319, 2017
  • Recently, privacy invasion in medical information has become an issue, following interest in the secondary use of large medical information. Such large medical information is very useful and can be applied in various fields such as disease research and prevention. However, because of privacy regulations such as the Privacy Act and the Medical Law, this information, which includes the personal information of patients and health professionals, is difficult to put to secondary use. Accordingly, various methods that can be utilized while protecting privacy, such as k-anonymity, l-diversity, and differential privacy, have been developed and applied in this field. In this paper, we study the differential privacy processing procedure, one of these methods, and examine the problem of differential privacy using Laplace noise. Finally, we propose a new method for this problem that uses Shamir's secret sharing scheme together with a symmetric-key encryption algorithm such as AES.
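
Shamir's (t, n) secret sharing, the building block the authors combine with AES, splits a secret into n shares of which any t suffice to reconstruct it; a minimal sketch over a prime field with our own toy parameters:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for toy secrets

def make_shares(secret: int, t: int, n: int):
    """Encode `secret` as the constant term of a random degree t-1
    polynomial; shares are points (x, f(x)) on that polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from
    any t distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice
```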

Differential Privacy Technology Resistant to the Model Inversion Attack in AI Environments

  • Park, Cheollhee; Hong, Dowon
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.3, pp.589-598, 2019
  • The amount of digital data is growing explosively, and these data have large potential value. Countries and companies are creating various added value from vast amounts of data and are investing heavily in data analysis techniques. The privacy problems that arise in data analysis are a major factor hindering data utilization. Recently, as privacy violation attacks on neural network models have been proposed, research on artificial neural network technology that preserves privacy is required. Accordingly, various privacy-preserving artificial neural network technologies have been studied in the field of differential privacy, which ensures strict privacy. However, the balance between the accuracy of the neural network model and the privacy budget remains a problem. In this paper, we study differential privacy techniques that preserve the performance of a model within a given privacy budget and are resistant to model inversion attacks. We also analyze the resistance to model inversion attacks according to the strength of privacy preservation.
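
Model inversion attacks exploit the confidence scores a model returns, so one generic defense flavor is randomizing those outputs. The sketch below adds Laplace noise to logits before the softmax; it is our own illustration with an assumed sensitivity, not the paper's specific technique:

```python
import numpy as np

def noisy_prediction(logits: np.ndarray, epsilon: float,
                     sensitivity: float = 1.0) -> np.ndarray:
    """Add Laplace noise to logits before the softmax so the released
    confidence vector reveals less about the training data. The
    sensitivity value here is an illustrative assumption."""
    rng = np.random.default_rng()
    noisy = logits + rng.laplace(scale=sensitivity / epsilon,
                                 size=logits.shape)
    exp = np.exp(noisy - noisy.max())  # numerically stable softmax
    return exp / exp.sum()

logits = np.array([2.0, 0.5, -1.0])
print(noisy_prediction(logits, epsilon=1.0))
```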

A Study of Split Learning Model to Protect Privacy

  • Ryu, Jihyeon; Won, Dongho; Lee, Youngsook
    • Convergence Security Journal, v.21 no.3, pp.49-56, 2021
  • Recently, artificial intelligence has come to be regarded as an essential technology in our society. In particular, privacy invasion in artificial intelligence has become a serious problem in modern society. Split learning, proposed at MIT in 2019 for privacy protection, is a type of federated learning technique that does not share any raw data. In this study, we investigate a safe and accurate split learning model that manages data safely using the well-known notion of differential privacy. We train SVHN and GTSRB on split learning models to which 15 different settings of differential privacy are applied and check whether learning remains stable. By conducting a training data extraction attack, a differential privacy budget that prevents the attack is derived quantitatively through the MSE.
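
In split learning the client sends only the cut-layer activations to the server, so that is the natural place to inject calibrated noise. A minimal sketch of a noisy client-side forward pass, under our own simplified model:

```python
import numpy as np

def client_forward(x: np.ndarray, w: np.ndarray, epsilon: float,
                   clip: float = 1.0) -> np.ndarray:
    """Client-side half of a split model: compute the cut-layer
    activation, clip its L2 norm to bound sensitivity, and add
    Laplace noise before sending it to the server."""
    h = np.maximum(x @ w, 0.0)  # ReLU cut layer
    h = h * min(1.0, clip / (np.linalg.norm(h) + 1e-12))
    rng = np.random.default_rng()
    return h + rng.laplace(scale=clip / epsilon, size=h.shape)

x = np.random.default_rng(0).normal(size=4)       # one input example
w = np.random.default_rng(1).normal(size=(4, 8))  # client-side weights
print(client_forward(x, w, epsilon=2.0))
```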

Study on Evaluation Method of Task-Specific Adaptive Differential Privacy Mechanism in Federated Learning Environment

  • Utaliyeva, Assem; Choi, Yoon-Ho
    • Journal of the Korea Institute of Information Security & Cryptology, v.34 no.1, pp.143-156, 2024
  • Federated Learning (FL) has emerged as a potent methodology for decentralized model training across multiple collaborators, eliminating the need for data sharing. Although FL is lauded for its capacity to preserve data privacy, it is not impervious to various types of privacy attacks. Differential Privacy (DP), recognized as the gold standard among privacy-preservation techniques, is widely employed to counteract these vulnerabilities. This paper makes a specific contribution by applying an existing, task-specific adaptive DP mechanism to the FL environment. Our comprehensive analysis evaluates the impact of this mechanism on the performance of a shared global model, with particular attention to varying data distribution and partitioning schemes. This study deepens the understanding of the complex interplay between privacy and utility in FL, providing a validated methodology for securing data without compromising performance.
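
A skeletal federated round with client-side clipping and noise shows where such a DP mechanism plugs in; the FedAvg sketch below uses our own names and a toy local gradient, and does not reproduce the paper's task-specific adaptive mechanism:

```python
import numpy as np

def local_gradient(w, data):
    """Toy least-squares gradient standing in for real local training."""
    X, y = data
    return 2 * X.T @ (X @ w - y) / len(y)

def dp_federated_round(global_w: np.ndarray, client_data: list,
                       clip: float, noise_multiplier: float) -> np.ndarray:
    """One FedAvg round: each client computes an update, clips it,
    and adds Gaussian noise before the server averages."""
    rng = np.random.default_rng()
    updates = []
    for data in client_data:
        local_w = global_w - 0.1 * local_gradient(global_w, data)
        delta = local_w - global_w
        delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
        delta += rng.normal(scale=noise_multiplier * clip,
                            size=delta.shape)
        updates.append(delta)
    return global_w + np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20))
           for _ in range(5)]
w = np.zeros(3)
print(dp_federated_round(w, clients, clip=1.0, noise_multiplier=0.5))
```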