• Title/Summary/Keyword: Data Privacy Model

An Extended Role-based Access Control Model with Privacy Enforcement (프라이버시 보호를 갖는 확장된 역할기반 접근제어 모델)

  • 박종화;김동규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.8C
    • /
    • pp.1076-1085
    • /
    • 2004
  • Privacy enforcement has been one of the most important problems in the IT area. Privacy protection can be achieved by enforcing privacy policies within an organization's data processing systems. Traditional security models are more or less inappropriate for enforcing basic privacy requirements, such as privacy binding. This paper proposes an extended role-based access control (RBAC) model for enforcing privacy policies within an organization. To provide privacy protection and context-based access control, this model combines RBAC, Domain-Type Enforcement, and privacy policies. The privacy policies assign privacy levels to user roles according to their tasks, and assign data privacy levels to data according to consented consumer privacy preferences recorded as data usage policies. As an application of this model, a small hospital model is considered.
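
The privacy binding described above, privacy levels on roles and on data checked against consented usage purposes, can be sketched as follows. All role names, levels, and policies here are illustrative, not the paper's actual hospital model:

```python
# Minimal sketch of privacy-enforcing RBAC: access is granted only when the
# role's privacy clearance covers the data item's privacy level AND the
# requested purpose appears in the consented data usage policy.

ROLE_PRIVACY_LEVEL = {"doctor": 3, "nurse": 2, "clerk": 1}

# Each data item carries a privacy level and the purposes its subject consented to.
DATA_POLICY = {
    "diagnosis": {"level": 3, "purposes": {"treatment"}},
    "address":   {"level": 2, "purposes": {"treatment", "billing"}},
}

def access_allowed(role: str, data_item: str, purpose: str) -> bool:
    clearance = ROLE_PRIVACY_LEVEL.get(role, 0)
    policy = DATA_POLICY[data_item]
    return clearance >= policy["level"] and purpose in policy["purposes"]

print(access_allowed("doctor", "diagnosis", "treatment"))  # allowed
print(access_allowed("clerk", "diagnosis", "treatment"))   # denied: insufficient level
print(access_allowed("doctor", "address", "marketing"))    # denied: no consent
```

The key point of the model is the second check: even a sufficiently privileged role is denied when the request's purpose is outside the consented usage policy.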

Differential Privacy Technology Resistant to the Model Inversion Attack in AI Environments (AI 환경에서 모델 전도 공격에 안전한 차분 프라이버시 기술)

  • Park, Cheollhee;Hong, Dowon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.3
    • /
    • pp.589-598
    • /
    • 2019
  • The amount of digital data is growing explosively, and these data have large potential value. Countries and companies are creating various added values from vast amounts of data and are investing heavily in data analysis techniques. The privacy problems that arise in data analysis are a major factor hindering data utilization. Recently, as privacy violation attacks on neural network models have been proposed, research on artificial neural network technology that preserves privacy is required. Accordingly, various privacy-preserving artificial neural network technologies have been studied in the field of differential privacy, which ensures strict privacy guarantees. However, these approaches struggle to balance the accuracy of the neural network model against the privacy budget. In this paper, we study a differential privacy technique that preserves the performance of a model within a given privacy budget and is resistant to model inversion attacks. We also analyze resistance to model inversion attacks according to the strength of privacy preservation.
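
The core step of differentially private neural network training, clipping each gradient and adding calibrated Gaussian noise, can be sketched as follows. The clipping norm and noise multiplier are illustrative values, not the paper's settings:

```python
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a per-example gradient to clip_norm, then add Gaussian noise
    scaled by noise_multiplier * clip_norm (the DP-SGD-style update step)."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Scale down gradients whose norm exceeds the clipping bound.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise calibrated to the clipping bound hides any single example.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])       # norm 5.0, clipped down to norm 1.0 before noise
private_g = dp_gradient(g)
```

A larger noise multiplier tightens the privacy budget spent per step but degrades model accuracy, which is exactly the accuracy-versus-budget tension the abstract describes.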

Privacy Model Recommendation System Based on Data Feature Analysis

  • Seung Hwan Ryu;Yongki Hong;Gihyuk Ko;Heedong Yang;Jong Wan Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.9
    • /
    • pp.81-92
    • /
    • 2023
  • A privacy model is a technique that quantitatively restricts the possibility and degree of privacy breaches through privacy attacks. Representative models include k-anonymity, l-diversity, t-closeness, and differential privacy. While many privacy models have been studied, research on selecting the most suitable model for a given dataset has been relatively limited. In this study, we develop a system for recommending a suitable privacy model to prevent privacy breaches. To achieve this, we analyze the data features that need to be considered when selecting a model, such as data type, distribution, frequency, and range. Based on privacy-model background knowledge that captures the relationships between data features and models, we recommend the most appropriate model. Finally, we validate the system's feasibility and usefulness by implementing a prototype recommendation system.
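
A feature-to-model recommender of this kind can be sketched as a small rule base. The rules below are illustrative stand-ins, not the paper's actual background knowledge:

```python
def recommend_privacy_model(features: dict) -> str:
    """Toy rule-based recommender mapping dataset features to one of the
    representative privacy models. Rules and feature names are hypothetical."""
    # Interactive query answering suits a noise-based model.
    if features.get("release") == "query_answers":
        return "differential privacy"
    # Numeric sensitive attributes benefit from distribution-aware protection.
    if features.get("sensitive_type") == "numeric":
        return "t-closeness"
    # Multiple distinct sensitive values per group call for l-diversity.
    if features.get("sensitive_cardinality", 0) > 1:
        return "l-diversity"
    # Default baseline for one-off microdata release.
    return "k-anonymity"

print(recommend_privacy_model({"release": "query_answers"}))
print(recommend_privacy_model({"sensitive_type": "numeric"}))
```

A real system would replace these hand-written rules with the learned relationships between data features and models that the paper describes.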

A Solution to Privacy Preservation in Publishing Human Trajectories

  • Li, Xianming;Sun, Guangzhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3328-3349
    • /
    • 2020
  • With the rapid development of ubiquitous computing and location-based services (LBSs), human trajectory data and associated activities are increasingly easily recorded. Inappropriately publishing trajectory data may leak users' privacy. Therefore, we study publishing trajectory data while preserving privacy, a problem we denote privacy-preserving activity trajectory publishing (PPATP). We propose S-PPATP to solve this problem. S-PPATP comprises three steps: modeling, algorithm design, and algorithm adjustment. During modeling, two user models describe users' behaviors: one based on a Markov chain and the other based on a hidden Markov model. We assume a potential adversary who intends to infer users' privacy, defined as a set of sensitive information. An adversary model is then proposed to define the adversary's background knowledge and inference method. Additionally, privacy requirements and a data quality metric are defined for assessment. During algorithm design, we propose two publishing algorithms corresponding to the user models and prove that both algorithms satisfy the privacy requirement. Then, we perform a comparative analysis of utility, efficiency, and speedup techniques. Finally, we evaluate our algorithms through experiments on several datasets. The experimental results verify that our proposed algorithms preserve users' privacy. We also test utility and discuss the privacy-utility tradeoff that real-world data publishers may face.
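
The first of the two user models, a first-order Markov chain over visited locations, can be sketched by estimating transition probabilities from observed trajectories. The location names are illustrative:

```python
from collections import defaultdict

def fit_markov(trajectories):
    """Estimate first-order Markov transition probabilities from a list of
    trajectories, each a sequence of discrete locations."""
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for src, dst in zip(traj, traj[1:]):
            counts[src][dst] += 1
    # Normalize each row of counts into a probability distribution.
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

P = fit_markov([["home", "work", "gym"],
                ["home", "work", "home"]])
```

An adversary with this model as background knowledge can predict likely next locations from a published prefix, which is why the publishing algorithm must bound what such inference reveals about the sensitive set.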

Privacy Level Indicating Data Leakage Prevention System

  • Kim, Jinhyung;Park, Choonsik;Hwang, Jun;Kim, Hyung-Jong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.3
    • /
    • pp.558-575
    • /
    • 2013
  • The purpose of a data leakage prevention system is to protect corporate information assets. The system monitors the packet exchanges between internal systems and the Internet and filters packets according to the data security policy defined by each company, or discretionarily deletes important data included in packets in order to prevent leakage of corporate information. However, the problem arises that the system may monitor employees' personal information, thus allowing their privacy to be violated. Therefore, it is necessary to find not only a solution for detecting leakage of significant information, but also a way to minimize the leakage of internal users' personal information. In this paper, we propose two models for representing the level of personal information disclosure during data leakage detection. One model measures only the disclosure frequencies of keywords that are defined as personal data; these frequencies are used to indicate the privacy violation level. The other model represents the context of privacy violation using a private data matrix: each row of the matrix represents the disclosure counts for personal-data keywords in a given time period, and each column represents the disclosure count of a certain keyword during the entire observation interval. Using the suggested matrix model, we can represent an abstracted context of the privacy violation situation. Experiments that demonstrate the usability of the suggested models in privacy violation situations are also presented.
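
The private data matrix described above can be sketched directly: rows index observation windows, columns index personal-data keywords, and the two marginal sums recover the per-period and per-keyword views. The keywords and counts below are made up for illustration:

```python
import numpy as np

keywords = ["name", "phone", "email"]   # personal-data keywords (illustrative)

# Rows = consecutive time periods, columns = keywords;
# each entry is the disclosure count of that keyword in that period.
M = np.array([[2, 0, 1],
              [0, 3, 0],
              [1, 1, 2],
              [0, 0, 4]])

per_period = M.sum(axis=1)    # total disclosure activity in each time window
per_keyword = M.sum(axis=0)   # each keyword's total over the whole interval

print(dict(zip(keywords, per_keyword)))
```

The frequency-only model corresponds to keeping just `per_keyword`; the matrix keeps the temporal context, e.g. the burst of "email" disclosures concentrated in the last period.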

A Study on the Privacy Concern of e-commerce Users: Focused on Information Boundary Theory (전자상거래 이용자의 프라이버시 염려에 관한 연구 : 정보경계이론을 중심으로)

  • Kim, Jong-Ki;Oh, Da-Woon
    • The Journal of Information Systems
    • /
    • v.26 no.2
    • /
    • pp.43-62
    • /
    • 2017
  • Purpose: This study provides empirical support for a model that explains the formation of privacy concerns from the perspective of Information Boundary Theory. It investigates an integrated model suggesting that privacy concerns are formed by an individual's disposition to value privacy, privacy awareness, awareness of privacy policy, and government legislation. Information Boundary Theory suggests that the boundaries of one's information space depend on the individual's personal characteristics and the environmental factors of e-commerce. When receiving a request for personal information from an e-commerce website, an individual assesses the risk through a risk-control assessment, and the perception of intrusion gives rise to privacy concerns. Design/methodology/approach: This study empirically tested the hypotheses with data collected in a survey that included items measuring the constructs in the model. The survey was aimed at university students, and a causal modeling statistical technique (PLS) was used for data analysis. Findings: The results of the survey indicated significant relationships among the environmental factors of e-commerce websites, an individual's personal privacy characteristics, and privacy concerns. Both an individual's awareness of institutional privacy assurance in e-commerce and these privacy characteristics affect the risk-control assessment of information disclosure, which becomes an essential component of privacy concerns.

Models for Privacy-preserving Data Publishing : A Survey (프라이버시 보호 데이터 배포를 위한 모델 조사)

  • Kim, Jongseon;Jung, Kijung;Lee, Hyukki;Kim, Soohyung;Kim, Jong Wook;Chung, Yon Dohn
    • Journal of KIISE
    • /
    • v.44 no.2
    • /
    • pp.195-207
    • /
    • 2017
  • In recent years, data have been actively exploited in various fields. Hence, there is a strong demand for sharing and publishing data. However, sensitive information about people can breach an individual's privacy. To publish data while protecting individual privacy with minimal information distortion, privacy-preserving data publishing (PPDP) has been explored. PPDP assumes various attacker models and has been developed according to privacy models, which are principles for protecting against privacy-breaching attacks. In this paper, we first present the concept of privacy-breaching attacks. Subsequently, we classify the privacy models according to the privacy-breaching attacks. We further clarify the differences and requirements of each privacy model.
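
The most basic of the surveyed models, k-anonymity, requires every combination of quasi-identifier values to be shared by at least k records. A minimal check can be sketched as follows, with made-up records:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Return True if every quasi-identifier combination occurs in at
    least k records (the k-anonymity condition on a published table)."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

data = [{"zip": "130", "age": "20s", "disease": "flu"},
        {"zip": "130", "age": "20s", "disease": "cold"},
        {"zip": "131", "age": "30s", "disease": "flu"}]

print(is_k_anonymous(data, ["zip", "age"], 2))  # the third record is unique
```

Stronger models in the survey tighten what k-anonymity leaves open: l-diversity constrains the sensitive values within each group, and t-closeness constrains their distribution.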

A Study on Privacy Issues and Solutions of Public Data in Education

  • Jun, Woochun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.137-143
    • /
    • 2020
  • With the development of information and communication technology, various kinds of data have appeared and are being distributed. The use of these data has contributed to the enrichment and convenience of our lives. Data in the public sector are also growing in volume and being actively used, and public data in the field of education are likewise used in various ways. As the distribution and use of public data have increased, both advantages and disadvantages have started to emerge. Among the various disadvantages, the privacy problem is a representative one. In this study, we deal with the privacy issues of public data in education: we introduce the privacy issues of public data in the education field and suggest various solutions. These solutions include the expansion of privacy education opportunities, the need for a new privacy protection model, the provision of privacy protection training for teachers and administrators, and the development of a real-time privacy infringement diagnosis tool.

Privacy-Preservation Using Group Signature for Incentive Mechanisms in Mobile Crowd Sensing

  • Kim, Mihui;Park, Younghee;Dighe, Pankaj Balasaheb
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1036-1054
    • /
    • 2019
  • Recently, concomitant with a surge in the number of Internet of Things (IoT) devices with various sensors, mobile crowdsensing (MCS) has provided a new business model for IoT. For example, a person can share road traffic pictures taken with their smartphone via a cloud computing system, and the MCS data can provide benefits to other consumers. In this service model, providing appropriate incentives is very important to encourage people to actively engage in sensing activities and voluntarily share their sensing data. However, the sensing data from personal devices can be privacy-sensitive, and thus privacy issues can suppress data sharing. Therefore, the development of an appropriate privacy protection system is essential for successful MCS. In this study, we address the conflicting objectives of privacy preservation and incentive payment. We propose a privacy-preserving mechanism that protects the identity and location privacy of sensing users through on-demand incentive payment and group signature methods. Subsequently, we apply the proposed mechanism to one example of MCS, an intelligent parking system, and demonstrate the feasibility and efficiency of our mechanism through emulation.

A Study of Split Learning Model to Protect Privacy (프라이버시 침해에 대응하는 분할 학습 모델 연구)

  • Ryu, Jihyeon;Won, Dongho;Lee, Youngsook
    • Convergence Security Journal
    • /
    • v.21 no.3
    • /
    • pp.49-56
    • /
    • 2021
  • Recently, artificial intelligence has come to be regarded as an essential technology in our society. In particular, privacy invasion in artificial intelligence has become a serious problem in modern society. Split learning, proposed at MIT in 2019 for privacy protection, is a type of federated learning technique in which no raw data are shared. In this study, we investigate a safe and accurate split learning model that uses known differential privacy techniques to manage data safely. In addition, we trained SVHN and GTSRB on split learning models to which 15 different types of differential privacy were applied and checked whether the learning remains stable. By conducting a training data extraction attack, we quantitatively derive, via MSE, the differential privacy budget that prevents the attack.
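
The split learning setup can be sketched as a network cut into a client segment and a server segment: only the intermediate activations cross the cut, and noise can be added there for differential privacy. The layer sizes, weights, and noise scale below are illustrative, not the paper's architecture or budget:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client holds the raw data and the first model segment; the server holds
# the remaining layers. Only "smashed" activations cross the cut layer.
W_client = rng.normal(size=(8, 4))
W_server = rng.normal(size=(4, 2))

def client_forward(x):
    return np.maximum(x @ W_client, 0.0)   # client-side layers (ReLU)

def server_forward(h):
    return h @ W_server                    # server-side layers

x = rng.normal(size=(1, 8))                # raw input never leaves the client
smashed = client_forward(x)
# Adding calibrated noise to the smashed data before transmission is one way
# to combine split learning with differential privacy.
smashed_dp = smashed + rng.normal(0.0, 0.1, size=smashed.shape)
logits = server_forward(smashed_dp)
```

Stronger noise improves resistance to training data extraction from the transmitted activations but degrades accuracy, which is the budget trade-off the paper measures via MSE.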