Title/Summary/Keyword: deduplication

73 search results

Design of Adaptive Deduplication Algorithm Based on File Type and Size

  • Hwang, In-Cheol; Kwon, Oh-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.149-157 / 2020
  • Today, the growth of user data has produced large amounts of duplicated data, and various deduplication studies have been conducted. Research on personal storage, however, remains relatively scarce. Unlike high-performance computers, personal storage must perform deduplication while keeping CPU and memory usage low. In this paper, we propose an adaptive algorithm that selectively applies fixed size chunking (FSC) and whole file chunking (WFH) according to file type and size, in order to maintain the deduplication rate while minimizing load on personal storage. Experimental results show that the proposed file system is no more than 1.3 times slower on the first write, reduces memory usage by up to a factor of three compared to LessFS, and is 2.5 times faster on rewrites.
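A minimal sketch of the selection policy this abstract describes: whole-file hashing for files unlikely to share partial content, fixed-size chunking otherwise. The extension list, the 1 MiB threshold, and the 4 KB chunk size are illustrative assumptions, not the paper's tuned values.

```python
import hashlib
import os

# Assumed policy: already-compressed media files rarely share partial
# content, so hash them whole (WFH); other files get fixed-size chunking (FSC).
MEDIA_EXTENSIONS = {".jpg", ".png", ".mp3", ".mp4", ".zip"}  # assumption
SIZE_THRESHOLD = 1 << 20   # 1 MiB cutoff (assumption)
CHUNK_SIZE = 4096          # typical FSC chunk size

def chunk_file(path):
    """Yield (fingerprint, data) pairs using a strategy picked per file."""
    ext = os.path.splitext(path)[1].lower()
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        if ext in MEDIA_EXTENSIONS or size < SIZE_THRESHOLD:
            data = f.read()                      # whole-file chunking
            yield hashlib.sha1(data).hexdigest(), data
        else:
            while True:                          # fixed-size chunking
                data = f.read(CHUNK_SIZE)
                if not data:
                    break
                yield hashlib.sha1(data).hexdigest(), data

def deduplicate(paths):
    """Store each unique chunk once; return bytes before and after dedup."""
    store, raw = {}, 0
    for path in paths:
        for fp, data in chunk_file(path):
            raw += len(data)
            store.setdefault(fp, data)
    return raw, sum(len(d) for d in store.values())
```

Routing already-compressed media to whole-file hashing avoids paying per-chunk index overhead for data that rarely deduplicates partially, which is where the memory savings over a pure-FSC system like LessFS would come from.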

Improving Efficiency of Encrypted Data Deduplication with SGX

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems / v.11 no.8 / pp.259-268 / 2022
  • As cloud services are widely used to cope with the explosive increase in data volume, various cryptographic techniques are applied to preserve data privacy. Despite the vast computing resources of cloud systems, the loss of storage efficiency caused by redundant data outsourced from multiple users significantly reduces service efficiency. Among several approaches to privacy-preserving deduplication over encrypted data, this paper analyzes recent USENIX ATC results that improve the efficiency of encrypted data deduplication using a trusted execution environment (TEE), in terms of the security and efficiency of the participating entities. We present a way to improve the robustness of key management by integrating the key-managing server with individual clients, achieving secure deduplication without independent key servers. Experimental results show that the communication efficiency of the proposed approach improves by about 30%, with the effect of a distributed key server, while providing security guarantees at the same level as the previous work.
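The paper's SGX-specific machinery is not reproduced here; the sketch below shows only the convergent-encryption baseline that makes deduplication over ciphertexts possible at all: keys derived from content make equal plaintexts encrypt to equal ciphertexts. It assumes the `cryptography` package, and the deterministic nonce derivation is an illustrative choice.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(plaintext: bytes):
    """Message-locked encryption: the key is derived from the content,
    so identical plaintexts produce identical ciphertexts and the server
    can deduplicate data it cannot read."""
    key = hashlib.sha256(plaintext).digest()                  # content-derived key
    nonce = hashlib.sha256(b"ctr" + key).digest()[:16]        # deterministic nonce
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize(), key

class DedupStore:
    """Toy server-side index that keeps each distinct ciphertext once."""
    def __init__(self):
        self.blobs = {}

    def upload(self, ciphertext: bytes):
        tag = hashlib.sha256(ciphertext).hexdigest()
        is_duplicate = tag in self.blobs
        self.blobs.setdefault(tag, ciphertext)   # duplicates cost no space
        return tag, is_duplicate
```

Convergent encryption alone is vulnerable to offline brute-force guessing of predictable plaintexts; mitigating that is exactly the job of key servers, and of the TEE-backed key management the paper analyzes.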

Request Deduplication Scheme in Cache-Enabled 5G Network Using PON

  • Jung, Bokrae
    • Journal of Information and Communication Convergence Engineering / v.18 no.2 / pp.100-105 / 2020
  • With the advent of the 5G era, rapid growth in demand for mobile content services has increased the need for additional backhaul investment. To meet this demand, deploying a content delivery network (CDN) and an optical access solution near the last mile has become essential in configuring 5G networks. This paper presents a cache-enabled architecture that uses a passive optical network (PON) to serve video on demand (VoD). For efficient use of the mobile backhaul, I propose a request deduplication scheme (RDS) that serves all requests missed in the cache with minimum bandwidth by eliminating duplicate movie requests within a tolerable quality-of-service (QoS) range. The performance of the proposed architecture is compared with and without RDS in terms of the number of requests arriving at the origin server (OS), hit ratio, and improvement ratio as user requests and cache sizes vary.
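A toy version of the request-coalescing idea: duplicate requests for the same movie arriving within a tolerable delay window ride along with the one request already forwarded upstream. The one-second window and the single-cache model are assumptions.

```python
import time

class RequestDeduplicator:
    """Coalesce duplicate cache-miss requests for the same movie that
    arrive within a tolerable QoS window; only one goes to the origin."""
    def __init__(self, window_sec: float = 1.0):   # assumed QoS tolerance
        self.window = window_sec
        self.pending = {}            # movie_id -> (forward_time, [user_ids])

    def request(self, movie_id: str, user_id: str, now: float = None) -> bool:
        """Return True if this request must be forwarded to the origin server."""
        now = time.monotonic() if now is None else now
        entry = self.pending.get(movie_id)
        if entry and now - entry[0] <= self.window:
            entry[1].append(user_id)   # piggyback on the in-flight request
            return False
        # New or expired entry: forward upstream and start a fresh window.
        # (A real cache would also evict entries once the response is served.)
        self.pending[movie_id] = (now, [user_id])
        return True
```

Every `False` returned here is one request that never reaches the origin server, which is the quantity the paper measures against hit ratio and cache size.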

Provably-Secure Public Auditing with Deduplication

  • Kim, Dongmin; Jeong, Ik Rae
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2219-2236 / 2017
  • With cloud storage services, users can handle enormous amounts of data efficiently. However, with the widespread adoption of cloud storage, users have raised concerns about the integrity of outsourced data, since they no longer possess it locally. To address these concerns, many auditing schemes have been proposed that let users check the integrity of outsourced data without retrieving it in full. Yuan and Yu proposed a public auditing scheme with a deduplication property, in which the cloud server does not store data duplicated between users. In this paper, we analyze weaknesses of Yuan and Yu's scheme and present modifications that improve its security. We also define two types of adversaries and prove that our proposed scheme is secure against them under formal security models.
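Yuan and Yu's construction relies on homomorphic authenticators, which are beyond a short sketch; the toy below shows only the general challenge-response shape of remote integrity auditing, using precomputed hash challenges. Unlike a real public auditing scheme, it supports only a bounded number of audits, and only by the party who precomputed them.

```python
import hashlib
import os
import random

def make_challenges(blocks, n_audits=10):
    """Owner-side setup: precompute challenge/response pairs so later
    audits need only a block index, a nonce, and a 32-byte digest."""
    challenges = []
    for _ in range(n_audits):
        i = random.randrange(len(blocks))
        nonce = os.urandom(16)
        expected = hashlib.sha256(nonce + blocks[i]).digest()
        challenges.append((i, nonce, expected))
    return challenges

def server_respond(blocks, index, nonce):
    """Cloud side: prove possession of block `index` under `nonce`."""
    return hashlib.sha256(nonce + blocks[index]).digest()

def audit(challenges, respond):
    """Auditor side: spend one precomputed challenge per audit."""
    index, nonce, expected = challenges.pop()
    return respond(index, nonce) == expected

# Usage sketch:
#   blocks = [data[i:i + 4096] for i in range(0, len(data), 4096)]
#   challenges = make_challenges(blocks)
#   ok = audit(challenges, lambda i, n: server_respond(blocks, i, n))
```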

Design and Implementation of Multiple Filter Distributed Deduplication System Applying Cuckoo Filter Similarity

  • Kim, Yeong-A; Kim, Gea-Hee; Kim, Hyun-Ju; Kim, Chang-Geun
    • Journal of Convergence for Information Technology / v.10 no.10 / pp.1-8 / 2020
  • As data generated by business activities has become key to business success, the need for techniques to store, manage, and retrieve alternative data has grown. Existing big data platforms must ingest large volumes of real-time unstructured (alternative) data without delay, and must manage storage space efficiently by deduplicating redundant data across storages. In this paper, we propose a multi-layer distributed deduplication system that applies similarity based on the Cuckoo filter, considering the characteristics of big data. Similarity between virtual machines is captured with Cuckoo hashing, individual storage nodes improve performance through deduplication efficiency, and a multi-layer Cuckoo filter reduces processing time. Experimental results show that the proposed method shortens processing time by 8.9% and increases the deduplication rate by 10.3%.
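The paper's multi-layer similarity pipeline is not reproduced here; below is a minimal working cuckoo filter of the kind it builds on, answering approximate membership queries over chunk fingerprints and, unlike a Bloom filter, supporting deletion. Bucket count, bucket size, and kick limit are assumed parameters (the bucket count must be a power of two for the partial-key trick used here).

```python
import hashlib
import random

class CuckooFilter:
    """Minimal cuckoo filter for approximate set membership."""
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        self.num_buckets = num_buckets      # must be a power of two
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item: bytes) -> int:
        return int.from_bytes(hashlib.sha1(item).digest()[:2], "big") or 1

    def _index(self, item: bytes) -> int:
        return int.from_bytes(hashlib.md5(item).digest()[:4], "big") % self.num_buckets

    def _alt_index(self, index: int, fp: int) -> int:
        h = int.from_bytes(hashlib.md5(fp.to_bytes(2, "big")).digest()[:4], "big")
        return (index ^ h) % self.num_buckets   # partial-key cuckoo hashing

    def insert(self, item: bytes) -> bool:
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))             # both full: start evicting
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._alt_index(i, fp)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                            # filter is effectively full

    def contains(self, item: bytes) -> bool:
        fp = self._fingerprint(item)
        i1 = self._index(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._alt_index(i1, fp)]
```

A node can consult such a filter before querying a remote fingerprint index, which is where the reported reduction in processing time would come from.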

CORE-Dedup: IO Extent Chunking based Deduplication using Content-Preserving Access Locality

  • Kim, Myung-Sik; Won, You-Jip
    • Journal of the Korea Society of Computer and Information / v.20 no.6 / pp.59-76 / 2015
  • The recent spread of embedded devices and advances in broadband communication have led to a rapid increase in the volume of data created and managed. As a result, data centers must increase storage capacity cost-effectively to store this data. Data deduplication saves storage space by removing redundant data. This work proposes an IO-extent-based deduplication scheme, CORE-Dedup, that exploits content-preserving access locality. We acquire IO traces at the block device layer of a virtual machine host and compare the deduplication performance of fixed-size and IO-extent-based chunking. For a workload of ten users compiling in a virtual machine environment, 4 KB fixed-size chunking and IO-extent-based chunking require 14,500 and 1,700 chunk index entries, respectively, with deduplication rates of 60.4% and 57.6%.
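A small harness for the comparison the abstract reports: given the same IO trace, count index entries and the deduplication rate under fixed-size versus extent-based chunking. The trace representation, a list of (address, payload) pairs, is an assumption.

```python
import hashlib

def dedup_stats(chunks):
    """Return (index_entries, dedup_rate) for a sequence of byte chunks."""
    seen, total, dup = set(), 0, 0
    for c in chunks:
        total += len(c)
        fp = hashlib.sha1(c).hexdigest()
        if fp in seen:
            dup += len(c)
        else:
            seen.add(fp)
    return len(seen), (dup / total if total else 0.0)

def fixed_chunks(trace, size=4096):
    """Split each IO request into fixed 4 KB chunks."""
    for _, data in trace:
        for off in range(0, len(data), size):
            yield data[off:off + size]

def extent_chunks(trace):
    """Treat each IO extent (one request) as a single chunk, exploiting
    content-preserving access locality: the same extent tends to recur."""
    for _, data in trace:
        yield data

# trace is assumed to be a list of (logical_block_address, payload) pairs;
# compare dedup_stats(fixed_chunks(trace)) with dedup_stats(extent_chunks(trace)).
```

The paper's headline trade-off shows up directly in these two numbers: extent chunking shrinks the index by roughly 8x at the cost of a few points of deduplication rate.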

Design and Implementation of Inline Data Deduplication in Cluster File System

  • Kim, Youngchul; Kim, Cheiyol; Lee, Sangmin; Kim, Youngkyun
    • KIISE Transactions on Computing Practices / v.22 no.8 / pp.369-374 / 2016
  • The growing demand for virtual computing and storage resources in cloud environments has led to deduplication in storage systems for effective reduction and utilization of storage space. In particular, large savings are possible in virtual desktop infrastructure by not storing virtual desktop images with identical content more than once. To support virtual desktop services reliably, however, the storage system must handle the workloads virtual desktops generate, such as the performance overhead of deduplication, periodic data I/O storms, and frequent random I/O. In this paper, we design and implement a cluster file system that supports virtual desktop and storage services in a cloud computing environment. The proposed file system keeps storage consumption low through inline deduplication of virtual desktop images, and it reduces performance overhead by running the deduplication process on the data server rather than on the virtual hosts where the desktops run.
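A compressed view of the write path the abstract describes: the fingerprint index is checked inline, before data reaches disk, and the work happens on the data server rather than the virtual hosts. Reference counting, crash consistency, and cluster distribution are omitted.

```python
import hashlib

class InlineDedupServer:
    """Data-server-side inline deduplication: duplicates are detected on
    the write path, so redundant blocks are never written to disk at all
    (as opposed to offline dedup, which cleans up after the fact)."""
    def __init__(self):
        self.index = {}      # fingerprint -> block address
        self.disk = []       # stands in for the cluster's block store

    def write_block(self, data: bytes) -> int:
        fp = hashlib.sha256(data).hexdigest()
        addr = self.index.get(fp)
        if addr is not None:          # duplicate: just return a reference
            return addr
        self.disk.append(data)        # unique: store once, remember it
        addr = len(self.disk) - 1
        self.index[fp] = addr
        return addr
```

Because many desktop images share most of their blocks, repeated `write_block` calls for identical content collapse onto one stored copy, which is the source of the storage savings claimed above.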

Performance Analysis of Open Source Based Distributed Deduplication File System

  • Jung, Sung-Ouk; Choi, Hoon
    • KIISE Transactions on Computing Practices / v.20 no.12 / pp.623-631 / 2014
  • A comparison of two representative deduplication file systems, LessFS and SDFS, shows that LessFS is better in execution time and CPU utilization, while SDFS is better in storage usage (around 1/8 that of general file systems). In this paper, a new system is proposed that combines the advantages of SDFS and LessFS. The new system uses multiple DFEs and one DSE to maintain the integrity and consistency of the data. An evaluation comparing a single DFE with a dual DFE indicates that the dual DFE is better: it reduces CPU usage and provides fast deduplication. This suggests the proposed system can help address the growth of large-scale data storage and its power consumption.
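A sketch of the combined architecture as the abstract outlines it, with two deduplication front ends (DFEs) sharing a single storage engine (DSE). The class responsibilities are assumptions inferred from the abstract, not SDFS or LessFS internals.

```python
import hashlib

class DSE:
    """Single dedup storage engine: the one authoritative chunk store,
    which keeps the fingerprint index consistent across front ends."""
    def __init__(self):
        self.chunks = {}

    def put(self, fp: str, data: bytes):
        self.chunks.setdefault(fp, data)   # store each unique chunk once

class DFE:
    """Dedup front end: hashes incoming data and forwards it to the DSE.
    Running two DFEs splits the CPU-heavy hashing across processes."""
    def __init__(self, dse: DSE):
        self.dse = dse

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        self.dse.put(fp, data)
        return fp

dse = DSE()
dfe_a, dfe_b = DFE(dse), DFE(dse)   # dual DFE sharing one DSE
```

Keeping a single DSE is what preserves integrity and consistency here: however many front ends hash in parallel, there is exactly one index deciding whether a chunk is new.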

A Study of Method to Restore Deduplicated Files in Windows Server 2012

  • Son, Gwancheol; Han, Jaehyeok; Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.6 / pp.1373-1383 / 2017
  • Deduplication is a function for managing data effectively and improving storage-space efficiency. When deduplication is applied, the system divides stored files into chunks and stores only the unique chunks, using storage space efficiently. However, commercial digital forensic tools do not support analysis of such file systems, and the original files they extract cannot be executed or opened. In this paper, we therefore analyze how a Windows Server 2012 system with deduplication enabled generates chunks, and the structure of the resulting files (the chunk storage). We also analyze the case, not covered in previous work, in which chunks are compressed. Based on these results, we propose a method to collect deduplicated data and reconstruct the original files for digital forensic investigation.
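A schematic of the reconstruction step the paper targets: walk the file's chunk references in order, decompress where flagged, and concatenate. Both data structures (`stream_map`, `chunk_store`) are hypothetical stand-ins for the on-disk layout the paper reverse-engineers, and zlib stands in for the Xpress compression Windows actually uses.

```python
import zlib

def restore_file(stream_map, chunk_store):
    """Reassemble a deduplicated file for forensic recovery.

    stream_map:  ordered list of (chunk_id, is_compressed) entries
                 describing the original file (assumed structure).
    chunk_store: dict mapping chunk_id -> raw chunk bytes as found
                 in the Chunk Storage container (assumed structure).
    """
    out = bytearray()
    for chunk_id, is_compressed in stream_map:
        chunk = chunk_store[chunk_id]
        # zlib is a placeholder: Windows dedup actually uses Xpress.
        out += zlib.decompress(chunk) if is_compressed else chunk
    return bytes(out)
```

The forensic difficulty the paper addresses is recovering `stream_map` and `chunk_store` from the raw volume when the tooling does not parse them; once both are in hand, reconstruction is this simple concatenation.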

Improving the Lifetime of NAND Flash-based Storages by Min-hash Assisted Delta Compression Engine (MADE)

  • Kwon, Hyoukjun; Kim, Dohyun; Park, Jisung; Kim, Jihong
    • Journal of KIISE / v.42 no.9 / pp.1078-1089 / 2015
  • In this paper, we propose the Min-hash Assisted Delta-compression Engine (MADE) to improve the lifetime of NAND flash-based storage at the device level. MADE effectively reduces write traffic to NAND flash through a novel delta compression scheme. Delta compression performance is optimized by introducing min-hash-based locality-sensitive hashing (LSH) and combining it efficiently with our delta compression method. We also develop a delta encoding technique whose functionality is equivalent to deduplication plus lossless compression. Our experimental results show that MADE reduces the amount of data written to NAND flash by up to 90%, which is 12% better on average than a simple combination of deduplication and lossless compression schemes.
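A reduced sketch of the two MADE ingredients named above: a min-hash signature to find a similar reference page cheaply, and a delta encoder that stores only the differences against it. Shingle width and signature length are assumed parameters, and the byte-wise delta is deliberately naive (real delta encoders such as xdelta do far better).

```python
import hashlib

def minhash(data: bytes, shingle=8, num_hashes=16):
    """Min-hash signature over byte shingles: similar pages agree on
    many signature positions (locality-sensitive hashing)."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(
                hashlib.sha1(bytes([seed]) + data[i:i + shingle]).digest()[:8],
                "big")
            for i in range(0, max(len(data) - shingle, 0) + 1)
        ))
    return tuple(sig)

def signature_similarity(a, b):
    """Fraction of matching positions approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def delta_encode(base: bytes, new: bytes):
    """Store only the byte positions where `new` differs from `base`."""
    delta = [(i, b) for i, (a, b) in enumerate(zip(base, new)) if a != b]
    delta += [(i, new[i]) for i in range(len(base), len(new))]
    return delta
```

Writing the (usually tiny) delta instead of the full page is what cuts the flash write traffic; an identical page yields an empty delta, which is why the technique subsumes deduplication as a special case.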