• Title/Summary/Keyword: Similar Document Detection


Fast, Flexible Text Search Using Genomic Short-Read Mapping Model

  • Kim, Sung-Hwan; Cho, Hwan-Gue
    • ETRI Journal / v.38 no.3 / pp.518-528 / 2016
  • Searching an extensive document database for documents that are locally similar to a given query document, and subsequently detecting the similar regions between such documents, is an essential task in the fields of information retrieval and data management. In this paper, we present a framework for this task. The proposed framework employs short-read mapping, a method used in bioinformatics to reveal similarities between genomic sequences. Documents are treated as biological objects; consequently, edit operations between locally similar documents are viewed as an evolutionary process. Accordingly, we can apply evolution tracing to detect similar regions between documents. In addition, we propose heuristic methods to address issues arising at the different stages of the framework, for example, a frequency-based fragment ordering method and a locality-aware interval aggregation method. Extensive experiments covering various search scenarios show that the proposed framework outperforms existing methods.
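The short-read analogy in the abstract can be illustrated with a small sketch: the query is cut into fixed-length fragments ("reads"), each fragment is looked up in an inverted index of document substrings, and hits on the same document are merged into candidate similar regions (a stand-in for the paper's locality-aware interval aggregation). All names and parameters here are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

K = 8  # fragment ("read") length, an assumed tuning parameter

def build_index(docs):
    """Map every length-K substring of every document to its positions."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for i in range(len(text) - K + 1):
            index[text[i:i + K]].append((doc_id, i))
    return index

def search(query, index, max_gap=16):
    """Map query fragments onto documents and merge nearby hits into intervals."""
    hits = defaultdict(list)
    for q in range(0, len(query) - K + 1, K):      # non-overlapping reads
        for doc_id, pos in index.get(query[q:q + K], ()):
            hits[doc_id].append(pos)
    regions = {}
    for doc_id, positions in hits.items():
        positions.sort()
        merged = [[positions[0], positions[0] + K]]
        for p in positions[1:]:                    # aggregate nearby hits
            if p - merged[-1][1] <= max_gap:
                merged[-1][1] = p + K
            else:
                merged.append([p, p + K])
        regions[doc_id] = [tuple(m) for m in merged]
    return regions
```

A query that shares a local passage with a stored document then surfaces that document together with the matching interval.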

A Study on Plagiarism Detection and Document Classification Using Association Analysis (연관분석을 이용한 효과적인 표절검사 및 문서분류에 관한 연구)

  • Hwang, Insoo
    • The Journal of Information Systems / v.23 no.3 / pp.127-142 / 2014
  • Plagiarism occurs when content is copied without permission or citation, and the problem has grown rapidly in the digital era because of the resources available on the World Wide Web. An important task in plagiarism detection is measuring and identifying similar text portions between a given pair of documents. One of the main difficulties is that not all similar text fragments are examples of plagiarism, since thematic coincidences also tend to produce portions of similar text. To handle this problem, this paper proposes applying association analysis from data mining to detect plagiarism. The method is able to detect common actions performed by plagiarists, such as word deletion, insertion, and transposition, allowing plausible portions of plagiarized text to be identified. Experimental results employing an unsupervised document classification strategy show that the proposed method outperforms traditionally used approaches.
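A minimal, order-insensitive sketch in the spirit of this idea: sentences are treated as transactions of words, and a sentence pair whose word sets co-occur strongly is flagged as a plausible plagiarized portion. Because matching is on word sets, word transposition, and to a degree insertion and deletion, do not defeat it. The threshold and the Jaccard-style support measure are illustrative assumptions, not the paper's exact mining procedure.

```python
def word_sets(text):
    """Split a document into sentences and each sentence into a word set."""
    return [set(s.split()) for s in text.lower().split(".") if s.strip()]

def suspicious_pairs(doc_a, doc_b, min_support=0.6):
    """Return (i, j, support) for sentence pairs whose word-set overlap
    exceeds min_support."""
    pairs = []
    for i, a in enumerate(word_sets(doc_a)):
        for j, b in enumerate(word_sets(doc_b)):
            support = len(a & b) / max(len(a | b), 1)   # Jaccard-style support
            if support >= min_support:
                pairs.append((i, j, round(support, 2)))
    return pairs
```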

Discriminator of Similar Documents Using Syntactic and Semantic Analysis (구문의미분석를 이용한 유사문서 판별기)

  • Kang, Won-Seog; Hwang, Do-Sam; Kim, Jung H.
    • The Journal of the Korea Contents Association / v.14 no.3 / pp.40-51 / 2014
  • Owing to the importance of document copyright, the need to detect document duplication and plagiarism is increasing. Many studies have sought to meet this need, but document duplication detection remains difficult because of technological limitations in natural language processing. This paper designs and implements a discriminator of similar documents based on natural language processing techniques. The system discriminates similar documents using morphological analysis, syntactic analysis, and weighting of low-frequency terms and idioms. To evaluate the system, we analyze the correlation between human discrimination and term-based discrimination, and between human discrimination and the proposed discrimination. The analysis shows that the proposed discrimination still needs improvement. Future research should define document types and improve the processing techniques appropriate for each type.
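The weighting idea can be sketched as a term similarity in which rarer terms contribute more to the score, mirroring the abstract's emphasis on low-frequency terms. The weighting formula and the `corpus_freq` input are assumptions for illustration; the paper's actual weights also cover idioms and rest on full morphological and syntactic analysis.

```python
import math

def weighted_sim(doc_a, doc_b, corpus_freq):
    """Similarity of two documents where rare terms weigh more.
    corpus_freq: assumed mapping of term -> corpus frequency."""
    a, b = set(doc_a.lower().split()), set(doc_b.lower().split())
    def w(term):
        # rare term (low frequency) -> large weight; smooth with +e
        return 1.0 / math.log(corpus_freq.get(term, 1) + math.e)
    shared = sum(w(t) for t in a & b)
    total = sum(w(t) for t in a | b)
    return shared / total if total else 1.0
```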

Discriminator of Similar Documents Using the Syntactic-Semantic Tree Comparator (구문의미트리 비교기를 이용한 유사문서 판별기)

  • Kang, Won-Seog
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.636-646 / 2015
  • In the information society, the need to detect document duplication and plagiarism is increasing. Many studies have addressed this need, but limitations in natural language processing have capped the quality of document duplication detection. Recently, some studies have tried to raise the quality by applying syntactic-semantic analysis techniques, but they have difficulty comparing syntactic-semantic trees. This paper develops a syntactic-semantic tree comparator, and designs and implements a discriminator of similar documents using the comparator. To evaluate the system, we analyze the correlation between human discrimination and system discrimination with the comparator. The analysis shows that the proposed discrimination performs well. Future work should define document types and improve the processing techniques appropriate for each type.
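A hedged sketch of what a tree comparator could look like: trees are `(label, children)` tuples, and similarity combines the label match at the root with the best greedy pairwise similarity of the children. This is an illustrative stand-in, not the paper's actual comparator.

```python
def tree_sim(t1, t2):
    """Similarity in [0, 1] between two (label, children) trees."""
    label1, kids1 = t1
    label2, kids2 = t2
    root = 1.0 if label1 == label2 else 0.0
    if not kids1 or not kids2:
        return root
    # greedily match each child of t1 to its most similar child of t2
    child = sum(max(tree_sim(a, b) for b in kids2) for a in kids1) / len(kids1)
    return 0.5 * root + 0.5 * child
```

Two identical parse trees score 1.0, while structurally unrelated trees score near 0.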

Bit Signature Method for Plagiarism Detection (표절 탐지를 위한 비트 시그니처 기법)

  • Kim, Woosaeng; Kang, Kyucheol
    • Journal of Information Technology Applications and Management / v.24 no.1 / pp.1-10 / 2017
  • Recently, plagiarism has emerged as a major social issue because not only literary works but also theses have become targets of plagiarism. The government even requires verification of high-ranking officials' theses for plagiarism as a measure of their ethical standards. Plagiarism is not just direct copying but also paraphrasing, rewording, adapting parts, omitting references, or citing incorrectly, which makes the problem more difficult to handle adequately. We propose a plagiarism detection scheme called a bit signature, in which each unique word of a document is represented by 0 or 1. The bit signature scheme finds similar documents by comparing their absolute and relative bit signatures. Experiments show that the bit signature scheme produces better performance for document copy detection than existing similar schemes.
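An illustrative bit-signature sketch: each vocabulary word maps to a bit position, a document's signature sets the bits of the words it contains, and two signatures are compared by the overlap of set bits. The hash mapping and the Jaccard-style comparison are assumptions; the paper's exact absolute/relative signature scheme may differ. (Note that Python's `hash()` is randomized per process, so a real system would use a stable hash.)

```python
def signature(words, bits=64):
    """Set one bit per unique word; collisions are possible by design."""
    sig = 0
    for w in set(words):
        sig |= 1 << (hash(w) % bits)
    return sig

def similarity(sig_a, sig_b):
    """Relative signature similarity: shared set bits over union of set bits."""
    union = bin(sig_a | sig_b).count("1")
    return bin(sig_a & sig_b).count("1") / union if union else 1.0
```

Comparing compact integer signatures instead of full word lists is what makes the scheme cheap over a large document collection.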

Local Similarity based Document Layout Analysis using Improved ARLSA

  • Kim, Gwangbok; Kim, SooHyung; Na, InSeop
    • International Journal of Contents / v.11 no.2 / pp.15-19 / 2015
  • In this paper, we propose an efficient document layout analysis algorithm that includes table detection. Typical document layout analysis methods use the height of, and gap between, words or columns. To cope with the various styles and sizes of documents, we propose an algorithm that uses the mean value of the distance transform, which represents thickness, and compares it with components in the local area. We combine this with a table detection algorithm that uses the same features as the text classifier. Table candidates, separators, and large components are isolated from the image using connected component analysis (CCA) and the distance transform. The key idea of the text classification is that text consists of parallel components with similar thickness and height. To estimate local similarity, we detect text regions using an adaptively sized search window. An improved adaptive run-length smoothing algorithm (ARLSA) is proposed to create proper boundaries between text and non-text zones. Results from experiments on the ICDAR2009 page segmentation competition test set and our own dataset demonstrate the superiority of our algorithm in f-measure comparisons with other algorithms.
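The non-adaptive core of run-length smoothing, on which ARLSA builds, can be shown in a few lines: white runs shorter than a threshold between two black pixels are filled, merging characters on a line into one text block. The paper's improvement makes the threshold adaptive to component thickness and height; here the threshold is a fixed, illustrative parameter.

```python
def rlsa_row(row, threshold=3):
    """row: list of 0 (white) / 1 (black) pixels; returns the smoothed row
    with short white gaps between black pixels filled."""
    out = row[:]
    run_start = None            # start of the current white run, if any
    for i, px in enumerate(row):
        if px == 1:
            if run_start is not None and i - run_start <= threshold:
                for j in range(run_start, i):   # fill the short white gap
                    out[j] = 1
            run_start = i + 1   # a new white run may begin after this pixel
    return out
```

Applied row by row (and column by column for the vertical pass), this turns isolated glyphs into solid zones whose boundaries the layout analyzer can then classify.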

Skew Correction of Document Images using Edge (에지를 이용한 문서영상의 기울기 보정)

  • Ju, Jae-Hyon; Oh, Jeong-Su
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.7 / pp.1487-1494 / 2012
  • This paper proposes an algorithm that uses edges to detect and correct the skew of degraded as well as clean document images. The proposed algorithm detects edges in a character region selected by image complexity and generates projection histograms by projecting the edges in various directions. It then detects the document skew by estimating the edge concentration in the histograms and corrects the skewed document image. For fast skew detection, the algorithm uses downsampling and a three-step coarse-to-fine search. On both clean and degraded images, the maximum and average detection errors of the proposed algorithm are about 50% of those of a comparable conventional algorithm, and the processing time is reduced to about 25%. On non-uniformly illuminated images acquired by a mobile device, the conventional algorithm cannot detect skew because it cannot obtain valid binary images, while the proposed algorithm detects it with an average error of 0.1° or less.
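Projection-based skew detection can be sketched as follows: edge points are projected onto the vertical axis at each candidate angle, and the angle whose projection histogram is most concentrated is taken as the skew. The sum-of-squares concentration measure is a common choice assumed here; the paper's downsampling and three-step coarse-to-fine search are omitted for brevity.

```python
import math

def skew_angle(points, angles):
    """points: (x, y) edge pixels; angles: candidate skews in degrees.
    Returns the candidate whose projection histogram is most concentrated."""
    best_angle, best_score = None, -1
    for a in angles:
        rad = math.radians(a)
        rows = {}
        for x, y in points:
            # project the point onto the y-axis after rotating by -a
            r = round(-x * math.sin(rad) + y * math.cos(rad))
            rows[r] = rows.get(r, 0) + 1
        score = sum(c * c for c in rows.values())  # peaky histogram -> high score
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```

At the true skew angle, edge pixels from a text line collapse into a few histogram bins, which maximizes the score.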

An effective detection method for hiding data in compound-document files (복합문서 파일에 은닉된 데이터 탐지 기법에 대한 연구)

  • Kim, EunKwang; Jeon, SangJun; Han, JaeHyeok; Lee, MinWook; Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.6 / pp.1485-1494 / 2015
  • Traditionally, data hiding has mainly been done by inserting data into large-capacity multimedia files. However, document files from versions of Microsoft Office up to 2003 have also been used as cover files, because their structure is so similar to a file system that it is easy to hide data in them. If a compound-document file with a hidden secret message is opened in an MS Office application, users who do not already suspect the file have little chance of detecting the message. This paper presents an analysis of the Compound File Binary Format features exploited to hide data, along with algorithms to detect data hidden through these exploits. Studying the methods used to hide data in unused areas, unallocated areas, reserved areas, and inserted streams led us to develop an algorithm that aids the detection and examination of hidden data.
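A deliberately simplified sketch of the detection idea: given the raw bytes of a file and the byte ranges the format actually allocates, any non-zero bytes outside those ranges are suspicious slack or unallocated data. Real CFB parsing (FAT chains, mini streams, directory entries) is far more involved and is not shown; the `allocated` input stands in for the result of that parsing.

```python
def find_hidden_ranges(data, allocated):
    """data: raw file bytes; allocated: list of (offset, length) ranges the
    format legitimately uses. Returns suspicious (offset, length) ranges of
    non-zero bytes outside all allocated ranges."""
    used = bytearray(len(data))
    for off, length in allocated:
        for i in range(off, min(off + length, len(data))):
            used[i] = 1
    hidden, start = [], None
    for i, b in enumerate(data):
        if b != 0 and not used[i]:
            if start is None:
                start = i           # begin a suspicious run
        elif start is not None:
            hidden.append((start, i - start))
            start = None
    if start is not None:
        hidden.append((start, len(data) - start))
    return hidden
```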

Secure Multiparty Computation of Principal Component Analysis (주성분 분석의 안전한 다자간 계산)

  • Kim, Sang-Pil; Lee, Sanghun; Gil, Myeong-Seon; Moon, Yang-Sae; Won, Hee-Sun
    • Journal of KIISE / v.42 no.7 / pp.919-928 / 2015
  • In recent years, many research efforts have been devoted to privacy-preserving data mining (PPDM) on large volumes of data. In this paper, we propose a PPDM solution based on principal component analysis (PCA), which can be widely used for computing correlations among sensitive data sets. The usual way of computing PCA is to collect all the data spread across multiple nodes into a single node before starting the computation; however, this approach discloses the sensitive data of individual nodes, involves a large amount of computation, and incurs large communication overheads. To solve these problems, we present an efficient method that securely computes PCA without collecting all the data. The proposed method shares only limited information among individual nodes, yet obtains the same result as the original PCA. In addition, we present a dimensionality reduction technique for the proposed method and use it to improve the performance of secure similar-document detection. Finally, through various experiments, we show that the proposed method works effectively and efficiently on large volumes of multi-dimensional data.
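The share-only-aggregates idea can be sketched as follows: each node reveals only its local Gram matrix, column sums, and row count; the coordinator reconstructs the global covariance from these aggregates and runs PCA on it, never seeing a raw data row, yet the result matches pooled PCA exactly. The paper's secure protocol additionally protects the aggregates themselves; that cryptographic layer is omitted here, and the function names are illustrative.

```python
import numpy as np

def local_aggregates(X):
    """What a node shares: Gram matrix, column sums, and row count only."""
    return X.T @ X, X.sum(axis=0), len(X)

def global_pca(aggregates):
    """Combine per-node aggregates into the exact pooled PCA."""
    gram = sum(a[0] for a in aggregates)
    col_sum = sum(a[1] for a in aggregates)
    n = sum(a[2] for a in aggregates)
    mean = col_sum / n
    # pooled sample covariance: (sum x x^T - n * mean mean^T) / (n - 1)
    cov = (gram - n * np.outer(mean, mean)) / (n - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # descending variance
    return eigvals[order], eigvecs[:, order]
```

Because only d-by-d aggregates travel over the network, communication cost depends on the dimensionality d rather than the number of rows.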

Seal Detection in Scanned Documents (스캔된 문서에서의 도장 검출)

  • Yu, Kyeonah; Kim, Kyung-Hye
    • Journal of the Korea Society of Computer and Information / v.18 no.12 / pp.65-73 / 2013
  • With the advent of the digital age, documents are often scanned to be archived or transmitted over networks. Text makes up the largest proportion of document content, followed by seal images indicating the author of the document. While much research has been conducted on recognizing text in scanned documents, and commercial text-recognition products have been developed as the importance of scanned documents has grown, the information in seal images is discarded. In this paper, we study how to extract the seal image area from color or black-and-white documents containing seal images, and how to save the seal image. We propose a preprocessing step that removes all components except candidate outlines of the seal imprint from the scanned document, and a method that selects the final region of interest from these candidates using the features of seal images. When a seal imprint overlaps with text, the most similar image among those stored in the database is selected through template matching. We verify the implemented system on various types of documents produced in schools and analyze the results.
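The template-matching step for the overlapped-seal case reduces to a nearest-template search, which a minimal sketch can show: the extracted seal region is compared against every stored template and the one with the smallest pixel-wise difference wins. Real systems would normalize scale and rotation and use correlation rather than raw pixel differences; this version is illustrative only.

```python
def match_seal(region, templates):
    """region: 2-D list of 0/1 pixels; templates: list of (name, image)
    pairs of the same size. Returns the name of the closest template."""
    def diff(a, b):
        # count mismatched pixels between two equal-sized binary images
        return sum(p != q for ra, rb in zip(a, b) for p, q in zip(ra, rb))
    return min(templates, key=lambda t: diff(region, t[1]))[0]
```

In the paper's setting, the `templates` database would hold the clean seal imprints registered in advance, and `region` the imprint extracted from the scanned page.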