Title/Summary/Keyword: Document Analysis

Forgery Detection Mechanism with Abnormal Structure Analysis on Office Open XML based MS-Word File

  • Lee, HanSeong; Lee, Hyung-Woo
    • International Journal of Advanced Smart Convergence, v.8 no.4, pp.47-57, 2019
  • We examine the weaknesses of the existing OOXML-based MS-Word file structure and analyze how data concealment and forgery are performed in MS-Word digital documents. When a document is forged by embedding hidden information, it opens in the MS-Word processor without any visible difference. However, the computer system may malfunction due to malware or shellcode hidden in the document. If a malicious image file or ZIP archive is concealed in the document by exploiting this structural vulnerability, the system may be infected by ransomware that encrypts every file on the disk even though the MS-Word file itself executes normally. It is therefore necessary to detect forgery and alteration of digital documents through internal structure analysis of the MS-Word file. In this paper, we design and implement a mechanism and automatic detection software that perform this analysis efficiently, and we present a method for proactively responding to attacks such as ransomware that exploit MS-Word security vulnerabilities.
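
A minimal sketch of the kind of structural check described above, using only Python's standard library: it walks the ZIP container of a .docx file and flags entries that fall outside the parts an ordinary OOXML document declares, or whose magic bytes suggest a hidden executable or nested archive. The whitelist of part prefixes and the magic-byte rules are illustrative assumptions, not the paper's actual detection mechanism.

    import zipfile

    # Parts a typical .docx container declares; anything else deserves a closer look.
    EXPECTED_PREFIXES = ("[Content_Types].xml", "_rels/", "word/",
                         "docProps/", "customXml/")

    def scan_docx(path):
        findings = []
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if not name.startswith(EXPECTED_PREFIXES):
                    findings.append((name, "unexpected part location"))
                    continue
                head = zf.read(name)[:4]
                if name.startswith("word/media/") and head[:2] == b"MZ":
                    findings.append((name, "PE executable disguised as media"))
                elif head == b"PK\x03\x04" and not name.endswith((".rels", ".xml")):
                    # Nested ZIPs can be legitimate embeddings, so this is only a hint.
                    findings.append((name, "nested ZIP archive"))
        return findings

    if __name__ == "__main__":
        for name, reason in scan_docx("sample.docx"):
            print(name, "->", reason)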

Deep Learning Document Analysis System Based on Keyword Frequency and Section Centrality Analysis

  • Lee, Jongwon; Wu, Guanchen; Jung, Hoekyung
    • Journal of Information and Communication Convergence Engineering, v.19 no.1, pp.48-53, 2021
  • Herein, we propose a document analysis system that analyzes papers or reports transformed into XML(Extensible Markup Language) format. It reads the document specified by the user, extracts keywords from the document, and compares the frequency of keywords to extract the top-three keywords. It maintains the order of the paragraphs containing the keywords and removes duplicated paragraphs. The frequency of the top-three keywords in the extracted paragraphs is re-verified, and the paragraphs are partitioned into 10 sections. Subsequently, the importance of the relevant areas is calculated and compared. By notifying the user of areas with the highest frequency and areas with higher importance than the average frequency, the user can read only the main content without reading all the contents. In addition, the number of paragraphs extracted through the deep learning model and the number of paragraphs in a section of high importance are predicted.
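
The frequency and sectioning pipeline can be approximated in a few lines. The sketch below assumes paragraphs already pulled out of the XML; the simple regex tokenizer and the function names are illustrative choices rather than the paper's implementation.

    import re
    from collections import Counter

    def top_keywords(paragraphs, k=3):
        # crude tokenizer: alphabetic or Hangul tokens of length >= 2
        words = re.findall(r"[A-Za-z\uac00-\ud7a3]{2,}",
                           " ".join(paragraphs).lower())
        return [w for w, _ in Counter(words).most_common(k)]

    def important_sections(paragraphs, keywords, n_sections=10):
        # keep paragraphs that mention a keyword, in order, without duplicates
        seen, kept = set(), []
        for p in paragraphs:
            if p not in seen and any(kw in p.lower() for kw in keywords):
                seen.add(p)
                kept.append(p)
        if not kept:
            return []
        size = max(1, len(kept) // n_sections)
        sections = [kept[i:i + size] for i in range(0, len(kept), size)]
        scores = [sum(p.lower().count(kw) for p in sec for kw in keywords)
                  for sec in sections]
        mean = sum(scores) / len(scores)
        return [i for i, s in enumerate(scores) if s > mean]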

Investigation on the Effect of Multi-Vector Document Embedding for Interdisciplinary Knowledge Representation

  • Park, Jongin; Kim, Namgyu
    • Knowledge Management Research, v.21 no.1, pp.99-116, 2020
  • Text is the most widely used means of exchanging or expressing knowledge and information in the real world. Recently, research on structuring unstructured text data for text analysis has been actively conducted. One of the most representative document embedding methods (i.e., Doc2Vec) generates a single vector for each document from all the words it contains. This causes a limitation: the document vector is affected not only by core words but also by miscellaneous ones. Additionally, traditional document embedding algorithms map each document to only one vector, so it is not easy to properly represent a complex document spanning interdisciplinary subjects with a single vector. In this paper, we introduce a multi-vector document embedding method to overcome these limitations. After reviewing the previous study on multi-vector document embedding, we visually analyze the method's effects. First, the new method vectorizes the document using only predefined keywords instead of all words. Second, it decomposes the various subjects included in the document and generates multiple vectors for each document. Experiments on about three thousand academic papers revealed that the traditional single-vector approach cannot properly map complex documents because of interference among subjects within each vector. With the multi-vector method, we ascertained that the information and knowledge in complex documents can be represented more accurately by eliminating this interference.
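
The core idea can be sketched with gensim's Doc2Vec by tagging each document once per subject facet, with each facet's token list filtered to predefined keywords. The subject lexicons, tag scheme, and toy corpus below are invented for illustration; the paper's actual keyword sets and training setup differ.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # assumed subject lexicons of predefined keywords (illustrative only)
    SUBJECT_LEXICONS = {
        "security": {"forgery", "malware", "vulnerability"},
        "nlp": {"keyword", "embedding", "corpus"},
    }

    def tagged_views(doc_id, tokens):
        # one TaggedDocument per subject facet, restricted to predefined keywords
        views = []
        for subject, lexicon in SUBJECT_LEXICONS.items():
            kept = [t for t in tokens if t in lexicon]
            if kept:
                views.append(TaggedDocument(kept, ["%s:%s" % (doc_id, subject)]))
        return views

    docs = {"d1": "keyword embedding corpus forgery malware detection".split()}
    corpus = [v for doc_id, toks in docs.items() for v in tagged_views(doc_id, toks)]
    model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=20)
    print(model.dv["d1:nlp"])        # one vector per subject facet of d1
    print(model.dv["d1:security"])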

Personalization of Document Warehouses: Formalization, Design and Implementation

  • Khrouf, Kais; Turki, Hela
    • International Journal of Computer Science & Network Security, v.22 no.10, pp.369-373, 2022
  • In the decision-making domain, a document warehouse is designed to meet the analysis needs of users who may have a wide variety of analysis purposes. In this paper, we propose to integrate user preferences and interactions, captured as profiles, into the concept of document warehouses. These profiles enable the integration of personalized documents and the collaborative recommendation of documents among users who share common interests.
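
As a toy illustration of the profile-driven recommendation the abstract describes, the sketch below matches users by the overlap of their declared interests (Jaccard similarity) and suggests documents read by like-minded users. The profile structure and the similarity threshold are assumptions, not the paper's formalization.

    def jaccard(a, b):
        # similarity of two interest sets
        return len(a & b) / len(a | b) if a | b else 0.0

    profiles = {
        "alice": {"interests": {"xml", "warehouse"}, "read": {"doc1", "doc2"}},
        "bob":   {"interests": {"xml", "security"},  "read": {"doc3"}},
    }

    def recommend(user, threshold=0.25):
        me = profiles[user]
        suggested = set()
        for other, p in profiles.items():
            if other != user and jaccard(me["interests"], p["interests"]) >= threshold:
                suggested |= p["read"] - me["read"]   # docs I haven't read yet
        return suggested

    print(recommend("alice"))   # documents read by users with shared interests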

XML Document Keyword Weight Analysis based Paragraph Extraction Model

  • Lee, Jongwon; Kang, Inshik; Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.11, pp.2133-2138, 2017
  • Existing analysis of XML and other documents has been word-centered. It can be implemented with a morpheme analyzer, which classifies the many words in a document but cannot grasp its core content. For a user to understand a document efficiently, the paragraphs containing the main words must be extracted and presented. The proposed system searches for a keyword in the normalized XML document, extracts the paragraphs containing the keyword entered by the user, and displays them. It also reports the frequency and weight of the search keyword, preserves the order of the extracted paragraphs, and eliminates redundant ones, minimizing what the user must read. The proposed system can thus minimize the time and effort required to understand a document without reading it in full.
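
A minimal sketch of this flow, assuming a normalized XML document whose paragraphs sit in <p> elements; the element name and the frequency-based weight formula are illustrative assumptions rather than the paper's exact model.

    import xml.etree.ElementTree as ET

    def extract_paragraphs(xml_path, keyword):
        root = ET.parse(xml_path).getroot()
        paragraphs = ["".join(p.itertext()).strip() for p in root.iter("p")]
        kw = keyword.lower()
        total = sum(p.lower().count(kw) for p in paragraphs)
        if total == 0:
            return []
        seen, hits = set(), []
        for p in paragraphs:            # keep document order, drop duplicates
            freq = p.lower().count(kw)
            if freq and p not in seen:
                seen.add(p)
                hits.append((p, freq, freq / total))   # paragraph, frequency, weight
        return hits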

Document Schema for the CC-based evaluation of information technology security system

  • Kim, Jeom-Goo
    • Convergence Security Journal, v.12 no.3, pp.45-52, 2012
  • The Common Criteria (CC) does not contain detailed instructions about evaluation documents, so a document schema must be developed to build a CC-based evaluation system. In this report, we developed a document schema that can be used in such a system. The schema and its DTD were derived by applying a weakest-precondition function, reduction rules for the amount of documentation, and dependency analysis of the documents in the assurance classes within the CC. The approach of this study can also be applied to developing documents and DTDs for software quality evaluation systems.
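
As a small illustration of schema-constrained evaluation documents, the sketch below defines a toy DTD for a single assurance document and validates an instance with the third-party lxml library. The element names are invented for illustration; the schema the paper derives from the CC assurance classes differs.

    from io import StringIO
    from lxml import etree   # third-party dependency: pip install lxml

    # toy DTD: an evaluation document with a target, optional dependencies,
    # and at least one piece of evidence
    dtd = etree.DTD(StringIO("""
    <!ELEMENT evaluation (target, dependency*, evidence+)>
    <!ELEMENT target (#PCDATA)>
    <!ELEMENT dependency (#PCDATA)>
    <!ELEMENT evidence (#PCDATA)>
    """))

    doc = etree.fromstring(
        "<evaluation><target>TOE v1.0</target>"
        "<evidence>functional specification</evidence></evaluation>"
    )
    print(dtd.validate(doc))   # True if the document obeys the schema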

Automatic Title Detection by Spatial Feature and Projection Profile for Document Images

  • Park, Hyo-Jin; Kim, Bo-Ram; Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing, v.11 no.3, pp.209-214, 2010
  • This paper proposes a segmentation and title detection algorithm for document images. The automated title detection method we have developed consists of two phases: segmentation and title area detection. In the first phase, the document image is binarized and segmented by a combination of morphological operations and a connected component algorithm (CCA), producing the regions from which the title will be detected in the second phase. In the second phase, candidate title areas are selected using geometric information, and the title region is extracted by removing non-title regions. After a classification step that removes non-text regions, a projection profile is computed to locate the title; since the largest font in a document is usually used for the title, horizontal projection is performed within the text areas. The proposed method handles various forms of document images using geometric features and projection profile analysis, and it is expected to have various applications such as document title recognition, multimedia data searching, and real-time image processing.
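
The projection step can be sketched with numpy on a binarized page (text pixels = 1): sum the ink per row, group consecutive text rows into bands, and take the tallest band as the likely title, on the stated assumption that the title uses the largest font. The 5% threshold is an illustrative choice, not the paper's parameter.

    import numpy as np

    def title_band(binary_page):
        profile = binary_page.sum(axis=1)            # ink per row (horizontal projection)
        text_rows = profile > 0.05 * profile.max()   # rows that contain text
        # group consecutive text rows into bands
        bands, start = [], None
        for i, is_text in enumerate(text_rows):
            if is_text and start is None:
                start = i
            elif not is_text and start is not None:
                bands.append((start, i))
                start = None
        if start is not None:
            bands.append((start, len(text_rows)))
        # the tallest band corresponds to the largest font, i.e. the likely title
        return max(bands, key=lambda b: b[1] - b[0]) if bands else None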

A DOM-Based Fuzzing Method for Analyzing Seogwang Document Processing System in North Korea

  • Park, Chanju; Kang, Dongsu
    • KIPS Transactions on Computer and Communication Systems, v.8 no.5, pp.119-126, 2019
  • Representative software developed and used in North Korea includes the Red Star operating system and internal application software. However, most existing research on North Korean software has been limited to installation methods and analysis of general execution screens. One way to identify software vulnerabilities is file fuzzing, a typical method for discovering security flaws. In this paper, we use file fuzzing to analyze security vulnerabilities in North Korea's Seogwang Document Processing System. We propose analyzing the open document text (ODT) files produced by the system, extracting nodes based on the Document Object Model (DOM) to determine the test targets, and generating mutation files through insertion and substitution; this increases the number of crashes detected in the same testing time.
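
A rough sketch of DOM-guided mutation for an ODT target: parse content.xml inside the ODT ZIP archive, pick text-bearing nodes, mutate them by insertion or substitution, and write a new archive. The mutation payloads here are illustrative assumptions; a strict consumer may additionally require the mimetype entry to be stored first and uncompressed.

    import random
    import zipfile
    import xml.etree.ElementTree as ET

    def mutate_odt(src, dst, n_mutations=5):
        with zipfile.ZipFile(src) as zin:
            root = ET.fromstring(zin.read("content.xml"))
            others = [(n, zin.read(n)) for n in zin.namelist()
                      if n != "content.xml"]
        nodes = [e for e in root.iter() if e.text]   # DOM nodes carrying text
        if not nodes:
            raise ValueError("no text nodes to mutate")
        for _ in range(n_mutations):
            node = random.choice(nodes)
            if random.random() < 0.5:
                node.text += "A" * random.randint(1, 4096)   # insertion
            else:
                node.text = "%n" * 64                        # substitution
        with zipfile.ZipFile(dst, "w") as zout:
            for name, data in others:                # copy untouched parts
                zout.writestr(name, data)
            zout.writestr("content.xml", ET.tostring(root, encoding="unicode"))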

Document Thematic words Extraction using Principal Component Analysis

  • Lee, Chang-Beom; Kim, Min-Soo; Lee, Ki-Ho; Lee, Guee-Sang; Park, Hyuk-Ro
    • Journal of KIISE: Software and Applications, v.29 no.10, pp.747-754, 2002
  • In this paper, we propose a method for extracting document thematic words using principal component analysis (PCA), one of the multivariate statistical methods. The proposed PCA model captures the flow of words in a document through eigenvalues and eigenvectors and extracts thematic words. The model is evaluated by applying it to document summarization. Experimental results using newspaper articles show that the proposed model is superior to models using either word frequency or an information retrieval thesaurus. We expect that the proposed model can be applied to information retrieval, information extraction, and document summarization.
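
A compact sketch of the PCA step, assuming sentences as observations over a fixed vocabulary: build a sentence-by-term frequency matrix, take the eigenvector belonging to the covariance matrix's largest eigenvalue, and read off the terms with the largest loadings. The whitespace tokenization and top_k are illustrative, not the paper's configuration.

    import numpy as np

    def thematic_words(sentences, vocab, top_k=5):
        # term-frequency matrix: rows = sentences, columns = vocabulary terms
        X = np.array([[s.lower().split().count(t) for t in vocab]
                      for s in sentences], dtype=float)
        cov = np.cov(X, rowvar=False)            # centers the columns internally
        eigvals, eigvecs = np.linalg.eigh(cov)
        pc1 = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
        order = np.argsort(-np.abs(pc1))         # terms with the largest loadings first
        return [vocab[i] for i in order[:top_k]]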

A Study on Electronic Document Delivery Service in Academic Libraries

  • Lee, Hwa-Yeon
    • Journal of Information Management, v.28 no.1, pp.34-61, 1997
  • This study suggests a model of electronic document delivery service to be implemented by academic libraries step by step, based on an analysis of the domestic and foreign status of such services. The model consists of three steps: the user's request, electronic document building, and electronic document transfer. Academic libraries should implement the electronic document delivery service step by step so that it satisfies users' information needs.
