• Title, Summary, Keyword: 자연어처리 (natural language processing)

Search Results: 604

An English Essay Scoring System Based on Grammaticality and Lexical Cohesion (문법성과 어휘 응집성 기반의 영어 작문 평가 시스템)

  • Kim, Dong-Sung; Kim, Sang-Chul; Chae, Hee-Rahk
    • Korean Journal of Cognitive Science, v.19 no.3, pp.223-255, 2008
  • In this paper, we introduce an automatic scoring system for English essays. The system is comprised of three main components: a spelling checker, a grammar checker, and a lexical cohesion checker. We have used resources such as WordNet, the Link Grammar parser, and Roget's thesaurus for these components. The usefulness of an automatic scoring system depends on its reliability. To measure reliability, we compared the results of automatic scoring with those of manual scoring, on the basis of the Kappa statistic and the multi-facet Rasch model. The statistical data obtained from the comparison showed that the scoring system is as reliable as professional human graders. The system deals with textual units rather than sentential units and checks not only the formal properties of a text but also its content.
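
The abstract measures agreement between automatic and manual scores with the Kappa statistic (alongside the multi-facet Rasch model). The exact formulation is not given there, but a minimal sketch of Cohen's kappa over two sets of essay scores might look like the following; the score values are hypothetical.

```python
from collections import Counter

def cohens_kappa(scores_a, scores_b):
    """Cohen's kappa for two raters assigning categorical scores to the same essays."""
    assert len(scores_a) == len(scores_b)
    n = len(scores_a)
    labels = set(scores_a) | set(scores_b)
    # Observed agreement: fraction of essays on which the two raters agree.
    p_o = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    # Expected chance agreement, from each rater's marginal score distribution.
    count_a, count_b = Counter(scores_a), Counter(scores_b)
    p_e = sum((count_a[l] / n) * (count_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: automatic scores vs. one human grader on a 1-5 scale.
auto  = [4, 3, 5, 2, 4, 3, 5, 1]
human = [4, 3, 4, 2, 4, 2, 5, 1]
print(round(cohens_kappa(auto, human), 3))
```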

Automation of Service Level Agreement based on Active SLA (Active SLA 기반 서비스 수준 협약의 자동화)

  • Kim, Sang-Rak; Kang, Man-Mo; Bae, Jae-Hak
    • The Journal of The Institute of Internet, Broadcasting and Communication, v.13 no.4, pp.229-237, 2013
  • As demand for IT services based on SOA and cloud computing increases, service level agreements (SLAs) have received more attention from the parties concerned. An SLA is usually a paper contract written in natural language. Commercially available SLA management tools implement SLAs implicitly in the application with a procedural language, which makes automation of SLA management difficult. It is also laborious to maintain contract management systems, because changes in a contract give rise to extensive modifications in the source code. We see the source of the trouble in the coexistence of documentary SLAs (paper contracts) and corresponding executable SLAs (contracts coded in a procedural language). In this paper, to resolve these SLA management problems, we propose an Active Service Level Management (Active SLM) system based on the Active Service Level Agreement (Active SLA). In the proposed system, the separate management and processing of the dual SLAs are unified into a single process through the introduction of active SLAs (ASLAs).
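
The core argument above is that a paper SLA and a separately coded executable SLA drift apart. As a rough, hypothetical illustration of a single declarative, machine-evaluable SLA representation, the sketch below defines clauses that carry both the contractual wording and a checkable predicate; the clause structure and field names are assumptions, not the paper's actual ASLA format.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SLAClause:
    """One machine-evaluable clause of an active SLA (hypothetical representation)."""
    name: str
    metric: str                      # e.g. "availability", "response_time_ms"
    check: Callable[[float], bool]   # predicate over the measured value
    penalty: str                     # consequence described in the contract text

def evaluate_sla(clauses: List[SLAClause], measurements: Dict[str, float]) -> List[str]:
    """Return the violated clauses for one measurement period."""
    violations = []
    for clause in clauses:
        value = measurements.get(clause.metric)
        if value is not None and not clause.check(value):
            violations.append(f"{clause.name}: {clause.metric}={value} -> {clause.penalty}")
    return violations

# Hypothetical contract: the declarative clauses double as the documentary and the
# executable SLA, so there is only one artifact to maintain.
contract = [
    SLAClause("Availability", "availability", lambda v: v >= 99.9, "5% fee credit"),
    SLAClause("Latency", "response_time_ms", lambda v: v <= 300, "1% fee credit"),
]
print(evaluate_sla(contract, {"availability": 99.5, "response_time_ms": 250}))
```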

Story Generation Method using User Information in Mobile Environment (모바일 환경에서 사용자 정보를 이용한 스토리 생성 방법)

  • Hong, Jeen-Pyo; Cha, Jeong-Won
    • Journal of Internet Computing and Services, v.14 no.3, pp.81-90, 2013
  • Mobile devices can collect useful user information because users carry them at all times. In this paper, we propose a method for automatic story generation and user topic extraction from user information in a mobile environment. The proposed method works as follows: (1) we collect user action information on the mobile device; (2) we extract topics from the collected information; (3) from the results of (2), we determine the episodes of one day; and (4) we generate sentences using sentence templates and compose theme-based or time-based stories. Because the proposed method is simpler than previous methods, it can run entirely on the mobile device, so there is no opportunity for user information to leak. The proposed method also produces more informative output than previous methods, because it provides sentence-based results. The extracted user topics can be used to analyze user actions and user preferences.
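
A toy sketch of the four-step pipeline described above (collect actions, extract topics, group into episodes, fill sentence templates); the action log, episode boundaries, and template are illustrative assumptions, not the paper's actual data or templates.

```python
from collections import Counter

# (1) Hypothetical log of user actions collected on the device: (hour, place, activity).
actions = [
    (8, "cafe", "coffee"), (9, "office", "meeting"),
    (12, "restaurant", "lunch"), (19, "gym", "workout"),
]

# (2) Extract the day's topics as the most frequent activities.
topics = [w for w, _ in Counter(a for _, _, a in actions).most_common(3)]

# (3) Group actions into coarse episodes of the day by time of day.
def episode(hour):
    return "morning" if hour < 12 else "afternoon" if hour < 18 else "evening"

episodes = {}
for hour, place, activity in actions:
    episodes.setdefault(episode(hour), []).append((place, activity))

# (4) Generate sentences from a simple template and compose a time-based story.
template = "In the {when}, the user had {activity} at the {place}."
story = [template.format(when=when, place=place, activity=activity)
         for when, events in episodes.items() for place, activity in events]
print("\n".join(story))
print("Topics of the day:", topics)
```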

A study on integrating and discovery of semantic based knowledge model (의미 기반의 지식모델 통합과 탐색에 관한 연구)

  • Chun, Seung-Su
    • Journal of Internet Computing and Services, v.15 no.6, pp.99-106, 2014
  • In recent years, methods for generating and analyzing knowledge models have been proposed that use natural language processing, formal language processing, and artificial intelligence algorithms to capture meaning effectively. Such semantic knowledge models have been used for decision trees and for problem solving in specific contexts, and they have supported static generation and regression analysis, trend analysis with behavioral models, and simulation for macroeconomic forecasting, especially in a variety of complex systems and in social network analysis. In this study, we propose formal methods and algorithms for integrating semantic knowledge models with topic models derived from text mining. First, we present a method that automatically converts a keyword map derived from text mining into a knowledge map and integrates it into a semantic knowledge model. We then propose an algorithm that projects a meaningful topic map from the keyword map onto a semantically equivalent model. The result is an integrated semantic knowledge model that is available for exploration.
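
The abstract is only an outline, so the following is a deliberately small, hypothetical sketch of the projection step: a keyword co-occurrence map from text mining is mapped onto concepts of a semantic model, merging edges between semantically equivalent keywords. All keywords, weights, and concept assignments are invented for illustration.

```python
from collections import defaultdict

# Hypothetical keyword map from text mining: co-occurrence weights between keywords.
keyword_map = {
    ("deep learning", "neural network"): 12,
    ("neural network", "classification"): 9,
    ("classification", "policy"): 2,
    ("policy", "regulation"): 7,
}

# Hypothetical semantic knowledge model: keyword -> concept it is equivalent to.
concept_of = {
    "deep learning": "MachineLearning", "neural network": "MachineLearning",
    "classification": "MachineLearning", "policy": "Governance",
    "regulation": "Governance",
}

# Project the keyword map onto concepts, merging edges between equivalent terms.
topic_map = defaultdict(int)
for (a, b), weight in keyword_map.items():
    ca, cb = concept_of.get(a, a), concept_of.get(b, b)
    if ca != cb:                       # drop self-loops inside a single concept
        topic_map[tuple(sorted((ca, cb)))] += weight

print(dict(topic_map))
```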

Designing a Repository Independent Model for Mining and Analyzing Heterogeneous Bug Tracking Systems (다형의 버그 추적 시스템 마이닝 및 분석을 위한 저장소 독립 모델 설계)

  • Lee, Jae-Kwon; Jung, Woo-Sung
    • Journal of the Korea Society of Computer and Information, v.19 no.9, pp.103-115, 2014
  • In this paper, we propose UniBAS (Unified Bug Analysis System), which provides a unified repository model by integrating data extracted from heterogeneous bug tracking systems. UniBAS reduces the cost and complexity of the MSR (Mining Software Repositories) research process and enables researchers to focus on their logic rather than on tedious and repetitive work such as extracting repositories, processing data, and building analysis models. Additionally, the system not only extracts the data but also automatically generates the database tables, views, and stored procedures that researchers need to perform query-based analysis easily. It can also generate various types of exported files for use with external analysis tools or for managing research data. To evaluate the usefulness of the system, a case study on detecting duplicate bug reports in Mozilla's Firefox project was performed with UniBAS. The results of experiments with various natural language processing algorithms and flexible queries over the automatically extracted data also showed the effectiveness of the proposed system.
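
The case study detects duplicate bug reports over data extracted by UniBAS; the abstract does not specify which algorithm was used, so the sketch below only illustrates the general idea with a bag-of-words cosine similarity over hypothetical report texts and an arbitrary threshold.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical bug reports as they might appear in the unified repository model.
reports = {
    101: "crash when opening a pdf attachment in the browser",
    102: "browser crashes on opening pdf attachments",
    103: "dark theme renders toolbar icons incorrectly",
}
vectors = {rid: Counter(text.split()) for rid, text in reports.items()}

# Flag report pairs whose textual similarity exceeds a threshold as duplicate candidates.
THRESHOLD = 0.4   # arbitrary value for this toy example
ids = sorted(reports)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        sim = cosine(vectors[a], vectors[b])
        if sim >= THRESHOLD:
            print(f"possible duplicates: {a} and {b} (similarity={sim:.2f})")
```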

Document Summarization Considering Entailment Relation between Sentences (문장 수반 관계를 고려한 문서 요약)

  • Kwon, Youngdae; Kim, Noo-ri; Lee, Jee-Hyong
    • Journal of KIISE, v.44 no.2, pp.179-185, 2017
  • Document summarization aims to generate a summary that is consistent and contains the highly related sentences of a document. In this study, we implemented a document summarization system that extracts highly related sentences from a whole document by considering both the similarities and the entailment relations between sentences. Accordingly, we propose a new algorithm, TextRank-NLI, which combines a recurrent neural network based natural language inference model with the graph-based ranking algorithm used in single-document extractive summarization. To evaluate the performance of the new algorithm, we conducted experiments using the same datasets as the TextRank algorithm. The results indicate that TextRank-NLI achieves a 2.3% improvement in performance compared to TextRank.
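
A minimal sketch of the combination described above: edge weights between sentences mix a similarity term with an entailment term, and sentences are ranked by TextRank-style power iteration. The paper's entailment scores come from an RNN-based NLI model; here that model is stubbed out with word overlap purely for illustration, and the blending weight `alpha` is an assumption.

```python
def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def entailment_stub(premise, hypothesis):
    # Placeholder for the neural NLI model's entailment probability.
    return overlap(premise, hypothesis)

def rank_sentences(sentences, alpha=0.5, damping=0.85, iters=50):
    n = len(sentences)
    # Combined edge weights: similarity blended with (symmetrized) entailment.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                sim = overlap(sentences[i], sentences[j])
                ent = (entailment_stub(sentences[i], sentences[j]) +
                       entailment_stub(sentences[j], sentences[i])) / 2
                w[i][j] = alpha * sim + (1 - alpha) * ent
    scores = [1.0 / n] * n
    for _ in range(iters):                     # TextRank-style power iteration
        new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                out = sum(w[j])
                if j != i and out > 0:
                    s += w[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * s)
        scores = new
    return sorted(zip(scores, sentences), reverse=True)

doc = ["The company reported record profits this quarter.",
       "Profits rose sharply compared with last year.",
       "The CEO also announced a new office building."]
for score, sent in rank_sentences(doc):
    print(f"{score:.3f}  {sent}")
```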

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang; Shin, JoonChoul; Ock, CheolYoung
    • KIISE Transactions on Computing Practices, v.23 no.4, pp.226-231, 2017
  • Word representation, an important area of natural language processing (NLP) that uses machine learning, is a method of representing a word not as raw text but as a distinguishable symbol such as a vector. Existing word embeddings employ large corpora so that words appearing in nearby contexts receive similar representations. However, corpus-based word embedding requires several corpora because of the skewed frequency of word occurrences and the growing number of words. In this paper, word embedding is performed using dictionary definitions and semantic relation information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modification of Skip-Gram (Word2Vec). Words with similar senses obtain similar vectors, and it was also possible to distinguish the vectors of antonymous words.
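
The abstract does not spell out the feature mirror model's update rule, so the following is only a toy Skip-Gram-style sketch of the data source it describes: headwords are attracted to the words of their dictionary glosses, and antonym pairs are pushed apart. The glosses, antonym list, dimensions, and training loop are all illustrative assumptions, not the FMM itself.

```python
import numpy as np

glosses = {            # hypothetical dictionary: headword -> definition words
    "cat":   ["small", "domesticated", "feline", "animal"],
    "dog":   ["domesticated", "canine", "animal"],
    "happy": ["feeling", "pleasure", "joy"],
}
antonyms = [("happy", "sad")]

vocab = sorted({w for h, d in glosses.items() for w in [h] + d} |
               {w for pair in antonyms for w in pair})
idx = {w: i for i, w in enumerate(vocab)}
rng = np.random.default_rng(0)
dim = 16
vec = rng.normal(scale=0.1, size=(len(vocab), dim))

def sgd_pair(i, j, label, lr=0.1):
    """Push vec[i] and vec[j] together (label=1) or apart (label=0)."""
    score = 1 / (1 + np.exp(-vec[i] @ vec[j]))
    grad = score - label
    vi = vec[i].copy()
    vec[i] -= lr * grad * vec[j]
    vec[j] -= lr * grad * vi

for _ in range(200):
    for head, definition in glosses.items():
        for w in definition:                 # headword attracts its gloss words
            sgd_pair(idx[head], idx[w], 1)
    for a, b in antonyms:                    # antonyms are pushed apart
        sgd_pair(idx[a], idx[b], 0)

def cos(a, b):
    return vec[idx[a]] @ vec[idx[b]] / (np.linalg.norm(vec[idx[a]]) * np.linalg.norm(vec[idx[b]]))

print("cat~dog  :", round(float(cos("cat", "dog")), 3))
print("happy~sad:", round(float(cos("happy", "sad")), 3))
```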

News Topic Extraction based on Word Similarity (단어 유사도를 이용한 뉴스 토픽 추출)

  • Jin, Dongxu; Lee, Soowon
    • Journal of KIISE, v.44 no.11, pp.1138-1148, 2017
  • Topic extraction is a technology that automatically extracts a set of topics from a set of documents, and it has been a major research topic in natural language processing. Representative topic extraction methods include Latent Dirichlet Allocation (LDA) and word clustering-based methods. However, these methods suffer from problems such as repeated topics and mixed topics. The repeated-topic problem is one in which a specific topic is extracted as several topics, while the mixed-topic problem is one in which several topics are mixed within a single extracted topic. To solve these problems, this study proposes a method that extracts topics using an LDA that is robust against the repeated-topic problem and then corrects the extracted topics by separating and merging them according to the similarity between words. Experimental results show that the proposed method performs better than the conventional LDA method.
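
A toy sketch of the merging half of the correction step described above: extracted topics whose top words are too similar (the repeated-topic case) are merged. The paper's word-similarity measure is replaced here by simple top-word overlap, and the topics and threshold are hypothetical.

```python
def topic_similarity(t1, t2):
    """Jaccard overlap of two topics' top-word lists."""
    return len(set(t1) & set(t2)) / len(set(t1) | set(t2))

topics = [                                      # hypothetical LDA output (top words)
    ["election", "vote", "party", "candidate"],
    ["vote", "election", "ballot", "party"],    # near-duplicate of the first topic
    ["match", "goal", "league", "coach"],
]

MERGE_THRESHOLD = 0.5
merged = []
for topic in topics:
    for group in merged:
        if topic_similarity(topic, group) >= MERGE_THRESHOLD:
            group.extend(w for w in topic if w not in group)   # merge repeated topic
            break
    else:
        merged.append(list(topic))

for i, t in enumerate(merged):
    print(f"topic {i}: {t}")
```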

Hybrid Word-Character Neural Network Model for the Improvement of Document Classification (문서 분류의 개선을 위한 단어-문자 혼합 신경망 모델)

  • Hong, Daeyoung; Shim, Kyuseok
    • Journal of KIISE, v.44 no.12, pp.1290-1295, 2017
  • Document classification, the task of assigning a category to each document based on its text, is one of the fundamental problems of natural language processing. Document classification is used in various fields such as topic classification and sentiment classification. Neural network models for document classification can be divided into two categories: word-level models and character-level models, which treat words and characters as basic units, respectively. In this study, we propose a neural network model that combines character-level and word-level models to improve document classification performance. The proposed model extracts the feature vector of each word by combining information obtained from a word embedding matrix with information encoded by a character-level neural network. Based on these word feature vectors, the model classifies documents with a hierarchical structure in which recurrent neural networks with attention mechanisms are used at both the word and the sentence level. Experiments on real-life datasets demonstrate the effectiveness of the proposed model.
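
A minimal PyTorch sketch of the word-character feature combination only: each word's feature vector concatenates a word-embedding lookup with a character-level RNN encoding. The hierarchical word- and sentence-level attention layers are omitted, and all layer choices, dimensions, and shapes are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class HybridWordCharEncoder(nn.Module):
    """Toy sketch: word feature = word embedding concatenated with a char-LSTM encoding."""
    def __init__(self, word_vocab, char_vocab, word_dim=64, char_dim=16, char_hidden=32):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.char_rnn = nn.LSTM(char_dim, char_hidden, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len);  char_ids: (batch, seq_len, max_chars)
        b, s, c = char_ids.shape
        w = self.word_emb(word_ids)                          # (b, s, word_dim)
        chars = self.char_emb(char_ids.view(b * s, c))       # (b*s, c, char_dim)
        _, (h, _) = self.char_rnn(chars)                     # h: (1, b*s, char_hidden)
        ch = h[-1].view(b, s, -1)                            # (b, s, char_hidden)
        return torch.cat([w, ch], dim=-1)                    # per-word hybrid features

# Hypothetical shapes: 2 documents, 5 words each, up to 8 characters per word.
enc = HybridWordCharEncoder(word_vocab=1000, char_vocab=50)
words = torch.randint(0, 1000, (2, 5))
chars = torch.randint(0, 50, (2, 5, 8))
print(enc(words, chars).shape)   # torch.Size([2, 5, 96])
```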

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee; Park, Jaehong; Kim, Hanjoo; Yoon, Sungroh
    • KIISE Transactions on Computing Practices, v.23 no.5, pp.299-303, 2017
  • By stacking hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. However, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed in a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning (SparkNet and DeepSpark) are compared and analyzed in terms of training accuracy and time demands. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but DeepSpark delivered an approximately 15% higher classification accuracy. In some cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.