Title/Summary/Keyword: abstractive summarization

Multi-layered attentional peephole convolutional LSTM for abstractive text summarization

  • Rahman, Md. Motiur; Siddiqui, Fazlul Hasan
    • ETRI Journal / v.43 no.2 / pp.288-298 / 2021
  • Abstractive text summarization is the process of summarizing a given text by paraphrasing its facts while keeping the meaning intact. Manual summary generation is laborious and time-consuming. We present a summary generation model based on a multi-layered attentional peephole convolutional long short-term memory (MAPCoL) that extracts abstractive summaries of large texts in an automated manner. We add an attention mechanism to a peephole convolutional LSTM to improve the overall quality of a summary by weighting the important parts of the source text during training. We evaluated the semantic coherence of our MAPCoL model on the popular CNN/Daily Mail dataset and found that MAPCoL outperformed other traditional LSTM-based models. We also found performance improvements for MAPCoL under different internal settings when compared with state-of-the-art abstractive text summarization models.
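
The core mechanism in this entry is attention over the encoder's hidden states. A minimal Python sketch of that idea follows, assuming toy dimensions and randomly initialized weights; it illustrates additive attention in general, not the authors' MAPCoL implementation.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention_context(encoder_states, decoder_state, W_enc, W_dec, v):
        """Additive (Bahdanau-style) attention over encoder hidden states."""
        # score_i = v . tanh(W_enc @ h_i + W_dec @ s_t)
        scores = np.array([v @ np.tanh(W_enc @ h + W_dec @ decoder_state)
                           for h in encoder_states])
        weights = softmax(scores)              # importance of each source position
        context = (weights[:, None] * encoder_states).sum(axis=0)
        return context, weights

    # Toy dimensions (assumptions, not MAPCoL's actual sizes)
    rng = np.random.default_rng(0)
    d = 8
    enc = rng.normal(size=(5, d))              # 5 source positions
    dec = rng.normal(size=d)                   # current decoder state
    W_e, W_d, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
    ctx, w = attention_context(enc, dec, W_e, W_d, v)
    print(w.round(3), ctx.shape)               # weights sum to 1; context is (8,)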

KI-HABS: Key Information Guided Hierarchical Abstractive Summarization

  • Zhang, Mengli; Zhou, Gang; Yu, Wanting; Liu, Wenfen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4275-4291 / 2021
  • With the unprecedented growth of textual information on the Internet, efficient automatic summarization systems have become an urgent need. Recently, neural encoder-decoder models with an attention mechanism have demonstrated powerful capabilities on the sentence summarization task. However, for paragraphs or longer documents, these models fail to mine the core information in the input text, which leads to information loss and repetition. In this paper, we propose an abstractive document summarization method, denoted KI-HABS, that applies guidance signals from key sentences to the encoder of a hierarchical encoder-decoder architecture. Specifically, we first train an extractor based on a hierarchical bidirectional GRU to extract the key sentences of the input document. Then, we encode the key sentences into a sentence-level key information representation. Finally, we adopt a selective encoding strategy, guided by the key information representation, to filter the source information, which establishes a connection between the key sentences and the document. We use the CNN/Daily Mail and Gigaword datasets to evaluate our model. The experimental results demonstrate that our method generates more informative and concise summaries, achieving better performance than competitive models.
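
A minimal Python sketch of the selective-encoding step described above: each sentence representation is gated element-wise by a signal computed from the key-information representation. The shapes and parameter names are illustrative assumptions, not the KI-HABS code.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def selective_encode(sent_states, key_repr, W_s, W_k, b):
        """Gate each sentence state s_i with sigmoid(W_s s_i + W_k key + b)."""
        gated = []
        for s in sent_states:
            gate = sigmoid(W_s @ s + W_k @ key_repr + b)  # element-wise gate in (0, 1)
            gated.append(gate * s)                        # filtered representation
        return np.stack(gated)

    rng = np.random.default_rng(1)
    d = 6                                  # toy hidden size (assumption)
    sents = rng.normal(size=(4, d))        # 4 encoded sentences
    key = rng.normal(size=d)               # key-information representation
    W_s, W_k, b = rng.normal(size=(d, d)), rng.normal(size=(d, d)), np.zeros(d)
    print(selective_encode(sents, key, W_s, W_k, b).shape)   # (4, 6)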

Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words

  • Lee, Tae-Seok; Lee, Hyun-Young; Kang, Seung-Shik
    • Journal of Information Processing Systems / v.18 no.3 / pp.344-358 / 2022
  • Text summarization is the task of producing a shorter version of a long document while accurately preserving its main contents. Abstractive summarization generates novel words and phrases, using a language generation method, through text transformation and prior-embedded word information. However, newly coined or out-of-vocabulary words decrease the performance of automatic summarization because they are not pre-trained during the machine learning process. In this study, we demonstrate an improvement in summarization quality through contextualized BERT embeddings with out-of-vocabulary masking. In addition, by explicitly providing precise pointing and an optional copy instruction along with the BERT embedding, we achieved higher accuracy than the baseline model. The recall-based word-generation metric ROUGE-1 score was 55.11, and the word-order-based ROUGE-L score was 39.65.
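
The out-of-vocabulary masking step can be illustrated with a short Python sketch: tokens missing from the vocabulary are replaced with [MASK], and their positions are recorded so a pointer/copy mechanism could copy them from the source. This is an assumed preprocessing shape, not the authors' exact pipeline.

    def mask_oov(tokens, vocab, mask_token="[MASK]"):
        """Replace every token not in `vocab` with the mask token and record
        its position so a pointer/copy mechanism can copy it from the source."""
        masked, oov_positions = [], []
        for i, tok in enumerate(tokens):
            if tok in vocab:
                masked.append(tok)
            else:
                masked.append(mask_token)
                oov_positions.append(i)    # candidate positions for copying
        return masked, oov_positions

    vocab = {"the", "cat", "sat", "on", "mat"}          # toy vocabulary
    tokens = ["the", "quokka", "sat", "on", "the", "mat"]
    print(mask_oov(tokens, vocab))
    # (['the', '[MASK]', 'sat', 'on', 'the', 'mat'], [1])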

Text Summarization on Large-scale Vietnamese Datasets

  • Ti-Hon, Nguyen; Thanh-Nghi, Do
    • Journal of Information and Communication Convergence Engineering / v.20 no.4 / pp.309-316 / 2022
  • This investigation is aimed at automatic text summarization on large-scale Vietnamese datasets. Vietnamese articles were collected from newspaper websites, and plain text was extracted to build a dataset of 1,101,101 documents. Next, a new single-document extractive text summarization model was proposed and evaluated on this dataset. In this summarization model, the k-means algorithm clusters the sentences of the input document using different text representations, such as BoW (bag-of-words), TF-IDF (term frequency - inverse document frequency), Word2Vec (word-to-vector), GloVe, and FastText. The summarization algorithm then uses the trained k-means model to rank the candidate sentences and builds the summary from the highest-ranked sentences. Empirically, the model achieved F1 scores of 51.91% ROUGE-1, 18.77% ROUGE-2, and 29.72% ROUGE-L, compared with 52.33% ROUGE-1, 16.17% ROUGE-2, and 33.09% ROUGE-L for a competitive abstractive model. The advantage of the proposed model is that it performs well with a time complexity of O(n, k, p) = O(n(k + 2/p)) + O(n log₂ n) + O(np) + O(nk²) + O(k).
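
A minimal Python sketch of the extractive procedure described above, assuming TF-IDF sentence vectors and scikit-learn's KMeans; the sentence closest to each cluster centroid is taken as highest-ranked. The parameter choices are illustrative, not the paper's configuration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def kmeans_summary(sentences, k=2):
        X = TfidfVectorizer().fit_transform(sentences).toarray()
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        picked = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
            picked.append(members[dists.argmin()])     # sentence nearest the centroid
        return [sentences[i] for i in sorted(picked)]  # keep original order

    doc = ["The festival opened on Monday.",
           "Thousands of visitors attended the opening.",
           "Organizers expect record ticket sales.",
           "Ticket revenue already exceeds last year."]
    print(kmeans_summary(doc, k=2))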

A Method Name Suggestion Model based on Abstractive Text Summarization (추상적 텍스트 요약 기반의 메소드 이름 제안 모델)

  • Ju, Hansae; Lee, Scott Uk-Jin
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.137-138 / 2022
  • Choosing good names for source code identifiers is treated as an important problem in software engineering. Meaningful and concise names for program entities play an important role in code comprehension and are highly effective in reducing software maintenance costs. Among code identifiers, method names are known to be, on average, the most complex. In this paper, we propose a Transformer-based encoder-decoder model that generates appropriate method names consistent with the method body by recasting the problem as abstractive text summarization, a natural language processing task. On a Java dataset crawled from open-source GitHub projects, the proposed model improved on the previous state-of-the-art method name generation model by about 50% or more. We expect this to reduce the cost of choosing appropriate method names and to help solve various source-code-related tasks by leveraging the capabilities of language models.
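
The task framing, treating a method body as the source text and the method name as its abstractive "summary", can be sketched in Python as follows; the camelCase subtokenization is an illustrative assumption, not the authors' exact preprocessing.

    import re

    def to_summarization_pair(java_method, method_name):
        # Split identifiers on camelCase and punctuation so the model sees
        # natural-language-like subtokens (an assumed preprocessing step).
        body_tokens = re.findall(r"[A-Z]?[a-z]+|\d+", java_method)
        name_tokens = re.findall(r"[A-Z]?[a-z]+|\d+", method_name)
        return (" ".join(t.lower() for t in body_tokens),
                " ".join(t.lower() for t in name_tokens))

    src, tgt = to_summarization_pair(
        "int size = items.length; return size == 0;", "isEmpty")
    print(src)   # source sequence for the encoder
    print(tgt)   # "is empty" -- target sequence for the decoder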

Transformer-based Text Summarization Using Pre-trained Language Model (사전학습 언어 모델을 활용한 트랜스포머 기반 텍스트 요약)

  • Song, Eui-Seok; Kim, Museong; Lee, Yu-Rin; Ahn, Hyunchul; Kim, Namgyu
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.395-398 / 2021
  • Recently, as vast amounts of textual information circulate on the Internet, grasping the core content of that information has become more difficult, and research on automatic text summarization is accordingly very active. Among the various techniques for automatic text summarization, Transformer-based models in particular show excellent performance on the abstractive summarization task and hold the state of the art (SOTA) in the field. However, Transformer models consist of a very large number of parameters, so when a sufficient amount of data is not available, these parameters cannot be trained adequately, making it difficult to generate high-quality summaries. To overcome this limitation, this study proposes a document summarization methodology that can generate high-quality summaries even when only a small amount of data is available. Specifically, the proposed methodology performs document summarization by applying the embedding matrix of KoBERT, a Korean pre-trained language model, to a Transformer model, and its effectiveness was evaluated with the ROUGE metric through experiments on the Dacon Korean document summarization dataset.
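
The key idea, reusing a pre-trained embedding matrix inside a Transformer, can be sketched in Python with PyTorch; the random tensor standing in for KoBERT's weights and the layer sizes are assumptions for illustration.

    import torch
    import torch.nn as nn

    vocab_size, d_model = 8002, 768                    # KoBERT-like sizes (assumed)
    pretrained_emb = torch.randn(vocab_size, d_model)  # stand-in for KoBERT weights

    model_emb = nn.Embedding(vocab_size, d_model)
    with torch.no_grad():
        model_emb.weight.copy_(pretrained_emb)         # transplant the embedding matrix

    transformer = nn.Transformer(d_model=d_model, nhead=8,
                                 num_encoder_layers=2, num_decoder_layers=2,
                                 batch_first=True)
    src = model_emb(torch.randint(0, vocab_size, (1, 16)))  # embedded source tokens
    tgt = model_emb(torch.randint(0, vocab_size, (1, 8)))   # embedded target tokens
    print(transformer(src, tgt).shape)                 # torch.Size([1, 8, 768])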

Document Summarization Model Based on General Context in RNN

  • Kim, Heechan; Lee, Soowon
    • Journal of Information Processing Systems / v.15 no.6 / pp.1378-1391 / 2019
  • In recent years, automatic document summarization has been widely studied in the field of natural language processing, thanks to remarkable developments in deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of the document as a weighted sum of the hidden states of the input words. Because the weights change at each decoding step, they reflect only the local context of the document, making it difficult to generate a summary that reflects the document's overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of the decoding step. Experimental results on the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
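
A minimal Python sketch of the general-context notion: unlike step-wise attention, this vector is computed once, independently of the decoding step, and reused at every step. Computing it as a learned pooling over all encoder states is an assumption for illustration, not the paper's exact formulation.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def general_context(encoder_states, w):
        """Step-independent pooling over all encoder hidden states."""
        scores = encoder_states @ w        # one score per input word
        weights = softmax(scores)
        return weights @ encoder_states    # same vector reused at every decoding step

    rng = np.random.default_rng(2)
    H = rng.normal(size=(10, 16))          # 10 input words, toy hidden size 16
    g = general_context(H, rng.normal(size=16))
    print(g.shape)                         # (16,)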

Summarization of Korean Dialogues through Dialogue Restructuring (대화문 재구조화를 통한 한국어 대화문 요약)

  • Eun Hee Kim; Myung Jin Lim; Ju Hyun Shin
    • Smart Media Journal / v.12 no.11 / pp.77-85 / 2023
  • After COVID-19, communication through online platforms increased, leading to the accumulation of massive amounts of conversational text data. With the growing importance of summarizing this data to extract meaningful information, deep learning-based abstractive summarization has been actively researched. However, compared with structured texts such as news articles, conversational data often contains missing or transformed information, and its unique characteristics must be considered from multiple perspectives. In particular, vocabulary omissions and unrelated expressions in a conversation can hinder effective summarization. Therefore, in this study, we restructured dialogues by considering the characteristics of Korean conversational data, fine-tuned a pre-trained KoBART-based text summarization model, and improved dialogue summarization performance through a refining operation that removes redundant elements from the summary. We combined two restructuring methods: reordering sentences based on the order of utterances and restructuring the conversation around an extracted central speaker. As a result, the ROUGE-1 score improved by about 4 points. This study demonstrates the significance of our dialogue restructuring approach, which considers the characteristics of dialogue, in enhancing Korean conversation summarization performance.
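
A minimal Python sketch of the restructuring described above: utterances are kept in speaking order and a central speaker is extracted (here, simply the most frequent speaker, an assumed heuristic) so the dialogue can be reorganized around them before summarization.

    from collections import Counter

    def restructure(dialogue):
        """dialogue: list of (turn_index, speaker, utterance) tuples."""
        ordered = sorted(dialogue, key=lambda t: t[0])   # order of utterances
        central = Counter(s for _, s, _ in ordered).most_common(1)[0][0]
        lines = [f"{s}: {u}" for _, s, u in ordered]
        return central, "\n".join(lines)

    dialogue = [(2, "B", "I can make it after 3 pm."),
                (1, "A", "Can we meet tomorrow?"),
                (3, "A", "Then let's meet at 3:30.")]
    speaker, text = restructure(dialogue)
    print(speaker)   # A -- the extracted central speaker
    print(text)      # utterances rebuilt in speaking order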

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee; Sangwoo Kang
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.197-226 / 2023
  • Abstractive (generative) text summarization is a natural language processing task that generates a short summary while preserving the content of a long text. ROUGE, a lexical-overlap-based metric, is widely used to evaluate text summarization models on abstractive summarization benchmarks. Although models score highly on ROUGE, studies report that 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without using reference summaries. AggreFact is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all the test candidates, two cases, generated summaries and summaries in which errors occurred throughout, showed the highest correlation. We observed that the proposed evaluation score correlated highly with models fine-tuned from BART and PEGASUS, which are pre-trained large-scale Transformer models.
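
The evaluation protocol, scoring summaries without references and checking agreement with human error annotations, can be sketched in Python; the token-overlap scorer below is a placeholder assumption, not the paper's metric.

    import numpy as np

    def reference_free_score(source, summary):
        # Placeholder scorer (assumption): fraction of summary tokens
        # that also appear in the source document.
        src = set(source.lower().split())
        toks = summary.lower().split()
        return sum(t in src for t in toks) / max(len(toks), 1)

    pairs = [  # (source, generated summary, human consistency label)
        ("the festival opened monday with record crowds",
         "festival opened with record crowds", 1.0),
        ("the court delayed the ruling until next spring",
         "the court issued the ruling immediately", 0.0),
        ("rain ended the drought across the northern region",
         "rain ended the drought in the north", 1.0),
    ]
    scores = [reference_free_score(s, y) for s, y, _ in pairs]
    labels = [c for _, _, c in pairs]
    print(np.corrcoef(scores, labels)[0, 1])   # agreement with human judgments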

Corpus Construction of National Assembly Minutes Summarization for Korean Abstractive Meeting Minutes Summarization (한국어 회의록 생성 요약을 위한 국회 회의록 요약 말뭉치 구축 연구)

  • Younggyun Hahm; Yejee Kang; Seoyoon Park; Yongbin Jeong; Hyunbin Seo; Yiseul Lee; Hyejin Seo; Saetbyol Seo; Hansam Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.192-197 / 2022
  • Although document-oriented summarization is still the mainstream of summarization research, interest in meeting summarization has recently grown considerably. As part of the National Institute of Korean Language's Korean big data construction project, this study investigated abstractive summarization of National Assembly minutes, which had not previously been studied in Korea, and constructed an abstractive summarization dataset for National Assembly minutes. We also conducted quantitative and qualitative evaluations of the constructed dataset with abstractive summarization models, thereby assessing the National Assembly minutes summarization dataset and exploring future research directions for abstractive summarization and meeting-minutes summarization.
