• Title/Summary/Keyword: Korean Word

Search results: 3,989 items

KR-WordRank : WordRank를 개선한 비지도학습 기반 한국어 단어 추출 방법 (KR-WordRank : An Unsupervised Korean Word Extraction Method Based on WordRank)

  • 김현중;조성준;강필성
    • 대한산업공학회지 / Vol.40 No.1 / pp.18-33 / 2014
  • A word is the smallest unit for text analysis, and the premise behind most text-mining algorithms is that the words in given documents can be perfectly recognized. However, newly coined words, spelling and spacing errors, and domain adaptation problems make it difficult to recognize words correctly. To make matters worse, obtaining a sufficient amount of training data that can be used in any situation is not only unrealistic but also inefficient. Therefore, an automatic word extraction method that does not require a training process is desperately needed. WordRank, the most widely used unsupervised word extraction algorithm for Chinese and Japanese, shows poor word extraction performance in Korean due to the different language structure. In this paper, we first discuss why WordRank performs poorly in Korean, and then propose a WordRank algorithm customized for Korean, named KR-WordRank, by considering its linguistic characteristics and by improving robustness to noise in text documents. Experimental results show that the performance of KR-WordRank is significantly better than that of the original WordRank in Korean. In addition, we find that the proposed algorithm can not only extract proper words but also identify candidate keywords for effective document summarization.
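
For reference, an open-source Python package named krwordrank implements this method. The snippet below is a usage sketch assuming the interface shown in that package's documented example; parameter names and return values may differ across versions, so treat it as an assumption rather than a definitive API.

```python
# Usage sketch of the `krwordrank` package (pip install krwordrank).
# The call signature follows the package's documented example (assumed).
from krwordrank.word import KRWordRank

texts = [
    "예시 문장 입니다",
    "두번째 예시 문장 입니다",
]  # stands in for the normalized sentences of a real corpus

extractor = KRWordRank(min_count=5, max_length=10)  # substring frequency / length limits
beta, max_iter = 0.85, 10                           # PageRank-style damping factor and iterations
keywords, rank, graph = extractor.extract(texts, beta, max_iter)

# `keywords` maps extracted words to rank scores; the top-ranked entries
# double as candidate keywords for document summarization.
for word, score in sorted(keywords.items(), key=lambda x: -x[1])[:10]:
    print(word, score)
```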

청각 단어 재인에서 나타난 한국어 단어길이 효과 (The Korean Word Length Effect on Auditory Word Recognition)

  • 최원일;남기춘
    • 대한음성학회:학술대회논문집 / 대한음성학회 2002년도 11월 학술대회지 / pp.137-140 / 2002
  • This study was conducted to examine Korean word length effects on auditory word recognition. Linguistically, word length can be defined by several sublexical units such as letters, phonemes, and syllables. To investigate which units are used in auditory word recognition, a lexical decision task was used. Experiments 1 and 2 showed that syllable length affected response time and that syllable length interacted with word frequency. These results indicate that syllable length is an important variable in auditory word recognition.


Word Sense Disambiguation Using Embedded Word Space

  • Kang, Myung Yun;Kim, Bogyum;Lee, Jae Sung
    • Journal of Computing Science and Engineering / Vol.11 No.1 / pp.32-38 / 2017
  • Determining the correct word sense among ambiguous senses is essential for semantic analysis. One model for word sense disambiguation is the word space model, which is structurally very simple and yet effective. However, when the context word vectors in the word space model are merged into sense vectors in a sense inventory, they typically become very large and still suffer from lexical scarcity. In this paper, we propose a word sense disambiguation method using word embeddings, whose additive compositionality makes the sense inventory vectors compact and efficient. Results of experiments with a Korean sense-tagged corpus show that our method is very effective.
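
A toy sketch of the core idea: the embeddings of each sense's indicative words are merged by addition into one compact sense vector, and the sense whose vector is closest to the context vector wins. The embeddings and the sense inventory below are invented for illustration; this is not the paper's exact model.

```python
import numpy as np

# Toy pre-trained embeddings (in practice these come from word2vec or similar).
emb = {
    "money":   np.array([0.9, 0.1, 0.0]),
    "deposit": np.array([0.8, 0.2, 0.1]),
    "loan":    np.array([0.7, 0.1, 0.2]),
    "river":   np.array([0.0, 0.9, 0.1]),
    "water":   np.array([0.1, 0.8, 0.2]),
    "shore":   np.array([0.1, 0.7, 0.3]),
}

# Sense inventory for the ambiguous word "bank": each sense is described by a few
# indicative words, merged into one compact sense vector by addition.
sense_inventory = {
    "bank/finance":   ["money", "deposit", "loan"],
    "bank/riverside": ["river", "water", "shore"],
}
sense_vec = {s: sum(emb[w] for w in ws) for s, ws in sense_inventory.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_words):
    """Pick the sense whose additive sense vector is closest to the context vector."""
    ctx = sum(emb[w] for w in context_words if w in emb)
    return max(sense_vec, key=lambda s: cosine(sense_vec[s], ctx))

print(disambiguate(["deposit", "money"]))   # -> bank/finance
print(disambiguate(["river", "shore"]))     # -> bank/riverside
```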

Ranking Translation Word Selection Using a Bilingual Dictionary and WordNet

  • Kim, Kweon-Yang;Park, Se-Young
    • 한국지능시스템학회논문지 / Vol.16 No.1 / pp.124-129 / 2006
  • This paper presents a method of ranking translation word selection for Korean verbs based on lexical knowledge contained in a bilingual Korean-English dictionary and WordNet, both easily obtainable knowledge resources. We focus on deciding which translation of the target word is the most appropriate, using a measure of semantic relatedness over the 45 extended relations between the possible translations of the target word and indicative clue words that serve as predicate-arguments in the source-language text. To reduce the weight given to possibly unwanted senses, we rank the possible word senses for each translation word by measuring the semantic similarity between the translation word and its near synonyms. We report an average accuracy of 51% with ten ambiguous Korean verbs. The evaluation suggests that our approach outperforms the default baseline and previous work.
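
A minimal sketch of ranking candidate translations by WordNet relatedness, assuming NLTK's WordNet interface and plain path similarity in place of the paper's 45 extended relations; the candidate and clue words are illustrative only.

```python
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

def relatedness(word_a, word_b):
    """Best path similarity between any synset pair of the two words."""
    best = 0.0
    for s1 in wn.synsets(word_a):
        for s2 in wn.synsets(word_b):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

def rank_translations(candidates, clue_words):
    """Rank candidate translations by total relatedness to the clue words."""
    scored = {c: sum(relatedness(c, clue) for clue in clue_words) for c in candidates}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# e.g. a Korean verb whose argument translates to "rice"/"meal"
print(rank_translations(["eat", "earn", "gain"], ["rice", "meal"]))
```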

Word Embedding 자질을 이용한 한국어 개체명 인식 및 분류 (Korean Named Entity Recognition and Classification using Word Embedding Features)

  • 최윤수;차정원
    • 정보과학회 논문지 / Vol.43 No.6 / pp.678-685 / 2016
  • Although there has been a variety of research on Korean named entity recognition, it suffers from a lack of features compared with English named entity recognition. In this paper, we propose using word embedding features for named entity recognition to address this feature scarcity problem in Korean. We generate word vectors with the CBOW (Continuous Bag-of-Words) model and derive cluster information from the word vectors with the K-means algorithm. The word vectors and the cluster information are then used as word embedding features in CRFs (Conditional Random Fields). Experimental results show improvements of 1.17%, 0.61%, and 1.19% over the baseline system in the TV, Sports, and IT domains, respectively. The proposed method also outperforms other named entity recognition and classification systems, demonstrating its effectiveness.
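
A minimal sketch of the pipeline the abstract describes: CBOW word vectors, K-means cluster IDs over those vectors, and token features in the dict form a CRF toolkit (e.g. sklearn-crfsuite) consumes. It assumes gensim (v4 API) and scikit-learn; the corpus, cluster count, and feature names are illustrative, not the paper's setup.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

sentences = [["삼성전자", "스마트폰", "출시"], ["LG", "TV", "신제품", "출시"]]

# 1) CBOW word vectors (sg=0 selects CBOW in gensim).
w2v = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0, epochs=50)

# 2) K-means cluster IDs over the learned vectors.
vocab = w2v.wv.index_to_key
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(w2v.wv[vocab])
cluster_of = dict(zip(vocab, clusters))

# 3) Token features combining surface form, cluster ID, and a few (rounded)
#    embedding dimensions, as would be fed to a CRF tagger.
def token_features(word):
    feats = {"word": word, "cluster": str(cluster_of.get(word, -1))}
    if word in w2v.wv:
        for i, v in enumerate(w2v.wv[word][:5]):
            feats[f"emb_{i}"] = round(float(v), 1)
    return feats

print(token_features("출시"))
```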

한의학 고문헌 데이터 분석을 위한 단어 임베딩 기법 비교: 자연어처리 방법을 적용하여 (Comparison between Word Embedding Techniques in Traditional Korean Medicine for Data Analysis: Implementation of a Natural Language Processing Method)

  • 오준호
    • 대한한의학원전학회지 / Vol.32 No.1 / pp.61-74 / 2019
  • Objectives : The purpose of this study is to help select an appropriate word embedding method when analyzing traditional East Asian medicine texts as data. Methods : Based on prescription data that reflect traditional practice in East Asian medicine, we examined four count-based and two prediction-based word embedding methods. To compare these word embedding methods intuitively, we proposed a "prescription generating game" and compared its results with those obtained by applying the six methods. Results : When neighboring vectors are extracted, the count-based word embedding methods retrieve the main herbs that are frequently used together. The prediction-based word embedding methods, on the other hand, retrieve synonyms of the herbs. Conclusions : Count-based word embedding methods seem to be more effective than prediction-based methods in analyzing how the herbs are used. Among the count-based methods, the TF vector tends to exaggerate the frequency effect, so the TF-IDF vector or the co-word vector may be a more reasonable choice. The t-score vector may also be recommended when searching for unusual information that cannot be found through frequency alone. Prediction-based embeddings, in turn, seem to be effective for deriving words with similar meanings in context.
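
A small sketch contrasting a count-based representation (co-word vectors) with a prediction-based one (word2vec) on prescription-like data, where each "document" is the list of herbs in one prescription. It assumes numpy, scikit-learn, and gensim; the herb lists are invented for illustration and are not from the paper.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.metrics.pairwise import cosine_similarity

prescriptions = [
    ["감초", "인삼", "백출", "생강"],
    ["감초", "인삼", "대추", "생강"],
    ["당귀", "천궁", "백작약", "감초"],
    ["당귀", "천궁", "숙지황", "백작약"],
]
vocab = sorted({h for p in prescriptions for h in p})
idx = {h: i for i, h in enumerate(vocab)}

# Count-based: co-word vectors (how often each herb appears with each other herb).
co = np.zeros((len(vocab), len(vocab)))
for p in prescriptions:
    for a in p:
        for b in p:
            if a != b:
                co[idx[a], idx[b]] += 1

def neighbors_count(herb, k=3):
    sims = cosine_similarity(co[idx[herb]].reshape(1, -1), co)[0]
    return [vocab[i] for i in np.argsort(-sims) if vocab[i] != herb][:k]

# Prediction-based: word2vec trained on the same prescriptions.
w2v = Word2Vec(prescriptions, vector_size=20, window=5, min_count=1, sg=1, epochs=200)

print("co-word neighbors of 감초:", neighbors_count("감초"))
print("word2vec neighbors of 감초:", [w for w, _ in w2v.wv.most_similar("감초", topn=3)])
```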

영어 단어경계에 따른 발화 양상 연구: 한국인 화자와 영어 원어민 화자 비교 분석 (A Study on the Production of the English Word Boundaries: A Comparative Analysis of Korean Speakers and English Speakers)

  • 김지향;김기호
    • 말소리와 음성과학 / Vol.6 No.1 / pp.47-58 / 2014
  • The purpose of this paper is to find out how Korean speakers' production of English word boundaries differs from English speakers' and to account for what brings about such differences. Treating two consecutive words as a single cluster, English speakers generally pronounce them naturally by linking the word-final consonant of the first word with the word-initial vowel of the second word, whereas this is not the case for most Korean speakers, who read the two consecutive words individually. Consequently, phonological processes such as resyllabification and aspiration are found in the English speakers' word-boundary production, while glottalization and unreleased stops are the more common phonological processes in the Korean speakers' production. This may be accounted for by Korean speakers' L1 interference, depending on English proficiency.

한국어 단어재인에 있어서 빈도와 길이 효과 탐색 (The exploration of the effects of word frequency and word length on Korean word recognition)

  • 이창환;이윤형;김태훈
    • 한국산학기술학회논문지 / Vol.17 No.1 / pp.54-61 / 2016
  • Because words are the basic semantic units of language, research on word recognition is important in language studies, and much work has investigated which variables contribute to word processing. This study explored the effects of word frequency and word length, two major variables in Korean word recognition. Regarding word frequency, we examined whether words of Sino-Korean origin, a characteristic feature of Korean, show the same frequency effect reported in previous research. To this end, we compared pure native Korean words with Sino-Korean words and found no frequency effect for the Sino-Korean words. Regarding word length, we varied the number of syllables to examine how monosyllabic words behave, and found that monosyllabic words were processed more slowly than disyllabic words. The absence of a frequency effect for a particular word type and the slower processing of monosyllabic words can be seen as reflecting characteristics of Korean, and more detailed exploration is needed in future research.

한국어 단음절 낱말 인식에 미치는 어휘적 특성의 영향 (Analysis of Lexical Effect on Spoken Word Recognition Test)

  • 윤미선;이봉원
    • 대한음성학회지:말소리 / No.54 / pp.15-26 / 2005
  • The aim of this paper was to analyze lexical effects on the spoken word recognition of Korean monosyllabic words. The lexical factors chosen were word frequency, density, and lexical familiarity. The analysis showed that frequency was a significant predictor of the spoken word recognition score of monosyllabic words, while the other factors were not significant. This result suggests that word frequency should be considered in speech perception tests.


단어의 위치정보를 이용한 Word Embedding (Word Embedding using word position information)

  • 황현선;이창기;장현기;강동호
    • 한국어정보학회:학술대회논문집 / 한국어정보학회 2017년도 제29회 한글및한국어정보처리학술대회 / pp.60-63 / 2017
  • Word embeddings, used to apply deep learning to natural language processing, represent words in a vector space; besides reducing dimensionality, they have the advantage that words with similar meanings receive similar vectors. Because word embeddings must be trained on large corpora to perform well, the widely used word2vec model simplifies its architecture for large-scale training and focuses mainly on word co-occurrence rates, with the drawback that it does not use word position information. In this paper, we modify the existing word embedding training model so that it can learn from word position information. Experimental results show that training word embeddings with word position information greatly improves syntactic performance on word-analogy tasks, with an especially large effect for Korean, where word order can vary.
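
A toy, self-contained numpy sketch in the spirit of the approach described: a CBOW-style model in which each relative offset has its own input matrix, so the same context word contributes differently at different positions. This illustrates the general idea of position-aware embeddings, not the paper's exact modification; corpus, sizes, and hyperparameters are invented.

```python
import numpy as np

corpus = [["나는", "학교", "에", "간다"], ["나는", "밥", "을", "먹는다"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D, WIN, lr = len(vocab), 16, 2, 0.1

rng = np.random.default_rng(0)
positions = [p for p in range(-WIN, WIN + 1) if p != 0]
W_in = {p: rng.normal(scale=0.1, size=(V, D)) for p in positions}  # per-offset input vectors
W_out = rng.normal(scale=0.1, size=(D, V))                         # shared output matrix

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for epoch in range(200):
    for sent in corpus:
        for t, target in enumerate(sent):
            ctx = [(p, sent[t + p]) for p in positions if 0 <= t + p < len(sent)]
            if not ctx:
                continue
            # hidden vector = mean of position-specific projections of the context words
            h = np.mean([W_in[p][idx[w]] for p, w in ctx], axis=0)
            y = softmax(h @ W_out)
            y[idx[target]] -= 1.0          # d(cross-entropy)/d(logits)
            grad_h = W_out @ y             # backprop into h using the pre-update W_out
            W_out -= lr * np.outer(h, y)
            for p, w in ctx:
                W_in[p][idx[w]] -= lr * grad_h / len(ctx)

# After training, the row for "학교" in W_in[-1] differs from its row in W_in[+1]:
# the position of a context word now matters, unlike in vanilla CBOW.
```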
