• Title/Summary/Keyword: Korean Word


Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex) (한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun; Cho, Sanghyun; Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society / v.25 no.3 / pp.493-501 / 2022
  • This paper studies context-sensitive spelling error correction and uses the Korean WordNet (KorLex)[1], which defines the relationships between words as a graph, to improve the performance of a correction technique[2] based on the vector information of the embedded words. The Korean WordNet was constructed for Korean on the basis of WordNet[3], developed at Princeton University in the United States. To learn a semantic network given in graph form, or to use its learned vector information, the graph must be transformed into vectors through embedding learning. For this transformation, a limited number of nodes of the network-shaped graph are listed in a linear, sentence-like format before being used as training input. One learning technique that uses this strategy is DeepWalk[4], which we use to learn the graph of words in the Korean WordNet. The graph embedding information is concatenated with the word vector information of the language model trained for correction, and the final correction word is determined by the cosine distance between the vectors. To test whether the graph embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and evaluated from the perspective of word sense disambiguation (WSD). In the experiments, the average correction performance over all confused word pairs improved by 2.24% compared to the baseline correction performance.
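
A minimal sketch of the pipeline described above, assuming the gensim library and a toy stand-in for the KorLex graph; the node names, walk parameters, and the placeholder language-model vector are illustrative rather than the authors' implementation:

```python
# Sketch: DeepWalk-style embedding of a toy semantic network, then scoring
# correction candidates by cosine similarity over concatenated vectors.
# The graph, vectors, and parameters are illustrative placeholders.
import random
import numpy as np
from gensim.models import Word2Vec

# Toy graph standing in for a slice of the KorLex semantic network.
graph = {
    "fruit": ["apple", "pear"],
    "apple": ["fruit"],
    "pear": ["fruit"],
}

def random_walks(graph, num_walks=10, walk_length=8):
    """Generate truncated random walks; each walk is treated like a sentence."""
    walks = []
    for _ in range(num_walks):
        for node in graph:
            walk = [node]
            while len(walk) < walk_length:
                walk.append(random.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

# Skip-gram over the walks approximates DeepWalk.
graph_model = Word2Vec(random_walks(graph), vector_size=16, window=3,
                       sg=1, min_count=1, epochs=50)

def correction_score(candidate, context_vec):
    """Cosine similarity between a context vector and the candidate's
    concatenated [language-model vector ; graph-embedding vector]."""
    lm_vec = np.random.rand(16)            # placeholder for a trained LM vector
    cand_vec = np.concatenate([lm_vec, graph_model.wv[candidate]])
    return np.dot(cand_vec, context_vec) / (
        np.linalg.norm(cand_vec) * np.linalg.norm(context_vec))

context = np.random.rand(32)               # placeholder context representation
best = max(["apple", "pear"], key=lambda w: correction_score(w, context))
print("chosen correction:", best)
```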

A Word Embedding used Word Sense and Feature Mirror Model (단어 의미와 자질 거울 모델을 이용한 단어 임베딩)

  • Lee, JuSang; Shin, JoonChoul; Ock, CheolYoung
    • KIISE Transactions on Computing Practices / v.23 no.4 / pp.226-231 / 2017
  • Word representation, an important area of natural language processing (NLP) that uses machine learning, is a method of representing a word not as raw text but as a distinguishable symbol. Existing word embedding methods rely on large corpora so that words occurring near each other in text are positioned close together in vector space. However, corpus-based word embedding requires ever more corpora because of word-occurrence frequencies and the growing number of words. In this paper, word embedding is performed using dictionary definitions and semantic relationship information (hypernyms and antonyms). Words are trained using the feature mirror model (FMM), a modification of skip-gram (Word2Vec). Words with similar senses obtain similar vectors, and, furthermore, the vectors of antonymous words can be distinguished.
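
A loosely related sketch of the general idea of training word vectors on dictionary definitions and hypernyms rather than on a running-text corpus; it is not the authors' feature mirror model, the dictionary entries and parameters are invented for illustration, and the FMM's antonym-separating behavior is not reproduced:

```python
# Sketch: building embedding training data from dictionary definitions and
# hypernyms instead of a running-text corpus. Entries are toy placeholders.
from gensim.models import Word2Vec

dictionary = {
    # headword: (definition words, hypernym)
    "apple": (["round", "fruit", "tree"], "fruit"),
    "pear":  (["sweet", "fruit", "tree"], "fruit"),
    "big":   (["large", "size"], "size"),
    "small": (["little", "size"], "size"),
}

# Each pseudo-sentence pairs a headword with its definition words and hypernym,
# so words with similar definitions end up with similar vectors.
pseudo_sentences = [[head] + defn + [hyper]
                    for head, (defn, hyper) in dictionary.items()]

model = Word2Vec(pseudo_sentences, vector_size=16, window=5,
                 sg=1, min_count=1, epochs=200)
print(model.wv.similarity("apple", "pear"))   # definition-similar words
print(model.wv.similarity("apple", "small"))  # unrelated words
```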

The Analysis of a Causal Relationship of Traditional Korean Restaurant's Well-Being Attribute Selection on Customers' Re-Visitation and Word-of-Mouth

  • Baek, Hang-Sun; Shin, Chung-Sub; Lee, Sang-Youn
    • East Asian Journal of Business Economics (EAJBE) / v.4 no.2 / pp.48-60 / 2016
  • This study analyzes what effects a restaurant's well-being attribute selection has on word-of-mouth intention. Based on the results, this study aims to provide basic data for establishing Korean restaurants' service and marketing strategies. The researchers surveyed 350 customers who visited a Korean restaurant located in Kangbook, Seoul. We encoded the gathered data and analyzed them using the SPSS 17.0 statistical package. The results are as follows. First, under hypothesis 1, that a Korean restaurant's well-being attribute selection will have a positive influence on re-visitation intention, sufficiency, healthiness, and steadiness were shown to have a similar influence on re-visitation intention. Second, under hypothesis 2, that a Korean restaurant's well-being attribute selection will have a positive influence on word-of-mouth intention, sufficiency, healthiness, environment, and steadiness were shown to have a similar influence on word-of-mouth intention. Third, under hypothesis 3, that a Korean restaurant's re-visitation intention will have a positive influence on word-of-mouth intention, eliciting customers' re-visitation intention was also found to influence word-of-mouth intention. It will be necessary to examine how to elicit customers' re-visitation and word-of-mouth intentions by considering the factors that customers of traditional Korean restaurants value.

The influence of task demands on the preparation of spoken word production: Evidence from Korean

  • Choi, Tae-Hwan; Oh, Sujin; Han, Jeong-Im
    • Phonetics and Speech Sciences / v.9 no.4 / pp.1-7 / 2017
  • Speech production studies have shown that the preparation unit of spoken word production is language-particular: onset phonemes for English and Dutch, syllables for Mandarin Chinese, and morae for Japanese. However, results have been inconsistent on whether the onset phoneme is a planning unit of spoken word production in Korean. In this study, two sets of experiments investigated possible influences of task demands on phonological preparation in native Korean adults, namely implicit priming and word naming with the form preparation paradigm. Only the word naming task, but not the implicit priming task, showed a significant onset priming effect, even though there were significant syllable priming effects in both tasks. Following the attentional theory (O'Séaghdha & Frazer, 2014), these results suggest that task demands might play a role in the absence or presence of onset priming effects in Korean. Native Korean speakers could maintain their attention to the shared onset phonemes in word naming, which is not very demanding, while they had difficulty allocating their attention to such units in the more cognitively demanding implicit priming task, even though both tasks involve accessing phonological codes. These findings demonstrate that there are cross-linguistic differences in the first selectable unit in the preparation of spoken word production, but that within a single language the preparation unit might not be immutable.

The Locus of the Word Frequency Effect in Speech Production: Evidence from the Picture-word Interference Task (말소리 산출에서 단어빈도효과의 위치 : 그림-단어간섭과제에서 나온 증거)

  • Koo, Min-Mo; Nam, Ki-Chun
    • MALSORI / no.62 / pp.51-68 / 2007
  • Two experiments were conducted to determine the exact locus of the frequency effect in speech production. Experiment 1 addressed the question of whether the word frequency effect arises at the stage of lemma selection. A picture-word interference task was performed to test the significance of the interactions among the effects of target frequency, distractor frequency, and semantic relatedness. There was a significant interaction between distractor frequency and semantic relatedness and between target and distractor frequency. Experiment 2 examined whether the word frequency effect is attributable to the lexeme level, which represents the phonological information of words. The methodological logic applied in Experiment 2 was the same as in Experiment 1. There was no significant interaction between distractor frequency and phonological relatedness. These results demonstrate that word frequency influences the processes involved in selecting the correct lemma corresponding to an activated lexical concept in speech production.

An Algorithm for Text Image Watermarking based on Word Classification (단어 분류에 기반한 텍스트 영상 워터마킹 알고리즘)

  • Kim, Young-Won; Oh, Il-Seok
    • Journal of KIISE: Software and Applications / v.32 no.8 / pp.742-751 / 2005
  • This paper proposes a novel text image watermarking algorithm based on word classification. The words are classified into K classes using simple features. Several adjacent words are grouped into a segment, and the segments are also classified using the word class information. The same amount of information is inserted into each of the segment classes. The signal is encoded by modifying inter-word space statistics of the segment classes. Subjective comparisons with conventional word-shift algorithms are presented under several criteria.
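
A minimal sketch of the general idea of encoding bits in inter-word spaces, assuming word widths and gaps (in pixels) have already been extracted from the text image; the word classification rule and gap statistics are simplified placeholders, not the paper's algorithm:

```python
# Sketch: text watermarking by modifying inter-word spaces.
# Word widths and gaps are assumed to be pre-extracted; the class rule
# and the fixed +/-delta shift are illustrative only.
def classify_word(width, num_classes=3, max_width=120):
    """Toy stand-in for the paper's word classification: bucket by width."""
    return min(num_classes - 1, width * num_classes // max_width)

def embed(word_widths, gaps, bits, delta=1):
    """Widen or narrow each gap by delta pixels to encode one bit,
    keeping track of which class each carrier word belongs to."""
    marked, classes = [], []
    for width, gap, bit in zip(word_widths, gaps, bits):
        classes.append(classify_word(width))
        marked.append(gap + (delta if bit else -delta))
    return marked, classes

def extract(reference_gaps, marked_gaps):
    """Recover the bits by comparing marked gaps against reference statistics."""
    return [1 if m > r else 0 for r, m in zip(reference_gaps, marked_gaps)]

widths, gaps, payload = [40, 80, 55, 100], [12, 12, 12, 12], [1, 0, 1, 1]
marked, classes = embed(widths, gaps, payload)
print(marked, classes, extract(gaps, marked))
```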

Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model (Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소)

  • Kim, Kwang-Ho; Lee, Donghyun; Lim, Minkyu; Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.7 no.4 / pp.3-8 / 2015
  • In this paper, we investigate an input dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors were generated with Google's Word2Vec from a large training corpus, following the distributional hypothesis. The 1-of-$|V|$ coded discrete word vectors were then replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time was reduced from 30 days to 14 days on the Wall Street Journal training corpus (corpus length: 37M words).
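
A minimal sketch of the input-layer reduction idea, with a toy vocabulary and random vectors standing in for the Word2Vec vectors; the dimensions here are illustrative, not those of the paper:

```python
# Sketch: shrinking the input of a tri-gram neural LM by replacing 1-of-|V|
# one-hot inputs with pretrained continuous word vectors.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
V, D = len(vocab), 3                       # toy sizes; the paper uses far larger
word_index = {w: i for i, w in enumerate(vocab)}
word_vectors = np.random.rand(V, D)        # stands in for trained Word2Vec vectors

def one_hot_input(history):
    """Original input: concatenation of 1-of-|V| vectors for the history words."""
    x = np.zeros(len(history) * V)
    for pos, w in enumerate(history):
        x[pos * V + word_index[w]] = 1.0
    return x

def continuous_input(history):
    """Reduced input: concatenation of continuous vectors for the history words."""
    return np.concatenate([word_vectors[word_index[w]] for w in history])

history = ["the", "cat"]                   # a tri-gram LM sees two history words
print(one_hot_input(history).shape)        # (2 * V,) with the toy vocabulary
print(continuous_input(history).shape)     # (2 * D,) after the reduction
```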

Measuring Acoustical Parameters of English Words by the Position in the Phrases (영어어구의 위치에 따른 단어의 음향 변수 측정)

  • Yang, Byung-Gon
    • Speech Sciences / v.14 no.4 / pp.115-128 / 2007
  • The purposes of this paper were to develop an automatic script to collect acoustic parameters, such as duration, intensity, pitch, and the first two formant values, of English words produced by two native Canadian speakers either alone or in a two-word phrase at normal speed, and to compare those values by the position of the word in the phrase. A Praat script was proposed to obtain comparable parameters at evenly divided time points of the target word. Results showed that the total duration of a word in a phrase was shorter than that of the word produced alone. This was attributed to the pronunciation style of the native speakers, who generally placed the primary stress on the first word in the phrase. Also, the reduction ratio of the male speaker depended on the word position in the phrase while the female speaker's did not. Moreover, the contours of intensity and pitch differed by the position of the target word in the phrase, while almost the same formant patterns were observed. Further studies would be desirable to examine these parameters of words in authentic speech materials.
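
The original measurements were made with a Praat script; the sketch below illustrates the same kind of measurement in Python through the praat-parselmouth wrapper, with a hypothetical file name and evenly divided time points. It approximates the procedure rather than reproducing the authors' script:

```python
# Sketch: measuring duration, intensity, pitch, and F1/F2 of a word at evenly
# divided time points via the praat-parselmouth wrapper around Praat.
# "word.wav" is a hypothetical file containing just the target word.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("word.wav")
pitch = snd.to_pitch()
intensity = snd.to_intensity()
formant = snd.to_formant_burg()

duration = call(snd, "Get total duration")
points = [duration * i / 10 for i in range(1, 10)]   # 9 evenly divided points

for t in points:
    f0 = call(pitch, "Get value at time", t, "Hertz", "Linear")
    db = call(intensity, "Get value at time", t, "Cubic")
    f1 = call(formant, "Get value at time", 1, t, "Hertz", "Linear")
    f2 = call(formant, "Get value at time", 2, t, "Hertz", "Linear")
    print(f"t={t:.3f}s  F0={f0}  dB={db}  F1={f1}  F2={f2}")  # NaN where unvoiced
print("total duration (s):", duration)
```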

Exclusion of Non-similar Candidates using Positional Accuracy based on Levenstein Distance from N-best Recognition Results of Isolated Word Recognition (레벤스타인 거리에 기초한 위치 정확도를 이용한 고립 단어 인식 결과의 비유사 후보 단어 제외)

  • Yun, Young-Sun; Kang, Jeom-Ja
    • Phonetics and Speech Sciences / v.1 no.3 / pp.109-115 / 2009
  • Many isolated word recognition systems may generate non-similar words as recognition candidates because they use only acoustic information. In this paper, we investigate several techniques that can exclude non-similar words from the N-best candidate words by applying the Levenstein distance measure. First, word distance methods based on phone and syllable distances are considered. These methods use the plain Levenstein distance on the phones of the candidates or a double Levenstein distance algorithm on their syllables. Next, word similarity approaches are presented that use the positional information of the characters in the candidate words. Each character position is labeled as inserted, deleted, or correct after alignment between the source and target strings. The word similarities are obtained from the characters' positional probabilities, that is, the frequency ratio of observations of the same character at that position. The experimental results show that the proposed methods effectively remove non-similar words from the N-best recognition candidates without loss of system performance.
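
A minimal sketch of pruning an N-best list with the Levenshtein (edit) distance; the candidate list, the reference hypothesis, and the threshold are illustrative, and the paper's syllable-level and positional-probability variants are not shown:

```python
# Sketch: excluding non-similar words from an N-best list with edit distance.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def prune_nbest(nbest, max_distance=3):
    """Drop candidates whose distance to the top hypothesis is too large."""
    top = nbest[0]
    return [w for w in nbest if levenshtein(top, w) <= max_distance]

nbest = ["seoul", "soul", "school", "banana"]
print(prune_nbest(nbest))   # only 'banana' is dropped as non-similar
```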

Utterance Verification using Phone-Level Log-Likelihood Ratio Patterns in Word Spotting Systems (핵심어 인식기에서 단어의 음소레벨 로그 우도 비율의 패턴을 이용한 발화검증 방법)

  • Kim, Chong-Hyon; Kwon, Suk-Bong; Kim, Hoi-Rin
    • Phonetics and Speech Sciences / v.1 no.1 / pp.55-62 / 2009
  • This paper proposes an improved method to verify a keyword segment produced by a word spotting system. First, a baseline word spotting system is implemented. To improve the performance of word spotting systems, we use a two-pass structure consisting of a word spotting system and an utterance verification system. When the basic likelihood ratio test (LRT) based utterance verification system is used to verify the keywords, certain problems lead to performance degradation. Therefore, we propose a method that uses phone-level log-likelihood ratio (PLLR) patterns in computing confidence measures for each keyword. The proposed method generates weights according to the PLLR patterns and assigns different weights to each phone in the process of generating confidence measures for the keywords. The proposed method is shown to be more appropriate for word spotting systems and achieves an improvement in final word spotting accuracy.
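
A minimal sketch of turning phone-level log-likelihood ratios into a weighted keyword confidence measure; the scores, the weighting rule, and the accept/reject threshold are illustrative placeholders rather than the paper's trained weights:

```python
# Sketch: a weighted confidence measure from phone-level log-likelihood
# ratios (PLLR) for one detected keyword.
import math

def phone_llr(keyword_loglik, filler_loglik):
    """Log-likelihood ratio for one phone segment."""
    return keyword_loglik - filler_loglik

def keyword_confidence(phone_llrs):
    """Weight each phone's LLR by a softmax over |LLR|, so phones with
    strongly discriminative scores dominate the keyword-level measure."""
    weights = [math.exp(abs(r)) for r in phone_llrs]
    total = sum(weights)
    return sum(w / total * r for w, r in zip(weights, phone_llrs))

# Per-phone (keyword model, filler model) log-likelihoods for one detection.
segment_scores = [(-12.0, -14.5), (-9.3, -9.1), (-7.8, -11.0)]
llrs = [phone_llr(k, f) for k, f in segment_scores]
conf = keyword_confidence(llrs)
print("PLLRs:", llrs, "confidence:", round(conf, 3))
print("accept" if conf > 0.0 else "reject")   # illustrative threshold
```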