• Title/Summary/Keyword: chunking

Effects of Chunking on Reading Comprehension of EFL Learners: Silent vs. Oral Reading

  • Chu, Hera
    • English Language & Literature Teaching
    • /
    • v.16 no.3
    • /
    • pp.19-34
    • /
    • 2010
  • This study investigates how EFL learners' chunking ability in both oral and silent reading affects reading comprehension, and how chunking ability in silent reading relates to that in oral reading. The participants were 30 Korean university students taking a required 'English Reading' course. Chunking is a technique of grouping words into meaningful syntactic units for better understanding; here it was measured from pauses in oral reading. The results suggest that participants who chunk properly, both orally and silently, display better comprehension of texts in general. However, chunking in silent reading was found to be the stronger indicator of improved reading comprehension. Chunking skill in silent reading also showed a statistically strong correlation with that observed in oral reading, suggesting that chunking ability in silent reading may develop in parallel with that in oral reading. Oral as well as silent reading should therefore be practiced continuously to improve the reading comprehension of EFL learners at all proficiency levels, including low-level learners. Students should also be encouraged to read aloud with appropriate prosodic cues, helping them read in meaningful units of words and thereby increasing EFL learners' comprehension not only in reading but also in listening.

Processing Dependent Nouns Based on Chunking for Korean Syntactic Analysis (한국어 구문분석을 위한 구묶음 기반 의존명사 처리)

  • Park Eui-Kyu;Ra Dong-Yul
    • Korean Journal of Cognitive Science
    • /
    • v.17 no.2
    • /
    • pp.119-138
    • /
    • 2006
  • It is widely known that chunking is beneficial to syntactic analysis. This paper introduces a method of chunking that is useful for the structural analysis of Korean sentences. Dependent nouns in Korean tend to make sentences long and complex; by performing chunking operations related to dependent nouns, sentence complexity can be reduced and syntactic analysis made easier. With this aim in mind, we investigated chunking techniques for dependent nouns and proposed a variety of chunking schemes according to the type of dependent noun. Experiments showed that this chunking leads to a significant improvement in the performance of Korean syntactic analysis.
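
The paper's concrete chunking schemes are not reproduced in this listing, but the general idea can be sketched. Below is a minimal, hypothetical Python illustration: a dependent noun (tagged NNB in a Sejong-style tagset) is merged with its preceding modifier and any trailing particle into a single chunk, so the parser sees one unit instead of several. The tags and the merging rule are assumptions made for illustration, not the authors' actual schemes.

```python
# Hypothetical sketch: merge a Korean dependent noun (POS tag "NNB") with its
# neighbours into one chunk. Tokens are (surface, tag) pairs; tags are assumed.

def chunk_dependent_nouns(tagged):
    """Group each dependent noun with the preceding modifier and any
    following particle (tags starting with "J") into one chunk."""
    chunks, i = [], 0
    while i < len(tagged):
        word, tag = tagged[i]
        if tag == "NNB" and chunks:
            chunk = [chunks.pop(), (word, tag)]      # attach preceding modifier
            if i + 1 < len(tagged) and tagged[i + 1][1].startswith("J"):
                chunk.append(tagged[i + 1])          # attach trailing particle
                i += 1
            chunks.append(chunk)
        else:
            chunks.append((word, tag))
        i += 1
    return chunks

# e.g. "갈 수 있다": the dependent noun "수" merges with the adnominal "갈"
print(chunk_dependent_nouns([("갈", "ETM"), ("수", "NNB"), ("있다", "VA")]))
```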

Text Chunking by Rule and Lexical Information (규칙과 어휘정보를 이용한 한국어 문장의 구묶음(Chunking))

  • Kim, Mi-Young;Kang, Sin-Jae;Lee, Jong-Hyeok
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.06a
    • /
    • pp.103-109
    • /
    • 2000
  • This paper proposes applying a chunking step before full syntactic analysis for efficient Korean parsing. Although Korean word order is relatively free, regular ordering can be found within noun phrases and verb phrases, so rule-based chunking is applicable. Rules alone, however, cannot fully cover the grouping of noun phrases and verb phrases, so lexical information extracted from an experimental corpus is applied to the chunking process as well. Most existing parsing methods are based on phrase-structure grammar or dependency grammar; such parsers spend considerable time producing many candidate analyses, and pruning the incorrect results among them is difficult. By applying the chunking step proposed in this paper, incorrect parse results can be prevented in advance, and the scope of dependency relations in dependency-grammar parsing can also be restricted.
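
As a rough illustration of combining rules with lexical information (the paper's actual rule set and lexicon are not given in the abstract), the sketch below groups determiner/noun runs into an NP by POS pattern, while a small corpus-derived exception list blocks grouping for particular words. All tags, words, and the exception list are invented for the example.

```python
# Toy rule-plus-lexicon NP chunker. A POS-pattern rule extends an open NP over
# determiner/noun tags; a hypothetical lexicon of exceptions blocks grouping.

NO_CHUNK_LEXICON = {"바로"}   # invented corpus-derived exceptions

def np_chunk(tagged):
    chunks, current = [], []
    for word, tag in tagged:
        if tag in ("MM", "NNG", "NNP") and word not in NO_CHUNK_LEXICON:
            current.append(word)                 # extend the open NP
        else:
            if current:
                chunks.append(("NP", current))   # close the NP
                current = []
            chunks.append((tag, [word]))
    if current:
        chunks.append(("NP", current))
    return chunks

print(np_chunk([("그", "MM"), ("학교", "NNG"), ("는", "JX"), ("크다", "VA")]))
# -> [('NP', ['그', '학교']), ('JX', ['는']), ('VA', ['크다'])]
```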

Chunking Korean and an Application (한국어 낱말 묶기와 그 응용)

  • Un Koaunghi;Hong Jungha;You Seok-Hoon;Lee Kiyong;Choe Jae-Woong
    • Language and Information
    • /
    • v.9 no.2
    • /
    • pp.49-68
    • /
    • 2005
  • Application of chunking to English and some other European languages has shown that it is a viable parsing mechanism for natural languages. Although a small number of attempts have been made to apply chunking to Korean, it is still not clear what criteria identify appropriate chunking units, or how efficient and valid chunking algorithms are when applied to authentic Korean texts. The purpose of this research is to provide an alternative set of algorithms for chunking Korean, to implement them, and to test them against an English-Korean parallel corpus, namely English and Korean Bibles matched sentence by sentence. The paper shows that aligning related texts and identifying matched phrases between the two languages can be achieved through appropriate chunking and matching algorithms defined on the morphologically tagged parallel corpus. Chunking and matching are based on content words rather than function words, and the matching itself is done through a transfer dictionary. The implementation is done in C and XML, and can be accessed through the Internet.
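
The system itself is implemented in C and XML; the Python sketch below only illustrates the matching idea stated in the abstract: chunks from the two languages are paired through a transfer dictionary keyed on content words, with function words ignored. The dictionary entries and chunk data here are invented examples.

```python
# Toy content-word chunk matcher over a transfer dictionary (entries invented).

TRANSFER = {"god": "하나님", "earth": "땅", "heaven": "하늘"}

def match_chunks(en_chunks, ko_chunks):
    """Return pairs (en_index, ko_index) whose content words translate
    to each other according to the transfer dictionary."""
    pairs = []
    for i, en in enumerate(en_chunks):
        targets = {TRANSFER.get(w.lower()) for w in en}  # function words map to None
        targets.discard(None)
        for j, ko in enumerate(ko_chunks):
            if targets & set(ko):
                pairs.append((i, j))
    return pairs

en = [["In", "the", "beginning"], ["God"], ["created"], ["the", "heaven"]]
ko = [["태초"], ["하나님"], ["하늘", "땅"], ["창조"]]
print(match_chunks(en, ko))   # -> [(1, 1), (3, 2)]
```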

Dynamic Prime Chunking Algorithm for Data Deduplication in Cloud Storage

  • Ellappan, Manogar;Abirami, S
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1342-1359
    • /
    • 2021
  • The data deduplication technique identifies duplicates and minimizes redundant storage data on the backup server. Chunk-level deduplication plays a significant role in detecting appropriate chunk boundaries, addressing challenges such as low throughput and high chunk-size variance in the data stream. As a solution, we propose a new chunking algorithm called Dynamic Prime Chunking (DPC). The main goal of DPC is to dynamically change the window size within a prime value based on the minimum and maximum chunk size. According to the results, DPC provides high throughput and avoids significant chunk variance in the deduplication system. Implementation and experimental evaluation were performed on multimedia and operating-system datasets, and DPC was compared with existing algorithms such as Rabin, TTTD, MAXP, and AE. Chunk count, chunking time, throughput, processing time, bytes saved per second (BSPS), and deduplication elimination ratio (DER) are the performance metrics analyzed in our work. The analysis shows that throughput and BSPS improve: DPC improves throughput by more than 21% over AE, and BSPS increases by up to 11% over the existing AE algorithm. As a result, our algorithm minimizes total processing time and achieves higher deduplication efficiency than existing content-defined chunking (CDC) algorithms.
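
The abstract does not give enough detail to reproduce DPC itself, so the sketch below shows a generic content-defined chunking (CDC) baseline of the kind DPC is compared against: a rolling fingerprint declares a chunk boundary when its low bits are zero, subject to minimum and maximum chunk-size bounds. The fingerprint, mask, and size constants are illustrative, not taken from the paper.

```python
# Generic CDC baseline (illustrative constants, toy fingerprint).
import os

MIN_SIZE, MAX_SIZE, MASK = 2048, 65536, 0x1FFF   # boundary odds ~1/8192 per byte

def cdc_chunks(data: bytes):
    chunks, start, fp = [], 0, 0
    for i, b in enumerate(data):
        fp = ((fp << 1) + b) & 0xFFFFFFFF        # toy rolling fingerprint
        length = i - start + 1
        if length >= MIN_SIZE and ((fp & MASK) == 0 or length >= MAX_SIZE):
            chunks.append(data[start:i + 1])     # declare a boundary here
            start, fp = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])              # trailing remainder
    return chunks

print([len(c) for c in cdc_chunks(os.urandom(100_000))])
```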

Improving Parsing Efficiency Using Chunking in Chinese-Korean Machine Translation (중한번역에서 구 묶음을 이용한 파싱 효율 개선)

  • 양재형;심광섭
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.8
    • /
    • pp.1083-1091
    • /
    • 2004
  • This paper presents a chunking system employed as a preprocessing module for the parser in a Chinese-to-Korean machine translation system. The parser benefits from the dependency information provided by the chunking module. The chunking system was implemented using the transformation-based learning technique, and an effective interface that conveys the dependency information to the parser was also devised. The module was integrated into the machine translation system, and experiments were performed on corpora collected from Chinese websites. The experimental results show that introducing the chunking module yields a noticeable improvement in the parser's performance.
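
Transformation-based learning, the technique named above, can be compressed into a few lines: start from a baseline chunk tag per token, then greedily adopt whichever candidate rule most reduces errors against the gold tags, and repeat until no rule helps. The tags and rule format below are invented for the example and do not reflect the paper's actual rule templates.

```python
# Brill-style transformation-based learning, reduced to a toy (assumed tags).

def apply_rule(tags, pos, rule):
    """rule = (from_tag, to_tag, left_pos): retag a token whose current tag is
    from_tag when the previous token's POS equals left_pos."""
    frm, to, left = rule
    return [to if t == frm and i > 0 and pos[i - 1] == left else t
            for i, t in enumerate(tags)]

def errors(tags, gold):
    return sum(a != g for a, g in zip(tags, gold))

def tbl_train(pos, gold, candidates, max_rounds=10):
    tags = ["B-NP" if p.startswith("N") else "O" for p in pos]  # baseline tagging
    learned = []
    for _ in range(max_rounds):
        best = min(candidates, key=lambda r: errors(apply_rule(tags, pos, r), gold))
        if errors(apply_rule(tags, pos, best), gold) >= errors(tags, gold):
            break                        # no candidate rule improves: stop
        tags = apply_rule(tags, pos, best)
        learned.append(best)
    return learned, tags

rules, out = tbl_train(["N", "N", "V"], ["B-NP", "I-NP", "O"],
                       [("B-NP", "I-NP", "N")])
print(rules, out)   # -> [('B-NP', 'I-NP', 'N')] ['B-NP', 'I-NP', 'O']
```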

File Modification Pattern Detection Mechanism Using File Similarity Information

  • Jung, Ho-Min;Ko, Yong-Woong
    • International journal of advanced smart convergence
    • /
    • v.1 no.1
    • /
    • pp.34-37
    • /
    • 2012
  • In a storage system, deduplication performance can be increased by considering the file modification pattern. For example, if a file is modified only at the end, a fixed-length chunking algorithm is superior to variable-length chunking. It is therefore important to predict where in a file modifications occur between versions. The essential idea of this paper is an efficient file-pattern checking scheme for data deduplication systems: the detected modification pattern can be used to select the appropriate deduplication algorithm. Experimental results show that the proposed system can predict the modified region of a file with high probability.
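
A minimal sketch of the idea follows, under the assumption (not stated in the abstract) that the modified region is located by common prefix/suffix comparison of two versions: appends near the end of the file favour fixed-length chunking, while mid-file edits favour variable-length chunking. The 90% threshold is illustrative.

```python
# Locate the differing region between two file versions and pick a strategy.

def modified_region(old: bytes, new: bytes):
    """Return (start, end) of the differing region via common prefix/suffix."""
    p = 0
    while p < min(len(old), len(new)) and old[p] == new[p]:
        p += 1
    s = 0
    while (s < min(len(old), len(new)) - p
           and old[len(old) - 1 - s] == new[len(new) - 1 - s]):
        s += 1
    return p, len(new) - s

def pick_chunking(old: bytes, new: bytes) -> str:
    start, _end = modified_region(old, new)
    # Edits confined to the last 10% of the file: fixed-length wins (assumed cut-off).
    return "fixed-length" if start >= 0.9 * len(old) else "variable-length"

print(pick_chunking(b"A" * 1000, b"A" * 1000 + b"tail"))             # fixed-length
print(pick_chunking(b"A" * 500 + b"B" * 500,
                    b"A" * 500 + b"X" + b"B" * 500))                 # variable-length
```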

Efficient Deduplication Scheme on Fixed-length Chunking System Using File Similarity Information (파일유사도 정보를 이용한 고정 분할 기반 중복 제거 기법)

  • Moon, Young Chan;Jung, Ho Min;Ko, Young Woong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2012.04a
    • /
    • pp.202-205
    • /
    • 2012
  • Conventional fixed-length chunking (FLC) deduplication has the problem that, once a file is modified even slightly, the hash of the modified block changes, so duplicate data is no longer found as a duplicate block. In this study, we design and implement a deduplication scheme that exploits similarity information (FS_FLC: File Similarity based Fixed Length Chunking), which uses data offset information on top of FLC-based deduplication to find duplicate blocks efficiently, thereby outperforming conventional FLC-based deduplication. Experimental results show that the proposed algorithm achieves duplicate-detection performance as high as variable-length chunking (VLC) at low overhead.
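
For orientation, the sketch below is a baseline FLC deduplicator that indexes fixed-size blocks by hash; the FS_FLC scheme described above additionally uses per-chunk offset information from a similar file to re-locate shifted blocks, which this baseline omits.

```python
# Baseline fixed-length chunking (FLC) deduplication with a hash index.
import hashlib

CHUNK = 4096
index = {}                      # sha1 digest -> (file id, offset) of first copy

def dedup_flc(file_id: str, data: bytes):
    unique, dup = 0, 0
    for off in range(0, len(data), CHUNK):
        h = hashlib.sha1(data[off:off + CHUNK]).digest()
        if h in index:
            dup += 1            # duplicate block: store only a reference
        else:
            index[h] = (file_id, off)
            unique += 1
    return unique, dup

print(dedup_flc("v1", b"x" * 8192))         # (1, 1): both blocks identical
print(dedup_flc("v2", b"x" * 8192 + b"y"))  # (1, 2): only the 1-byte tail is new
```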

Design of Adaptive Deduplication Algorithm Based on File Type and Size (파일 유형과 크기에 따른 적응형 중복 제거 알고리즘 설계)

  • Hwang, In-Cheol;Kwon, Oh-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.2
    • /
    • pp.149-157
    • /
    • 2020
  • Today, the large amount of duplicate data caused by growth in user data has prompted various deduplication studies, but research on personal storage remains relatively scarce. Personal storage, unlike high-performance computers, must perform deduplication while limiting CPU and memory usage. In this paper, we propose an adaptive algorithm that selectively applies fixed-size chunking (FSC) and whole file hashing (WFH) according to file type and size, in order to maintain the deduplication rate while minimizing load on personal storage. Experimental results show that, compared with LessFS, the proposed file system is more than 1.3 times slower on the first write but reduces memory usage by up to a factor of three, and is 2.5 times faster on rewrite operations.
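
A hedged sketch of the selection step: the abstract says FSC or WFH is chosen by file type and size, so the toy function below routes small files and already-compressed media to whole-file hashing and everything else to fixed-size chunking. The size threshold and extension list are assumptions, not the paper's values.

```python
# Toy adaptive strategy selector (threshold and extensions are assumptions).

COMPRESSED = {".jpg", ".mp3", ".mp4", ".zip"}
SMALL_FILE = 16 * 1024          # 16 KiB, illustrative cut-off

def choose_strategy(name: str, size: int) -> str:
    ext = name[name.rfind("."):].lower() if "." in name else ""
    if size <= SMALL_FILE or ext in COMPRESSED:
        return "WFH"            # hash the whole file as one unit
    return "FSC"                # split into fixed-size blocks

print(choose_strategy("movie.mp4", 700 * 1024 * 1024))  # WFH
print(choose_strategy("log.txt", 10 * 1024 * 1024))     # FSC
```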