REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartB
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 19B, Issue 4 - Aug 2012
Volume 19B, Issue 3 - Jun 2012
Volume 19B, Issue 2 - Apr 2012
Volume 19B, Issue 1 - Feb 2012
Photomosaic Algorithm with Adaptive Tiling and Block Matching
Seo, Sung-Jin ; Kim, Ki-Wong ; Kim, Sun-Myeng ; Lee, Hae-Yeoun ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 1~8
DOI : 10.3745/KIPSTB.2012.19B.1.001
A mosaic is a large image assembled from many small elements of various colors. With the advance of digital imaging techniques, photomosaic techniques that use photos as the elements have become widely used. In this paper, we present an automatic photomosaic algorithm based on adaptive tiling and block matching. The proposed algorithm consists of two processes: photo database generation and photomosaic generation. The photo database is a set of photos (tiles) used for the mosaic; each tile is divided into regions, and the average RGB value of each region serves as the tile's feature. Photomosaic generation consists of four steps: feature extraction, adaptive tiling, block matching, and intensity adjustment. In feature extraction, the image is split into blocks of a preset size and the feature of each block is calculated. In adaptive tiling, adjacent blocks with similar features are merged. In block matching, each block is compared with the tiles in the photo database using Euclidean distance as the similarity measure. Finally, in intensity adjustment, the intensity of the matched tile is adjusted to that of the block to increase their similarity. A scheme that minimizes tile redundancy among adjacent blocks is also applied to enhance the quality of the mosaic. In comparison with the AndreaMosaic software, the proposed algorithm performs better in both quantitative and qualitative analysis.
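As a rough illustration of the block-matching and intensity-adjustment steps, the following sketch represents each tile by a single average-RGB feature (the paper keeps one feature per tile region); all function names are illustrative, not from the paper.

```python
import math

def avg_rgb(pixels):
    """Average (R, G, B) of a list of pixels -- the feature of a block or tile region."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def best_tile(block_feature, tile_features):
    """Block matching: index of the tile minimizing Euclidean distance in RGB space."""
    def dist(t):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(block_feature, t)))
    return min(range(len(tile_features)), key=lambda i: dist(tile_features[i]))

def adjust_intensity(tile_pixels, tile_feature, block_feature):
    """Intensity adjustment: shift the matched tile's mean color toward the block's."""
    delta = [b - t for b, t in zip(block_feature, tile_feature)]
    return [tuple(min(255, max(0, round(p[c] + delta[c]))) for c in range(3))
            for p in tile_pixels]
```

A tile-redundancy check, as in the paper, would additionally exclude tiles already used by neighboring blocks before calling best_tile.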
Interval-based Audio Integrity Authentication Algorithm using Reversible Watermarking
Yeo, Dong-Gyu ; Lee, Hae-Yeoun ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 9~18
DOI : 10.3745/KIPSTB.2012.19B.1.009
Most audio watermarking methods adopted for content authentication cannot recover the original media after the watermark is removed. Reversible watermarking is therefore an effective way to ensure the integrity of audio data in applications handling highly confidential audio content: it inserts the watermark while preserving perceptual transparency, and it enables restoration of the original media from the watermarked one without any loss of quality. This paper presents a new interval-based audio integrity authentication algorithm that can detect malicious tampering. To provide complete reversibility, we use differential histogram-based reversible watermarking. To authenticate the audio in parts rather than all at once, the proposed algorithm divides the audio into intervals and verifies authentication within each interval. Through experiments on multiple kinds of test data, we show that the presented algorithm provides an authentication rate of over 99%, complete reversibility, and high perceptual quality, while keeping the induced distortion low.
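The paper's differential histogram method is not reproduced here, but the reversibility it relies on can be illustrated with plain histogram shifting on integer samples; for interval-based authentication, one would run such an embed/extract pair on each interval independently. This is a simplified sketch, not the paper's algorithm, and the capacity is limited to the count of the peak value.

```python
from collections import Counter

def embed(samples, bits):
    """Histogram-shifting reversible embed: returns watermarked samples and a key."""
    hist = Counter(samples)
    peak = max(hist, key=hist.get)          # most frequent sample value
    zero = peak + 1
    while hist.get(zero, 0) != 0:           # nearest empty bin above the peak
        zero += 1
    out, it = [], iter(bits)
    for v in samples:
        if peak < v < zero:
            out.append(v + 1)               # shift to free the bin at peak + 1
        elif v == peak:
            out.append(peak + next(it, 0))  # peak -> bit 0, peak + 1 -> bit 1
        else:
            out.append(v)
    return out, (peak, zero)

def extract(marked, key):
    """Recover the embedded bits and restore the original samples exactly."""
    peak, zero = key
    bits, restored = [], []
    for v in marked:
        if v in (peak, peak + 1):
            bits.append(v - peak)
        restored.append(v - 1 if peak < v <= zero else v)
    return bits, restored
```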
Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques
Nguyen, Ngoc ; Kang, Myeong-Su ; Kim, Cheol-Hong ; Kim, Jong-Myon ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 19~26
DOI : 10.3745/KIPSTB.2012.19B.1.019
The rapid increase of information imposes new demands on content management, and automatic audio segmentation and classification aims to meet this rising need. To this end, this paper proposes a high-accuracy algorithm that segments audio signals and classifies them into classes such as speech, music, silence, and environmental sounds. The proposed algorithm uses a support vector machine (SVM) to detect audio-cuts (boundaries between different kinds of sound) from the parameter sequence. We then extract feature vectors composed of statistical data and use them as input to a fuzzy c-means (FCM) classifier that partitions the audio segments into classes. To evaluate the proposed SVM-FCM algorithm, we measure precision and recall for segmentation and accuracy for classification. We also compare the proposed algorithm with other methods, including binary and FCM classifiers, in terms of segmentation performance. Experimental results show that the proposed algorithm outperforms the other methods in both precision and recall.
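A minimal sketch of the fuzzy c-means updates behind the classification stage, run here on 1-D feature values (the paper's features are multi-dimensional statistics; the evenly spaced initialization is an assumption of this sketch):

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns cluster centers and memberships u[i][k]."""
    step = max(1, len(points) // c)
    centers = points[::step][:c]                 # naive evenly spaced init
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for i in range(c):
            row = []
            for x in points:
                d_i = abs(x - centers[i]) or 1e-12
                s = sum((d_i / (abs(x - centers[j]) or 1e-12)) ** (2 / (m - 1))
                        for j in range(c))
                row.append(1.0 / s)
            u.append(row)
        # center update: mean of the points weighted by u^m
        centers = [sum((u[i][k] ** m) * x for k, x in enumerate(points)) /
                   sum(u[i][k] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u
```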
Particle Filter Localization Using Noisy Models
Kim, In-Cheol ; Kim, Seung-Yeon ; Kim, Hye-Suk ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 27~30
DOI : 10.3745/KIPSTB.2012.19B.1.027
One of the most fundamental functions required of an intelligent agent is to estimate its current position from uncertain sensor data. In this paper, we describe the implementation of a robot localization system using particle filters, among the most effective probabilistic localization methods, and present experimental results evaluating its performance. By comparing a noise-free state transition model with a noisy one that accounts for the inherent errors of robot actions, we show that applying a state transition model that closely approximates the uncertainty of real robot actions improves the performance of particle filter localization.
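The predict-weight-resample cycle of a particle filter can be sketched for a 1-D robot as follows; the Gaussian sensor model and the noise parameters are illustrative assumptions. The additive motion noise plays the role of the paper's noisy state transition model.

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.2, sensor_sigma=0.5):
    """One predict-weight-resample cycle of a 1-D particle filter."""
    # 1. predict: apply the control with additive Gaussian action noise
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # 2. weight: Gaussian likelihood of the position measurement
    weights = [math.exp(-((measurement - p) ** 2) / (2 * sensor_sigma ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. resample: draw particles in proportion to their weights
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeating the step with a fixed measurement concentrates the particle cloud around the true position, which is the behavior the paper's experiments evaluate.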
A Reinforcement Learning Approach to Collaborative Filtering Considering Time-sequence of Ratings
Lee, Jung-Kyu ; Oh, Byong-Hwa ; Yang, Ji-Hoon ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 31~36
DOI : 10.3745/KIPSTB.2012.19B.1.031
In recent years, there has been increasing interest in recommender systems, which provide users with personalized suggestions for products or services. In particular, research on collaborative filtering, which analyzes the relations between users and items, has become more active since the Netflix Prize competition. This paper presents a reinforcement learning approach to collaborative filtering. By applying reinforcement learning to movie ratings, we discovered the connection between the time sequence of past ratings and current ratings. To do so, we first formulated collaborative filtering as a Markov decision process and then used Q-learning to train a model that reflects this connection. The experimental results indicate that the time sequence of past ratings has a significant effect on current ratings.
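A toy rendering of such an MDP formulation, assuming, purely as an illustration and not as the paper's exact design, that the state is the previous rating, the action is the predicted current rating, and the reward is the negative prediction error:

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3, 4, 5)   # possible predicted ratings

def train_q(rating_sequence, alpha=0.1, gamma=0.9, epsilon=0.1, episodes=200):
    """Tabular Q-learning over a user's time-ordered ratings."""
    Q = defaultdict(float)
    for _ in range(episodes):
        for t in range(1, len(rating_sequence)):
            s = rating_sequence[t - 1]
            if random.random() < epsilon:
                a = random.choice(ACTIONS)                     # explore
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])      # exploit
            r = -abs(a - rating_sequence[t])                   # prediction error
            s2 = rating_sequence[t]
            best_next = max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q

def predict(Q, prev_rating):
    """Greedy prediction of the current rating from the previous one."""
    return max(ACTIONS, key=lambda a: Q[(prev_rating, a)])
```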
Query Expansion based on Word Graph using Term Proximity
Jang, Kye-Hun ; Lee, Kyung-Soon ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 37~42
DOI : 10.3745/KIPSTB.2012.19B.1.037
Pseudo-relevance feedback assumes that frequent words in the top-ranked documents are related to the initial query. The main drawback of this term-frequency method is that it relies on feature independence and disregards any dependencies that may exist between words in the text. In this paper, we propose query expansion based on a word graph built using term proximity, which supplements the term-frequency method. On the TREC WT10g test collection, experimental results show that the proposed method achieved a 6.4% improvement in MAP (mean average precision) over the language model.
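A minimal sketch of a proximity-weighted word graph over the top-ranked documents; the distance-decayed weight and the window size are illustrative assumptions, not the paper's exact formula.

```python
from collections import defaultdict

def build_word_graph(docs, query_terms, window=3):
    """Edge weight = proximity-decayed co-occurrence with a query term."""
    graph = defaultdict(float)
    for doc in docs:
        tokens = doc.lower().split()
        for i, w in enumerate(tokens):
            if w in query_terms:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i and tokens[j] not in query_terms:
                        graph[tokens[j]] += 1.0 / abs(j - i)  # closer -> heavier
    return graph

def expand_query(query_terms, docs, k=2, window=3):
    """Add the k highest-weighted neighbors in the graph to the query."""
    graph = build_word_graph(docs, set(query_terms), window)
    top = sorted(graph, key=graph.get, reverse=True)[:k]
    return list(query_terms) + top
```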
A Development of the Automatic Predicate-Argument Analyzer for Construction of Semantically Tagged Korean Corpus
Cho, Jung-Hyun ; Jung, Hyun-Ki ; Kim, Yu-Seop ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 43~52
DOI : 10.3745/KIPSTB.2012.19B.1.043
Semantic role labeling analyzes the semantic relationships between the elements of a sentence and is considered one of the most important semantic analysis tasks in natural language processing, alongside word sense disambiguation. However, due to the lack of relevant linguistic resources, Korean semantic role labeling research has not been sufficiently developed. In this paper, we propose an automatic predicate-argument analyzer as a first step toward constructing a Korean PropBank, a resource widely used in semantic role labeling. The analyzer has two main components: a semantic lexical dictionary and an automatic predicate-argument extractor. The dictionary holds case frame information for verbs, and the extractor decides the semantic class of each argument of a given predicate in a syntactically annotated corpus. The analyzer developed in this research will help construct the Korean PropBank and ultimately support Korean semantic role labeling.
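The extractor's lookup against the case frame dictionary can be sketched as follows; the romanized entry, particle keys, and role labels are hypothetical examples, not the dictionary's actual contents.

```python
# hypothetical case frame dictionary: predicate -> {case particle: semantic role}
CASE_FRAMES = {
    "meokda": {"i/ga": "ARG0-agent", "eul/reul": "ARG1-theme"},  # 'to eat'
}

def label_arguments(predicate, arguments):
    """Assign a semantic class to each (noun, case-particle) pair of a predicate,
    mimicking a dictionary lookup over a syntactically annotated sentence."""
    frame = CASE_FRAMES.get(predicate, {})
    return [(noun, frame.get(case, "ARGM-unknown")) for noun, case in arguments]
```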
Gathering Common-word and Document Reclassification to improve Accuracy of Document Clustering
Shin, Joon-Choul ; Ock, Cheol-Young ; Lee, Eung-Bong ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 53~62
DOI : 10.3745/KIPSTB.2012.19B.1.053
Clustering is used to deal efficiently with the many documents retrieved by an information retrieval system, but its accuracy meets the requirements of only some domains. This paper proposes two methods to increase clustering accuracy. First, we define a common-word as a word that is used frequently but should carry low weight during clustering, and we propose a method that automatically gathers common-words and calculates their weights from the retrieved documents. In our experiments, the clustering error rate using common-words was 34% lower than that of clustering using a stop-word list. Second, after generating initial clusters from the retrieved documents with average-link clustering, we propose an algorithm that re-evaluates the similarity between each document and the clusters and reclassifies the document into a more similar cluster. In experiments using the Naver JiSikIn category, the accuracy of the reclassified clusters was 1.81% higher than that of the initial clusters without reclassification.
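One plausible sketch of gathering common-words from the retrieved set: words whose document-frequency ratio exceeds a threshold are treated as common-words and given zero weight, others an idf-style weight. The threshold and the weighting formula are illustrative assumptions, not the paper's method.

```python
import math
from collections import Counter

def common_word_weights(docs, threshold=0.5):
    """Down-weight words that appear across much of the retrieved document set."""
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc.lower().split()))     # document frequency per word
    weights = {}
    for w, f in df.items():
        # common-word: appears in >= threshold of the documents -> weight 0
        weights[w] = 0.0 if f / N >= threshold else math.log(N / f)
    return weights
```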
Korean Compound Noun Decomposition and Semantic Tagging System using User-Word Intelligent Network
Lee, Yong-Hoon ; Ock, Cheol-Young ; Lee, Eung-Bong ;
The KIPS Transactions:PartB, volume 19B, issue 1, 2012, Pages 63~76
DOI : 10.3745/KIPSTB.2012.19B.1.063
We propose a Korean compound noun semantic tagging system that uses statistical compound noun decomposition and semantic relation information extracted from a lexical semantic network (U-WIN) and dictionary definitions. The system consists of three phases: compound noun decomposition, semantic constraint, and semantic tagging. In compound noun decomposition, the best candidates are selected using noun location frequencies extracted from the Sejong corpus; nouns are then re-decomposed for the semantic constraint phase, and foreign nouns are restored. The semantic constraint phase finds possible semantic combinations using origin information in the dictionary and a naive Bayes classifier, in order to decrease computation time and increase the accuracy of semantic tagging. The semantic tagging phase calculates the semantic similarity between the decomposed nouns and decides the semantic tags. We constructed an experimental data set of 40,717 semantically tagged compound nouns of three or more characters from the Standard Korean Language Dictionary. In the experiments, the accuracy of compound noun decomposition was 99.26% and the accuracy of semantic tagging was 95.38%.
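The decomposition step can be sketched as dynamic programming over a noun-frequency table; the romanized entries, their frequencies, and the additive score are illustrative assumptions standing in for the paper's location-frequency statistics.

```python
# hypothetical frequency table: noun -> corpus frequency
FREQ = {"hak": 5, "gyo": 2, "hakgyo": 50,
        "doseo": 40, "gwan": 30, "doseogwan": 80}

def decompose(word, freq=FREQ):
    """Best segmentation of a compound noun by dynamic programming:
    maximize the summed frequency of the dictionary pieces."""
    n = len(word)
    # best[i] = (score, segmentation) of the best split of word[:i]
    best = [(0.0, [])] + [(-1.0, None)] * n
    for i in range(n):
        if best[i][1] is None:          # prefix not reachable with dictionary pieces
            continue
        for j in range(i + 1, n + 1):
            piece = word[i:j]
            if piece in freq:
                score = best[i][0] + freq[piece]
                if score > best[j][0]:
                    best[j] = (score, best[i][1] + [piece])
    return best[n][1]                   # None if no full segmentation exists
```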