REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartB
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 16B, Issue 6 - Dec 2009
Volume 16B, Issue 5 - Oct 2009
Volume 16B, Issue 4 - Aug 2009
Volume 16B, Issue 3 - Jun 2009
Volume 16B, Issue 2 - Apr 2009
Volume 16B, Issue 1 - Feb 2009
Adaptive Image Restoration Considering the Edge Direction
Jeon, Woo-Sang ; Lee, Myung-Sub ; Jang, Ho ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 1~6
DOI : 10.3745/KIPSTB.2009.16-B.1.1
It is very difficult to restore images degraded by motion blur and additive noise. Conventional methods usually apply regularization uniformly to the whole image without considering its local characteristics; as a result, ringing artifacts appear in edge regions and noise is amplified in flat regions. To solve these problems, we propose an adaptive iterative regularization method that chooses the regularization operator according to the edge direction. In addition, we suggest an adaptive regularization parameter and relaxation parameter. We have verified that the new method suppresses noise amplification in flat regions and produces fewer ringing artifacts in edge regions. Furthermore, it yields better images and higher ISNR than conventional methods.
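The iterative regularized restoration that the method adapts can be sketched in 1-D. This is a minimal, non-adaptive version with a fixed regularization parameter `alpha`, a fixed relaxation parameter `beta`, and a plain Laplacian smoothness operator in place of the paper's edge-direction-dependent operator; all names and values are illustrative, not taken from the paper.

```python
def conv_same(x, k):
    """Zero-padded 'same' convolution of signal x with an odd-length kernel k."""
    r = len(k) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, kv in enumerate(k):
            idx = i + j - r
            if 0 <= idx < len(x):
                s += kv * x[idx]
        out.append(s)
    return out

def restore(g, h, alpha=0.001, beta=0.5, iters=300):
    """Iterative regularized restoration in 1-D:
        f <- f + beta * (H^T (g - H f) - alpha * C^T C f),
    with a symmetric blur kernel h (so H^T is the same convolution)
    and a Laplacian smoothness operator C."""
    lap = [-1.0, 2.0, -1.0]
    f = list(g)
    for _ in range(iters):
        resid = [gi - fi for gi, fi in zip(g, conv_same(f, h))]
        data_term = conv_same(resid, h)                      # H^T (g - H f)
        smooth_term = conv_same(conv_same(f, lap), lap)      # C^T C f
        f = [fi + beta * (d - alpha * s)
             for fi, d, s in zip(f, data_term, smooth_term)]
    return f
```

With an identity blur and a small `alpha`, the iteration converges to a lightly smoothed copy of the observation; the paper's contribution is to vary `alpha` and the operator according to the local edge direction.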
A Study of Post-processing Methods of Clustering Algorithm and Classification of the Segmented Regions
Oh, Jun-Taek ; Kim, Bo-Ram ; Kim, Wook-Hyun ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 7~16
DOI : 10.3745/KIPSTB.2009.16-B.1.7
Some clustering algorithms over-segment an image because they neither consider spatial information between the segmented regions nor determine the number of clusters automatically, which makes them difficult to apply in practical fields. This paper proposes new post-processing methods that improve the segmentation results of clustering algorithms: a reclassification of inhomogeneous clusters and a region merging using a Bayesian algorithm. In the reclassification step, an inhomogeneous cluster is first selected based on variance and between-class distance, and its elements are reassigned to the other clusters; this is repeated until the optimal number of clusters, determined by the minimum average within-class distance, is reached. Similar regions are then merged using a Bayesian algorithm based on the Kullback-Leibler distance between adjacent regions. In this way the over-segmentation problem is effectively solved and the result can be applied in practical fields. Finally, we design a classification system for the segmented regions to validate the proposed method: the regions are classified by an SVM (Support Vector Machine) using their principal colors and texture information. In experiments, the proposed method proved valid for various real images and was effectively applied to the designed classification system.
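The Bayesian merging step compares adjacent regions through a Kullback-Leibler distance. A minimal sketch, assuming each region's feature (e.g. intensity) is summarized as a 1-D Gaussian `(mean, variance)` and using the closed-form KL divergence between Gaussians; the threshold value is illustrative, not the paper's.

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """KL(N(m1, v1) || N(m2, v2)) for 1-D Gaussians, closed form."""
    return math.log(math.sqrt(v2 / v1)) + (v1 + (m1 - m2) ** 2) / (2.0 * v2) - 0.5

def symmetric_kl(m1, v1, m2, v2):
    """Symmetrized KL distance between two region statistics."""
    return kl_gauss(m1, v1, m2, v2) + kl_gauss(m2, v2, m1, v1)

def should_merge(r1, r2, threshold=1.0):
    """Merge two adjacent regions r = (mean, variance) when their
    symmetric KL distance falls below the threshold."""
    return symmetric_kl(*r1, *r2) < threshold
```

Regions with nearly identical statistics give a distance near zero and are merged; well-separated regions give a large distance and stay apart.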
Detection Algorithm of Crossroad Traffic Accident Using the Sequence of Traffic Lights
Jeong, Sung-Hwan ; Lee, Joon-Whoan ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 17~24
DOI : 10.3745/KIPSTB.2009.16-B.1.17
This paper proposes an algorithm that detects accidents within a crossroad by using a background image together with the sequence of the traffic lights installed at the crossroad. Existing image-based methods have the problem that the accident detection ratio drops for new accident models, confused situations, or heavy noise. To reduce misjudgments caused by external shadows, stopped vehicles, vehicle headlights, and other environmental influences, this study developed a filter that uses the histogram properties of the traffic-light sequence and the background image. In an experiment with 15 recorded real-accident videos, the proposed algorithm detected the accident in all 15 videos; the accident within the crossroad could be detected even for a new accident model.
ESP : A DVR File Format for Enhanced Recording and Searching
Park, Jae-Kyung ; Yang, Seung-Min ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 25~34
DOI : 10.3745/KIPSTB.2009.16-B.1.25
A DVR (Digital Video Recorder) system stores video inputs in compressed digital formats and retrieves them. DVR systems have several advantages over traditional analog tape recorders: (1) improved real-time monitoring, recording, and searching, and (2) additional capabilities such as watermarking and remote monitoring through a network. AVI is the most popular format used in DVR systems, but it has drawbacks in recording and searching due to structural problems; some vendors develop their own formats but do not open them to the public. In this paper, the ESP format is proposed. ESP solves the drawbacks of the AVI format while retaining its advantages, and additionally provides multi-stream recording/replay and event recording. As a result, the ESP format enhances the recording and searching functionality of DVR systems.
Lane Departure Warning Algorithm Through Single Lane Extraction and Center Point Analysis
Bae, Jung-Ho ; Kim, Soo-Woong ; Lee, Hae-Yeoun ; Lee, Hyun-Ah ; Kim, Byeong-Man ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 35~46
DOI : 10.3745/KIPSTB.2009.16-B.1.35
This paper addresses lane extraction and lane departure warning algorithms that use an image sensor mounted in the vehicle. With the growth of research on intelligent vehicles, many algorithms for lane recognition and lane departure warning have been proposed. However, since these algorithms require detecting both lane boundaries, high time complexity and low recognition rates under various driving conditions are critical problems. In this paper, we present a lane departure warning algorithm based on single-lane extraction and center-point analysis that achieves fast processing and a high detection rate. From the geometry between the camera and the scene, a region of interest (ROI) is determined and split into two parts, and a Hough transform detects the lane in each part. After the detected lane is rescaled to a predetermined size, lane departure is estimated by calculating the distance from the center point. The presented algorithm was compared with previous algorithms in real driving environments, and the experimental results support that it is fast and accurate.
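The Hough-transform step votes in (rho, theta) space for the dominant line among the edge points of each ROI half. A minimal pure-Python sketch (a real system would run an edge detector first and use an optimized implementation); the resolutions are illustrative.

```python
import math
from collections import Counter

def hough_best_line(points, theta_steps=180, rho_res=1.0):
    """Return (rho, theta_degrees, votes) of the most-voted line
    through the given edge points, using the rho-theta parameterization
    rho = x*cos(theta) + y*sin(theta)."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)] += 1   # vote in the accumulator
    (rho_bin, t), votes = acc.most_common(1)[0]
    return rho_bin * rho_res, 180.0 * t / theta_steps, votes
```

Once the lane line is found in each ROI half, departure can be flagged when the lane's center point drifts too far from the image center, as the abstract describes.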
Automatic Object Recognition in 3D Measuring Data
Ahn, Sung-Joon ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 47~54
DOI : 10.3745/KIPSTB.2009.16-B.1.47
Automatic object recognition in 3D measuring data is of great interest in many application fields, e.g. computer vision, reverse engineering, and the digital factory. In this paper we present a software tool for fully automatic object detection and parameter estimation in unordered and noisy point clouds with a large number of data points. The software consists of three interactive modules for model selection, point segmentation, and model fitting, in which orthogonal distance fitting (ODF) plays an important role. The ODF algorithms estimate model parameters by minimizing the sum of squared shortest distances between the model feature and the measurement points. A local quadric surface, fitted through ODF to a randomly touched small initial patch of the point cloud, provides the initial information needed by the overall procedure of model selection, point segmentation, and model fitting. The performance of the presented software tool is demonstrated by applying it to point clouds.
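The core of ODF, minimizing the sum of squared shortest (orthogonal) distances from the data to the model, can be illustrated on the simplest curved model, a circle, where the orthogonal distance has a closed form. This gradient-descent sketch is illustrative only; the paper's software handles general quadric surfaces in 3-D with more elaborate algorithms.

```python
import math

def fit_circle_odf(points, iters=500, lr=0.1):
    """Fit a circle (a, b, r) by gradient descent on the sum of squared
    orthogonal distances e_i = |p_i - (a, b)| - r."""
    n = len(points)
    a = sum(x for x, _ in points) / n                          # center init: centroid
    b = sum(y for _, y in points) / n
    r = sum(math.hypot(x - a, y - b) for x, y in points) / n   # radius init: mean distance
    for _ in range(iters):
        ga = gb = gr = 0.0
        for x, y in points:
            d = math.hypot(x - a, y - b)
            e = d - r                                          # signed orthogonal distance
            if d > 1e-12:
                ga += e * (a - x) / d
                gb += e * (b - y) / d
            gr -= e
        a -= lr * ga / n
        b -= lr * gb / n
        r -= lr * gr / n
    return a, b, r
```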
A Combined Forecast Scheme of User-Based and Item-based Collaborative Filtering Using Neighborhood Size
Choi, In-Bok ; Lee, Jae-Dong ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 55~62
DOI : 10.3745/KIPSTB.2009.16-B.1.55
Collaborative filtering is a popular technique in recommender systems that recommends items based on the opinions of other people. Memory-based collaborative filtering, which uses a user database, can be divided into user-based and item-based approaches. User-based collaborative filtering predicts a user's preference for an item using the preferences of a similar neighborhood, while item-based collaborative filtering predicts the preference for an item based on the similarity between items. This paper proposes a combined forecast scheme that predicts a user's preference for an item by combining the user-based and item-based predictions, weighted by the ratio of the number of similar users to the number of similar items. Experimental results on the MovieLens and BookCrossing data sets show that the proposed scheme improves prediction accuracy for movies and books compared with the purely user-based and item-based schemes.
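The combination rule can be sketched directly: weight the two predictions by the relative sizes of the similar-user and similar-item neighborhoods. The function and parameter names are illustrative, and producing the two individual predictions is assumed to be done beforehand by the usual user-based and item-based methods.

```python
def combine_predictions(user_pred, item_pred, n_similar_users, n_similar_items):
    """Combined forecast: weight the user-based and item-based predictions
    by the ratio of similar-user to similar-item neighborhood sizes."""
    total = n_similar_users + n_similar_items
    if total == 0:
        return None  # no neighborhood evidence either way
    w = n_similar_users / total
    return w * user_pred + (1.0 - w) * item_pred
```

For example, with 3 similar users yielding a user-based prediction of 4.0 and 1 similar item yielding an item-based prediction of 3.0, the combined estimate is 0.75 * 4.0 + 0.25 * 3.0 = 3.75.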
Feature Analysis of Multi-Channel Time Series EEG Based on Incremental Model
Kim, Sun-Hee ; Yang, Hyung-Jeong ; Ng, Kam Swee ; Jeong, Jong-Mun ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 63~70
DOI : 10.3745/KIPSTB.2009.16-B.1.63
BCI technology controls communication systems or machines using brain signals after signal processing. Implementing a BCI system requires that the characteristics of the brain signal be learned and analyzed in real time and that the learned characteristics then be applied. In this paper, we extract feature vectors from EEG signals of left- and right-hand movements using an incremental approach and reduce their dimension. We also show that the reduced dimension can improve classification performance by removing unnecessary features: retaining only the sufficient features of the input data reduces processing time and boosts classification accuracy. Our experiments with a K-NN classifier show that the proposed approach outperforms PCA-based dimension reduction by 5%.
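The classification stage can be sketched with a plain K-NN over the (already reduced) feature vectors. This is a minimal pure-Python version; the incremental feature extraction itself is not reproduced here.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Vote among the k nearest neighbors by Euclidean distance."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```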
An XML Tag Indexing Method Based on Lexical Similarity
Jeong, Hye-Jin ; Kim, Yong-Sung ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 71~78
DOI : 10.3745/KIPSTB.2009.16-B.1.71
For more effective index extraction and index-weight determination, studies have extracted indices using document structure as well as content. Most of them, however, concentrate on calculating the importance of the context rather than that of the XML tags, and determine importance by common-sense judgment rather than verifying it through objective experiments. For automatic indexing using the tag information of XML documents, which have become the standard for web document management, this paper classifies the major tags of a paper according to their importance and first calculates the weights of terms extracted from low-weight tags. It then proposes a method of calculating the final weight by updating the weights of terms that also appear in high-weight tags. To determine more objective weights, we surveyed which tags users consider important and reflected the resulting importance classification in the weight calculation. Finally, by comparing retrieval performance against index weights calculated with an existing tag-importance method, we verify the effectiveness of the index weights calculated with the proposed method.
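The weighting idea can be sketched as follows: a term's weight starts from the tags it appears in and is updated (here, simply accumulated) for each occurrence in a higher-weight tag. The tag names and weight values below are illustrative placeholders, not the paper's survey results.

```python
from collections import defaultdict

# Illustrative tag-importance values (the paper derives these from a user survey).
TAG_WEIGHTS = {"title": 3.0, "abstract": 2.0, "body": 1.0}

def term_weights(tagged_sections):
    """tagged_sections: list of (tag, text) pairs from one XML document.
    A term's weight accumulates the weight of every tag it occurs in."""
    weights = defaultdict(float)
    for tag, text in tagged_sections:
        tag_w = TAG_WEIGHTS.get(tag, 1.0)
        for term in text.lower().split():
            weights[term] += tag_w
    return dict(weights)
```

A term that occurs in both a high-weight and a low-weight tag thus ends up with a higher final weight than one confined to the body text.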
Text Watermarking Based on Syntactic Constituent Movement
Kim, Mi-Young ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 79~84
DOI : 10.3745/KIPSTB.2009.16-B.1.79
This paper explores text watermarking for agglutinative languages and develops a syntactic-tree-based constituent-movement scheme. Agglutinative languages provide good ground for syntactic-tree-based natural language watermarking because constituent order is relatively free. The proposed method consists of seven steps. First, we construct a syntactic dependency tree of the unmarked text. Second, we segment the tree into clauses. Third, we choose target constituents that may move within their clause. Fourth, we determine the movement direction of each target constituent. Fifth, we assign a watermark bit to each target constituent. Sixth, if the watermark bit does not coincide with the direction of the target constituent, we move the constituent in the syntactic tree. Finally, we obtain the marked text from the modified tree. Experimental results show a coverage of 91.53% and an unnatural-sentence rate of 23.16% for the marked text, which is better than previous systems; the marked text also keeps the same style and carries the same information without semantic distortion.
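Steps five and six, assigning a bit to each target constituent and moving it when its position does not encode that bit, can be illustrated with a toy clause represented as a flat constituent list. The real scheme operates on the dependency tree and respects grammatical constraints; this flat version is an illustration only.

```python
def extract_bit(constituents, target):
    """Read the bit encoded by the target's position: 0 if it sits in the
    first half of the clause, 1 if in the second half."""
    i = constituents.index(target)
    return 0 if i < len(constituents) // 2 else 1

def embed_bit(constituents, target, bit):
    """If the target constituent's current position does not encode `bit`,
    move it to the other half of the clause (step 6 of the scheme)."""
    clause = list(constituents)
    if extract_bit(clause, target) != bit:
        clause.remove(target)
        if bit == 0:
            clause.insert(0, target)   # move toward the clause front
        else:
            clause.append(target)      # move toward the clause end
    return clause
```

Because only the position changes, the marked clause keeps exactly the same constituents, mirroring the claim that the marked text carries the same information.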
Harmful Document Classification Using the Harmful Word Filtering and SVM
Lee, Won-Hee ; Chung, Sung-Jong ; An, Dong-Un ;
The KIPS Transactions:PartB, volume 16B, issue 1, 2009, Pages 85~92
DOI : 10.3745/KIPSTB.2009.16-B.1.85
As the World Wide Web becomes more popular, the environment is flooded with information through web pages. Despite its convenience, this uncontrolled flood of information also creates problems: pornographic, violent, and other harmful content is freely available to the youth, who must be protected by society, and to other users who lack judgment or self-control, creating serious social problems. Various methods have been proposed and studied to block such harmful content. This paper proposes and implements a system that protects young internet users from harmful content. To classify harmful and harmless content effectively, the system uses a two-step classification: harmful word filtering followed by SVM-learning-based filtering. We achieved an average precision of 92.1%.
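The two-step classification can be sketched as a cheap lexicon filter that passes undecided documents to the learned classifier. The word list, threshold, and the SVM hook below are placeholders; the real system trains an SVM on document features.

```python
# Placeholder lexicon; a deployed system would use a curated harmful-word list.
HARMFUL_WORDS = {"badword1", "badword2"}

def harmful_word_ratio(text, lexicon=HARMFUL_WORDS):
    """Step 1: fraction of tokens that hit the harmful-word lexicon."""
    tokens = text.lower().split()
    hits = sum(1 for t in tokens if t in lexicon)
    return hits / max(len(tokens), 1)

def classify(text, word_threshold=0.05, svm_decision=None):
    """Two-step filter: lexicon first, then (optionally) an SVM decision
    function for documents the lexicon cannot flag."""
    if harmful_word_ratio(text) >= word_threshold:
        return "harmful"
    if svm_decision is not None and svm_decision(text) > 0:
        return "harmful"
    return "harmless"
```

The lexicon step is fast and catches obvious cases; only ambiguous documents pay the cost of the learned classifier, which is the rationale for the two-step design.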