REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartB
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 15B, Issue 6 - Dec 2008
Volume 15B, Issue 5 - Oct 2008
Volume 15B, Issue 4 - Aug 2008
Volume 15B, Issue 3 - Jun 2008
Volume 15B, Issue 2 - Apr 2008
Volume 15B, Issue 1 - Feb 2008
An Improved Snake Algorithm Using Local Curvature
Lee, Jung-Ho ; Choi, Wan-Sok ; Jang, Jong-Whan ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 501~506
DOI : 10.3745/KIPSTB.2008.15-B.6.501
The classical snake algorithm has difficulty detecting the boundary of an object with deep concavities. While the GVF method can successfully detect boundary concavities, it spends considerable time computing the energy map. In this paper, we propose an algorithm that reduces computation time and improves performance in detecting the boundary of an object with high concavity. We use local curvature as the measure of boundary complexity: if the local curvature at a snake point exceeds a threshold, new snake points are added. Simulation results on several different test images show that our method detects object boundaries well while requiring less computation time.
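The curvature criterion can be sketched in Python as below. This is an illustrative reconstruction, not the authors' code: the turning angle between neighboring contour segments stands in for local curvature, and the midpoint-insertion rule is an assumption.

```python
import math

def local_curvature(prev, curr, nxt):
    """Turning angle at `curr` between its two neighboring snake points.

    A large angle means the contour bends sharply, i.e. high local
    curvature (used here as the boundary-complexity measure)."""
    v1 = (curr[0] - prev[0], curr[1] - prev[1])
    v2 = (nxt[0] - curr[0], nxt[1] - curr[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.acos(cosang)  # radians in [0, pi]

def refine_snake(points, threshold):
    """One refinement pass: insert a midpoint after every snake point
    whose local curvature exceeds `threshold` (closed contour)."""
    refined = []
    n = len(points)
    for i in range(n):
        prev, curr, nxt = points[i - 1], points[i], points[(i + 1) % n]
        refined.append(curr)
        if local_curvature(prev, curr, nxt) > threshold:
            refined.append(((curr[0] + nxt[0]) / 2.0,
                            (curr[1] + nxt[1]) / 2.0))
    return refined
```

For a square contour, every corner turns by π/2, so a threshold below that doubles the point count in one pass, while a higher threshold leaves the contour unchanged.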
Automatic Left Ventricle Segmentation by Edge Classification and Region Growing on Cardiac MRI
Lee, Hae-Yeoun ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 507~516
DOI : 10.3745/KIPSTB.2008.15-B.6.507
Cardiac disease is the leading cause of death worldwide. In routine clinical practice, cardiac function is quantified by manually calculating blood volume and ejection fraction, which is labor-intensive. In this study, an automatic left ventricle (LV) segmentation algorithm for short-axis cine cardiac MRI is presented. We compensate the coil sensitivity of magnitude images depending on coil location, classify edge information after extracting edges, and segment the LV by applying region-growing segmentation. We design a weighting function for the intensity signal and calculate the blood volume of the LV, accounting for partial voxel effects. Using cardiac cine SSFP images of 38 subjects, acquired with Cornell University IRB approval, we compared our algorithm to manual contour tracing and the MASS software. Segmentation accuracy (mean and standard deviation) was evaluated in the diastolic and systolic phases, both with and without partial volume effects, as was the accuracy of the ejection fraction. The results support that the proposed algorithm is accurate and useful for clinical practice.
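The region-growing step can be illustrated with a minimal intensity-based sketch. The tolerance-based acceptance rule and 4-connectivity here are illustrative assumptions, not details taken from the paper:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` in a 2D intensity grid (list of lists).

    A neighbor joins the region when its intensity differs from the
    seed intensity by at most `tol` (4-connectivity), mimicking the
    region-growing step used to delineate the LV blood pool."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if abs(image[ny][nx] - base) <= tol:
                    region.add((ny, nx))
                    queue.append((ny, nx))
    return region
```

Seeding inside the bright blood pool grows the region outward until it reaches the darker myocardium, which the tolerance test rejects.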
Computation of Stereo Dense Disparity Maps Using Region Segmentation
Lee, Bum-Jong ; Park, Jong-Seung ; Kim, Chung-Kyue ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 517~526
DOI : 10.3745/KIPSTB.2008.15-B.6.517
Stereo vision is a fundamental method for measuring 3D structure by observing a scene from two cameras placed at different positions. To reconstruct 3D structure, it is necessary to create a disparity map from a pair of stereo images. To create a disparity map, we compute the matching cost for each point correspondence and choose the disparity that minimizes the sum of the matching costs. In this paper, we propose a method to estimate a dense disparity map using region segmentation. We segment each scanline using region homogeneity properties and use the segmented regions to prohibit false matches during stereo matching. Disparities for pixels that fail to match are filled in by interpolating neighboring disparities. We applied the proposed method to various stereo images of real environments. Experimental results showed that the proposed method is stable and viable for practical applications.
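The cost-minimization step can be sketched on a single scanline pair. The SAD (sum of absolute differences) cost and the window size are illustrative assumptions; the paper's region-segmentation constraints are omitted for brevity:

```python
def scanline_disparity(left, right, max_disp, window=1):
    """Per-pixel disparity for one scanline pair by minimizing SAD cost.

    `left` and `right` are equal-length lists of intensities; pixel x
    in the left line is matched to pixel x - d in the right line, and
    the disparity d (0..max_disp) with the smallest sum of absolute
    differences over a (2*window+1)-pixel neighborhood wins."""
    n = len(left)
    disparities = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            if x - d < 0:
                break
            cost = 0
            for k in range(-window, window + 1):
                lx, rx = x + k, x - d + k
                if 0 <= lx < n and 0 <= rx < n:
                    cost += abs(left[lx] - right[rx])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return disparities
```

When the left line is the right line shifted by two pixels, the recovered disparity in the overlapping region is 2, as expected.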
Shadow Detection Using Linearity of Shadow Brightness from a Single Natural Image
Hwang, Dong-Guk ; Park, Jong-Cheon ; Jun, Byoung-Min ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 527~532
DOI : 10.3745/KIPSTB.2008.15-B.6.527
This paper proposes a novel approach to shadow detection from a single natural image, regardless of the orientation and type of light source. The approach is based on the assumption that shadow brightness changes linearly, and on the axiom that a region in shadow is darker than a comparable region not in shadow under the same conditions. First, shadow candidates are extracted by preprocessing. They are then quantized so that similar values are replaced by a representative value, because the more quantization steps pixel brightness has, the higher the linear independency among neighboring pixels. Finally, shadows are detected according to the linear independency of shadow brightness, based on the assumption. Experimental results showed that the proposed approach robustly detects umbra, as well as self-shadow and penumbra, cast on a single-colored background.
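The quantization and linearity test could be sketched as follows. The least-squares residual criterion and the `tol` threshold are illustrative stand-ins for the paper's linearity measure, not its actual formulation:

```python
def quantize(values, step):
    """Replace similar brightness values with a representative level."""
    return [round(v / step) * step for v in values]

def is_linearly_dependent(xs, ys, tol=0.05):
    """Least-squares line fit of ys on xs; the pair is treated as
    linearly dependent when the relative residual is below `tol`
    (a simple proxy for the shadow-brightness linearity test)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0:
        return False
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    resid = sum((y - (my + slope * (x - mx))) ** 2
                for x, y in zip(xs, ys))
    norm = sum((y - my) ** 2 for y in ys) or 1.0
    return resid / norm < tol
```

Brightness pairs that scale linearly (as a cast shadow does relative to the lit surface) pass the test; unrelated brightness profiles do not.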
Background Subtraction Algorithm by Using the Local Binary Pattern Based on Hexagonal Spatial Sampling
Choi, Young-Kyu ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 533~542
DOI : 10.3745/KIPSTB.2008.15-B.6.533
Background subtraction from video data is one of the most important tasks in various real-time machine vision applications. In this paper, a new scheme for background subtraction based on hexagonal pixel sampling is proposed. Hexagonal spatial sampling has generally been found to yield smaller quantization errors and to improve the handling of connectivity remarkably. We apply hexagonally sampled images to the LBP-based non-parametric background subtraction algorithm. Our scheme makes it possible to omit the bilinear pixel interpolation step during local binary pattern generation and consequently reduces computation time. Experimental results revealed that our approach based on hexagonal spatial sampling is very efficient and can be utilized in various background subtraction applications.
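To make the LBP operator concrete, here is the standard square-grid version. On the paper's hexagonal lattice only six true neighbors exist, so the code would have six bits and, as the abstract notes, no bilinear interpolation is needed; the 8-neighbor form below is shown purely for simplicity:

```python
def lbp_code(image, y, x):
    """Standard 8-neighbor local binary pattern for pixel (y, x).

    Each neighbor contributes one bit: 1 if its intensity is >= the
    center pixel's intensity."""
    center = image[y][x]
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(neighbors):
        if image[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_distance(code_a, code_b):
    """Hamming distance between two LBP codes, a typical dissimilarity
    measure when comparing a pixel's pattern to its background model."""
    return bin(code_a ^ code_b).count("1")
```

A pixel whose current LBP code differs from its background-model codes by more than a threshold would be classified as foreground.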
Real-Time Traffic Information Collection Using Multiple Virtual Detection Lines
Kim, Eui-Chul ; Kim, Soo-Hyung ; Lee, Guee-Sang ; Yang, Hyung-Jeong ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 543~552
DOI : 10.3745/KIPSTB.2008.15-B.6.543
ATIS (Advanced Traveler Information System) is a system that offers real-time traffic information and traffic conditions for the benefit of the client. One line of ATIS research collects traffic information by image analysis. Such methods divide into two kinds: one sets two loop detectors in the detection area, and the other detects vehicles through image analysis. In this paper, we propose a real-time traffic information collection system that combines the two. The system installs multiple virtual detection lines and traces the location of each vehicle; using multiple virtual detection lines compensates for the shortcomings of the loop-detector method. We also draw representative pixels from the detection area and use them for image analysis, which addresses the processing delay that grows with image size. We gathered traffic images, evaluated the system on them, and obtained a detection accuracy of 92.32%.
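The virtual-detection-line idea can be sketched as follows. The crossing test and the speed formula are illustrative assumptions about how multiple lines might be combined, not the paper's implementation:

```python
def crossing_times(track, lines):
    """Frame indices at which a tracked vehicle crosses each virtual line.

    `track` is a list of (frame, y) positions of one vehicle; `lines`
    is a list of y-coordinates of virtual detection lines.  A crossing
    is the first frame at which the vehicle's position passes a line."""
    times = {}
    for line_y in lines:
        for (f0, y0), (f1, y1) in zip(track, track[1:]):
            if (y0 - line_y) * (y1 - line_y) <= 0 and y0 != y1:
                times[line_y] = f1
                break
    return times

def estimate_speed(times, line_gap_m, fps):
    """Average speed (m/s) from consecutive line-crossing frames,
    assuming evenly spaced detection lines `line_gap_m` meters apart."""
    frames = [times[k] for k in sorted(times)]
    if len(frames) < 2:
        return None
    total_frames = frames[-1] - frames[0]
    return line_gap_m * (len(frames) - 1) / (total_frames / fps)
```

Spacing several lines along the lane gives multiple crossing events per vehicle, which is what lets this setup substitute for a pair of physical loop detectors.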
3D Visualization of Medical Image Registration using VTK
Lee, Myung-Eun ; Kim, Soo-Hyung ; Lim, Jun-Sik ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 553~560
DOI : 10.3745/KIPSTB.2008.15-B.6.553
The amount of image data used in medical institutions is increasing rapidly with the development of medical technology. Therefore, an automated method based on image processing, rather than manual inspection by doctors, is required to analyze large volumes of medical data. In particular, medical image registration, which is the process of finding the spatial transform that maps points in one image to corresponding points in another, and the 3D analysis and visualization of a series of 2D images are essential technologies. However, high installation costs raise budget problems, and hence small hospitals hesitate to adopt such medical visualization systems. In this paper, we propose a visualization system that allows users to manage datasets and perform medical image registration using an open-source graphics toolkit, VTK (Visualization Toolkit). The purpose of our research is to provide a more accurate 3D diagnosis system at a lower price than existing systems.
Asymmetric Diffusion Model for Protein Spot Matching in 2-DE Image
Choi, Kwan-Deok ; Yoon, Young-Woo ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 561~574
DOI : 10.3745/KIPSTB.2008.15-B.6.561
The spot detection phase of a 2-DE image analysis program segments a gel image into spot regions with an image segmentation algorithm, fits the spot regions to a spot shape model, and quantifies the spot information for the subsequent phases. The watershed algorithm is generally used for segmentation, and the Gaussian model and the diffusion model are the common shape models. The diffusion model is closer to real spot shapes than the Gaussian model; however, spots take on widely varying shapes, and in particular they are often asymmetric along the x and y coordinates. This asymmetry is known to arise because a protein may not diffuse completely, since 2-DE is usually not performed under ideal conditions. Accordingly, we propose an asymmetric diffusion model in this paper. The asymmetric diffusion model assumes that a protein spot diffuses from a disc at the initial time but, as time goes on, diffuses asymmetrically along the x-axis and y-axis. In our experiments, we performed spot matching on 19 gel images using each of the three models and compared their average SNR. We obtained average SNRs of 14.22 dB for the Gaussian model, 20.72 dB for the diffusion model, and 22.85 dB for the asymmetric diffusion model. These results confirm that the asymmetric diffusion model is more efficient and better suited to spot matching than the Gaussian and diffusion models.
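As a simplified stand-in for the asymmetric diffusion model, an anisotropic Gaussian with a separate spread per axis captures the x/y asymmetry the abstract describes (the true diffusion model has a different functional form); the SNR figure of merit used to compare fits can be computed directly:

```python
import math

def asymmetric_spot(x, y, amp, cx, cy, sigma_x, sigma_y):
    """Simplified asymmetric spot: a Gaussian with separate spreads
    per axis, standing in for the paper's asymmetric diffusion model
    (which diffuses a disc at different rates along x and y)."""
    return amp * math.exp(-((x - cx) ** 2 / (2 * sigma_x ** 2)
                            + (y - cy) ** 2 / (2 * sigma_y ** 2)))

def fit_snr_db(observed, fitted):
    """SNR in dB between observed spot intensities and a fitted model,
    the figure of merit used to compare the three spot models."""
    signal = sum(v * v for v in observed)
    noise = sum((o - f) ** 2 for o, f in zip(observed, fitted))
    if noise == 0:
        return float("inf")
    return 10 * math.log10(signal / noise)
```

With `sigma_y > sigma_x`, the spot decays more slowly along y than along x, reproducing the elongated shapes that symmetric models fit poorly.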
Encryption Scheme for MPEG-4 Media Transmission Exploiting Frame Dropping
Shin, Dong-Kyoo ; Shin, Dong-Il ; Park, Se-Young ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 575~584
DOI : 10.3745/KIPSTB.2008.15-B.6.575
Depending on network conditions, the network can become overloaded during media transmission. Many studies aim to lessen this overload through filtering, load distribution, frame dropping, and other methods. Among these, one effective method is frame dropping, which removes selected video frames to reduce bandwidth: B frames are dropped first, and then I and P frames, according to the dependencies among frames. This paper proposes a scheme for protecting copyright by encryption when frame dropping is applied to media in the MPEG-4 file format. We designed two kinds of frame dropping: the first stores and then sends the dropped files, while the second drops frames in real time during transmission. We designed three encryption methods that use the DES algorithm to encrypt MPEG-4 data: macroblock encryption in I-VOPs; macroblock and motion vector encryption in P-VOPs; and macroblock and motion vector encryption in both I- and P-VOPs. Based on these three methods, we implemented a digital rights management solution for MPEG-4 data streaming. We compared the results of dropping, encryption, decryption, and the quality of the video sequences to select an optimal method; there is no noticeable difference between video sequences recovered after frame dropping and those recovered without it. The best encryption and decryption performance was obtained with macroblock and motion vector encryption in both I- and P-VOPs.
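Selective encryption by frame type can be sketched as below. A SHA-256 counter keystream stands in for DES (which is not in the Python standard library); the point illustrated is which data gets encrypted, not the cipher itself:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """SHA-256-in-counter-mode keystream; a stand-in for DES here."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_selected(frames, key, frame_types=("I", "P")):
    """Encrypt only the payloads of frames whose type is selected
    (e.g. macroblock data of I- and P-VOPs), leaving other frames
    readable.  Since XOR is its own inverse, calling this again with
    the same key decrypts."""
    result = []
    for ftype, payload in frames:
        if ftype in frame_types:
            ks = keystream(key + ftype.encode(), len(payload))
            payload = bytes(a ^ b for a, b in zip(payload, ks))
        result.append((ftype, payload))
    return result
```

Because B frames carry no payload protection here, they can be dropped freely by the transport layer without touching the encrypted I/P data, mirroring how frame dropping and encryption coexist in the proposed scheme.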
The Adaptive Multimedia Contents Service Method to Reduce Delay of MN in HMIPv6
Park, Won-Gil ; Kang, Eui-Sun ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 585~594
DOI : 10.3745/KIPSTB.2008.15-B.6.585
The issues to consider when providing mobile web services on a mobile device are seamless service and QoS-guaranteed service. HMIPv6 introduces the MAP (Mobility Anchor Point) to reduce the packet loss and transmission delay caused by disconnection. However, load concentrates on the MAP because it receives and delivers packets on behalf of the MN. As a result, real-time data cannot be processed quickly, and adaptive mobile service is additionally required for QoS-guaranteed service; however, adaptation increases the response time of content service owing to the hardware differences among devices. Therefore, in this paper we improve the processing of real-time data by applying a queue at the MAP for seamless service. To reduce response time, we propose a mobile web service method that reuses cached content based on its content elements. Numerical analysis and simulation show that the proposed method is superior under various system conditions.
A Study of Fundamental Frequency for Focused Word Spotting in Spoken Korean
Kwon, Soon-Il ; Park, Ji-Hyung ; Park, Neung-Soo ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 595~602
DOI : 10.3745/KIPSTB.2008.15-B.6.595
The focused word of each sentence helps in recognizing and understanding spoken Korean. To find a method for spotting the focused word in a speech signal, we analyzed the average and variance of the fundamental frequency (F0) and the average energy of the focused word and of the other words in each sentence, using speech data from 100 spoken sentences. The results showed that focused words have either a higher relative average F0 or a higher relative F0 variance than other words. Our findings contribute to characterizing the prosody of spoken Korean and to keyword extraction based on natural language processing.
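The per-word F0 statistics could be computed as follows. The scoring rule in `spot_focused_word` is an illustrative heuristic built on the paper's finding (higher relative mean F0 or F0 variance), not the authors' method:

```python
def f0_stats(f0_values):
    """Mean and (population) variance of the F0 samples of one word."""
    mean = sum(f0_values) / len(f0_values)
    var = sum((v - mean) ** 2 for v in f0_values) / len(f0_values)
    return mean, var

def spot_focused_word(words):
    """Pick the word whose relative mean F0 or relative F0 variance is
    highest.  `words` maps each word to its list of F0 samples (Hz)."""
    stats = {w: f0_stats(v) for w, v in words.items()}
    mean_all = sum(m for m, _ in stats.values()) / len(stats)
    var_all = sum(s for _, s in stats.values()) / len(stats)

    def score(w):
        m, s = stats[w]
        return max(m / mean_all, s / var_all if var_all else 0.0)

    return max(stats, key=score)
```

A word spoken with an F0 excursion (high variance) or raised pitch (high mean) relative to its sentence scores highest and is flagged as the focused word.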
Daily Stock Price Prediction Using Fuzzy Model
Hwang, Hee-Soo ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 603~608
DOI : 10.3745/KIPSTB.2008.15-B.6.603
In this paper, an approach to building a fuzzy model that predicts daily open, close, high, and low stock prices is presented. One of the main problems in building a stock prediction model is selecting the most effective indicators. We overcome this problem by selecting the information used in candlestick-chart analysis as the input variables of our fuzzy model. The fuzzy rules have premises composed of trapezoidal membership functions and consequents composed of nonlinear equations. DE (Differential Evolution) searches for optimal fuzzy rules through an evolutionary process. To evaluate the effectiveness of the proposed approach, a numerical example is considered: fuzzy models that predict the daily open, high, low, and close prices of the KOSPI (Korea Composite Stock Price Index) are built, and their performance is demonstrated and compared with that of a neural network.
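Trapezoidal premises with equation consequents suggest a Takagi-Sugeno style evaluation, sketched below. The rule format and the weighted-average defuzzification are assumptions for illustration; the paper's exact rule structure may differ:

```python
def trapezoid_mf(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c],
    linear on the shoulders - the premise shape used by the fuzzy rules."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuzzy_predict(x, rules):
    """Weighted average of rule consequents (Takagi-Sugeno style).

    Each rule is ((a, b, c, d), consequent_fn): the trapezoid gives the
    firing strength and the consequent function gives the rule output."""
    num = den = 0.0
    for mf_params, consequent in rules:
        w = trapezoid_mf(x, *mf_params)
        num += w * consequent(x)
        den += w
    return num / den if den else None
```

In the paper's setting, DE would tune the trapezoid corners and the consequent coefficients to minimize prediction error on historical prices.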
Applying Genetic Algorithm to the Minimum Vertex Cover Problem
Han, Keun-Hee ; Kim, Chan-Soo ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 609~612
DOI : 10.3745/KIPSTB.2008.15-B.6.609
Let G = (V, E) be a simple undirected graph. The Minimum Vertex Cover (MVC) problem is to find a minimum subset C of V such that every edge has at least one of its endpoints in C. Like many other graph-theoretic problems, this problem is known to be NP-hard. In this paper, we propose a genetic algorithm called LeafGA for the MVC problem and show its performance by applying it to several published benchmark graphs.
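A generic GA for MVC might look like the sketch below; this is not the LeafGA variant from the paper, just a baseline illustrating the chromosome, repair, crossover, and mutation ingredients such an algorithm needs:

```python
import random

def is_cover(edges, subset):
    """True if every edge has at least one endpoint in `subset`."""
    return all(u in subset or v in subset for u, v in edges)

def repair(edges, subset):
    """Greedily add an endpoint of each uncovered edge, so every
    chromosome always encodes a feasible vertex cover."""
    subset = set(subset)
    for u, v in edges:
        if u not in subset and v not in subset:
            subset.add(u)
    return subset

def ga_mvc(vertices, edges, pop_size=30, generations=60, seed=0):
    """Tiny genetic algorithm for Minimum Vertex Cover: fitness is the
    cover size, with uniform crossover and single-vertex flip mutation."""
    rng = random.Random(seed)
    pop = [repair(edges, {v for v in vertices if rng.random() < 0.5})
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=len)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = {v for v in vertices
                     if v in (a if rng.random() < 0.5 else b)}
            if rng.random() < 0.2:          # mutation: flip one vertex
                child.symmetric_difference_update({rng.choice(vertices)})
            children.append(repair(edges, child))
        pop = survivors + children
    return min(pop, key=len)
```

The repair operator is what keeps the search inside the feasible region; on a star graph, for example, repairing the empty set immediately yields the optimal cover consisting of the hub alone.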
Korean Base-Noun Extraction and its Application
Kim, Jae-Hoon ;
The KIPS Transactions:PartB, volume 15B, issue 6, 2008, Pages 613~620
DOI : 10.3745/KIPSTB.2008.15-B.6.613
Noun extraction plays an important part in fields such as information retrieval and text summarization. In this paper, we present a Korean base-noun extraction system and apply it to text summarization to deal with huge amounts of text effectively. A base-noun is an atomic noun, not a compound noun, and we use two techniques, filtering and segmenting. The filtering technique removes non-nominal words from the text before base-nouns are extracted, and the segmenting technique separates particles from nominals and divides compound nouns into base-nouns. We show that both the recall and the precision of the proposed system are about 89% on average in experiments on the ETRI corpus. The proposed system has been applied to a Korean text summarization system with satisfactory results.