REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartD
Journal Basic Information
Publisher : Korea Information Processing Society
Volume & Issues
Volume 19D, Issue 4 - Aug 2012
Volume 19D, Issue 3 - Jun 2012
Volume 19D, Issue 2 - Apr 2012
Volume 19D, Issue 1 - Feb 2012
Application of the Flow-Capturing Location-Allocation Model to the Seoul Metropolitan Bus Network for Selecting Pickup Points
Park, Jong-Soo ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 127~132
DOI : 10.3745/KIPSTD.2012.19D.2.127
In the Seoul metropolitan bus network, a bus passenger may need to pick up a parcel purchased through e-commerce at a convenient bus stop on the way home or to the office. The flow-capturing location-allocation model can be applied to select such bus stops as pickup points so that they maximize the captured passenger flows, where each passenger flow represents an origin-destination (O-D) pair of a passenger trip. In this paper, we propose a fast heuristic algorithm that selects pickup points using a large O-D matrix extracted from five million transportation card transactions. The experimental results present the bus stops chosen as pickup points in terms of passenger flow and capture ratio, and illustrate the spatial distribution of the top 20 pickup points on a map.
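The flow-capturing selection described above can be sketched as a simple greedy loop: repeatedly pick the stop that captures the most not-yet-captured passenger flow. This is a minimal illustration only; the flows, stop names, and the greedy rule below are toy assumptions, not the paper's actual heuristic or data.

```python
# Greedy sketch of the flow-capturing location-allocation idea:
# pick, one at a time, the stop that newly captures the most passenger flow.
def select_pickup_points(flows, k):
    """flows: list of (passenger_count, set_of_stops_on_route); k: points to pick."""
    remaining = list(flows)
    chosen = []
    for _ in range(k):
        # Passenger flow each candidate stop would newly capture.
        gain = {}
        for count, stops in remaining:
            for s in stops:
                gain[s] = gain.get(s, 0) + count
        if not gain:
            break
        best = max(gain, key=gain.get)
        chosen.append(best)
        # A flow is captured once any stop on its route is a pickup point.
        remaining = [(c, st) for c, st in remaining if best not in st]
    return chosen

flows = [
    (120, {"A", "B"}),   # 120 passengers whose trip passes stops A and B
    (80,  {"B", "C"}),
    (50,  {"C", "D"}),
]
print(select_pickup_points(flows, 2))
```

Stop "B" is chosen first because it captures 200 passengers across two flows; the second pick then covers the remaining flow.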
An Analytic Study on the Categorization of Query through Automatic Term Classification
Lee, Tae-Seok ; Jeong, Do-Heon ; Moon, Young-Su ; Park, Min-Soo ; Hyun, Mi-Hwan ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 133~138
DOI : 10.3745/KIPSTD.2012.19D.2.133
Queries entered in a search box are the result of users' active information-seeking behavior, so search logs are important data that represent users' information needs. The purpose of this study is to examine whether there is a relationship between automatically classified query categories and the categories of the documents accessed. Search sessions were identified in the 2009 NDSL (National Discovery for Science Leaders) log dataset of KISTI (Korea Institute of Science and Technology Information), and the queries and items used were extracted by session. The queries were processed using an automatic classifier, and the resulting categories were compared with the subject categories of the items used. As a result, the average similarity was 58.8% for the automatic classification of the top 100 queries. Interestingly, this is lower than the 76.8% obtained when the queries were evaluated by experts. The difference is explained by the fact that terms used as queries are newly emerging as topics of interest in other fields of research.
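The session-level comparison described above can be sketched as follows: for each session, check whether the category the classifier assigned to the query appears among the subject categories of the items actually used, then average over sessions. The session data and the agreement measure below are toy assumptions, not the NDSL log or the paper's similarity metric.

```python
# Sketch: fraction of sessions where the query's automatic category
# matches a subject category of the documents accessed in that session.
def category_agreement(sessions):
    """sessions: list of (query_category, set_of_item_categories)."""
    if not sessions:
        return 0.0
    hits = sum(1 for q_cat, item_cats in sessions if q_cat in item_cats)
    return hits / len(sessions)

sessions = [
    ("chemistry", {"chemistry", "materials"}),
    ("biology",   {"medicine"}),
    ("physics",   {"physics"}),
]
print(category_agreement(sessions))  # 2 of 3 sessions agree
```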
A Method for Frequent Itemsets Mining from Data Stream
Seo, Bok-Il ; Kim, Jae-In ; Hwang, Bu-Hyun ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 139~146
DOI : 10.3745/KIPSTD.2012.19D.2.139
Data mining is widely used to discover knowledge in many fields. Although there are many methods for discovering association rules, most are frequency-based and therefore not appropriate for the stream environment, where event data are generated continuously and it is expensive to store all of the data. In this paper, we propose a new method for discovering association rules in the stream environment. Our method uses variable windows for extracting data items; a window's size varies according to the gap between occurrences of the same target event. Data items are extracted using the COBJ (count object) calculation method, and FPMDSTN (Frequent Pattern Mining over Data Streams using Terminal Nodes) discovers association rules from the extracted data items. Experiments show that our method is more efficient in the stream environment than conventional methods.
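The variable-window idea above can be sketched minimally: delimit windows by occurrences of a target event (so window size varies with the gap between occurrences), then count itemsets per window. This is a simplified illustration under those assumptions; the paper's COBJ calculation and FPMDSTN structure are not reproduced here.

```python
from itertools import combinations

def windows_by_target(stream, target):
    """Split an event stream into variable-size windows, each ending at the target event."""
    window, out = [], []
    for ev in stream:
        window.append(ev)
        if ev == target:
            out.append(window)
            window = []
    return out

def frequent_itemsets(windows, min_support):
    """Count 1- and 2-itemsets per window; keep those meeting min_support."""
    counts = {}
    for w in windows:
        items = set(w)
        for r in (1, 2):
            for combo in combinations(sorted(items), r):
                counts[combo] = counts.get(combo, 0) + 1
    return {s for s, c in counts.items() if c >= min_support}

stream = ["a", "b", "T", "a", "c", "T", "a", "b", "T"]
wins = windows_by_target(stream, "T")
print(frequent_itemsets(wins, min_support=2))
```

Here the pair ("a", "b") is frequent (it occurs in two of the three windows), while ("a", "c") is not.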
A Semi-supervised Dimension Reduction Method Using Ensemble Approach
Park, Cheong-Hee ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 147~150
DOI : 10.3745/KIPSTD.2012.19D.2.147
While LDA is a supervised dimension reduction method that finds projective directions maximizing separability between classes, its performance degrades severely when the number of labeled data is small. Recently, semi-supervised dimension reduction methods have been proposed that utilize abundant unlabeled data to overcome the shortage of labeled data. However, the matrix computations typically used in statistical dimension reduction methods make it difficult to utilize a large number of unlabeled data, and the additional information from unlabeled data may not be helpful enough to justify the increase in processing time. In order to solve these problems, we propose an ensemble approach for semi-supervised dimension reduction. Extensive experimental results in text classification demonstrate the effectiveness of the proposed method.
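The ensemble idea above can be sketched as follows: instead of one expensive projection computed from all unlabeled data at once, build several cheap projections from random subsets and combine them. In this toy sketch a centroid-difference direction (a 1-D LDA-like projection) stands in for the paper's semi-supervised projection, and the naive self-labelling step is an assumption for illustration.

```python
import random

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def direction(pos, neg):
    """Difference of class centroids: a crude 1-D discriminant direction."""
    cp, cn = centroid(pos), centroid(neg)
    return [a - b for a, b in zip(cp, cn)]

def ensemble_direction(pos, neg, unlabeled, n_members, subset_size, rng):
    dirs = []
    for _ in range(n_members):
        sub = rng.sample(unlabeled, subset_size)   # cheap member: small subset only
        cp, cn = centroid(pos), centroid(neg)
        p2, n2 = list(pos), list(neg)
        for x in sub:                              # naive self-labelling by nearest centroid
            dp = sum((a - b) ** 2 for a, b in zip(x, cp))
            dn = sum((a - b) ** 2 for a, b in zip(x, cn))
            (p2 if dp <= dn else n2).append(x)
        dirs.append(direction(p2, n2))
    # Combine the member projections by averaging.
    k = len(dirs)
    return [sum(d[i] for d in dirs) / k for i in range(len(dirs[0]))]

rng = random.Random(0)
pos = [[1.0, 0.0], [1.2, 0.1]]
neg = [[-1.0, 0.0], [-1.1, -0.1]]
unl = [[1.1, 0.0], [-1.0, 0.1], [0.9, -0.1], [-1.2, 0.0]]
w = ensemble_direction(pos, neg, unl, n_members=3, subset_size=2, rng=rng)
print(w[0] > 0)   # the ensemble direction separates the two classes along x
```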
Performance Enhancement of a DVA-tree by the Independent Vector Approximation
Choi, Hyun-Hwa ; Lee, Kyu-Chul ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 151~160
DOI : 10.3745/KIPSTD.2012.19D.2.151
Most distributed high-dimensional indexing structures provide reasonable search performance when the dataset is uniformly distributed. However, when the dataset is clustered or skewed, search performance gradually degrades compared with the uniformly distributed case. We propose a method for improving the k-nearest neighbor search performance of the distributed vector approximation-tree on strongly clustered or skewed datasets. The basic idea is to compute the volumes of the leaf nodes in the top-tree of a distributed vector approximation-tree and to assign different numbers of bits to them in order to preserve the discriminating power of the vector approximation; in other words, more bits are assigned to the high-density clusters. We conducted experiments comparing search performance with the distributed hybrid spill-tree and the distributed vector approximation-tree using synthetic and real datasets. The experimental results show that our proposed scheme consistently delivers significant performance improvements over the distributed vector approximation-tree for strongly clustered or skewed datasets.
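The bit-assignment idea above can be sketched minimally: leaves covering denser regions get more approximation bits so that vector approximations stay discriminative. The linear interpolation rule and the bit budget below are toy assumptions, not the paper's actual assignment formula.

```python
# Sketch: assign more approximation bits to denser leaf nodes,
# linearly interpolated between a minimum and maximum bit budget.
def assign_bits(leaf_densities, min_bits, max_bits):
    lo, hi = min(leaf_densities), max(leaf_densities)
    span = hi - lo or 1.0                     # avoid division by zero if all equal
    return [min_bits + round((d - lo) / span * (max_bits - min_bits))
            for d in leaf_densities]

densities = [0.5, 4.0, 8.0, 1.0]
print(assign_bits(densities, min_bits=4, max_bits=8))   # [4, 6, 8, 4]
```

The densest leaf receives the full 8 bits while sparse leaves keep the 4-bit minimum, which is the intuition behind preserving identification performance in clustered regions.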
The Efficient Merge Operation in Log Buffer-Based Flash Translation Layer for Enhanced Random Writing
Lee, Jun-Hyuk ; Roh, Hong-Chan ; Park, Sang-Hyun ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 161~186
DOI : 10.3745/KIPSTD.2012.19D.2.161
Recently, flash memory capacity has increased steadily while its price has fallen, making mass-storage SSDs (Solid State Drives) popular. Flash memory, however, has several limitations, which must be compensated for by a special software layer, the FTL (Flash Translation Layer). The FTL, which is essential for handling the hardware's restrictions efficiently, translates the logical sector numbers used by the file system into the physical sector numbers of the flash memory. The erase-before-write restriction in particular causes poor performance, and although there have been many studies based on log blocks, problems remain when operating mass-storage flash memory. In a log block-based scheme such as FAST, random writes with wide locality frequently trigger merge operations even when the sectors in a data block are not actually used; this ineffective block thrashing degrades the flash memory's performance. If a log block instead absorbs overwrites, it acts like a cache, and this improves performance. To improve random write performance, this study lets log blocks serve as a cache over the entire flash memory, using a dedicated mapping table called the offset mapping table, so that merge and erase operations are reduced. We define this new FTL as XAST (eXtensively-Associative Sector Translation). XAST manages the offset mapping table efficiently by exploiting spatial and temporal locality.
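The offset-mapping idea above can be illustrated with a toy FTL: every logical sector can be redirected to any free slot in any log block via an offset mapping table, so an overwrite becomes a cheap remap rather than an erase-and-rewrite. The block/page sizes and data structures below are illustrative assumptions, not the paper's actual layout.

```python
PAGES_PER_BLOCK = 4

class LogBufferFTL:
    """Toy log-buffer FTL with a sector-level offset mapping table."""
    def __init__(self, n_log_blocks):
        self.free_slots = [(b, p) for b in range(n_log_blocks)
                           for p in range(PAGES_PER_BLOCK)]
        self.offset_map = {}   # logical sector -> (log block, page offset)
        self.erases = 0

    def write(self, sector):
        old = self.offset_map.pop(sector, None)
        if old is not None:
            self.free_slots.append(old)    # stale copy becomes reclaimable: no erase
        if not self.free_slots:
            self.erases += 1               # a real FTL would merge and erase here
            return
        self.offset_map[sector] = self.free_slots.pop(0)

ftl = LogBufferFTL(n_log_blocks=2)
for s in [0, 1, 0, 0, 1, 2]:               # random-write pattern with overwrites
    ftl.write(s)
print(ftl.erases)                           # overwrites were absorbed as remaps
```

Because overwrites only update the mapping table, this random-write pattern completes without any merge/erase, which is the effect the abstract attributes to treating log blocks as a cache.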
Methods to Apply GoF Design Patterns in Service-Oriented Computing
Kim, Moon-Kwon ; La, Hyun-Jung ; Kim, Soo-Dong ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 187~202
DOI : 10.3745/KIPSTD.2012.19D.2.187
As a representative reuse paradigm, the theme of Service-Oriented Computing (SOC) is largely centered on publishing and subscribing to reusable services; here, SOC is used as a term that includes service-oriented architecture and cloud computing. Service providers can earn high profits from reusable services, and service consumers can develop their applications with less time and effort by reusing those services. Design patterns (DPs) are reusable solutions to commonly occurring design problems, providing design structures that deal with the problems while following the open/closed principle. However, since DPs were mainly proposed for building object-oriented systems, and there are distinguishable differences between the object-oriented paradigm and SOC, it is challenging to apply DPs to SOC design problems. Hence, DPs need to be customized with two aspects in mind: service providers should be able to design services that are highly reusable and reflect SOC's unique characteristics, and service consumers should be able to develop their target applications by reusing and customizing services as quickly as possible. Therefore, we propose a set of DPs customized for SOC. With the proposed DPs, we believe that service providers can effectively develop highly reusable services and service consumers can efficiently adapt services for their applications.
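As a flavor of what customizing a GoF pattern for SOC can look like, the sketch below applies the Proxy pattern on the consumer side: the proxy exposes the same interface as a published service but adds caching, so the consumer adapts the service without modifying it. The service, class names, and caching behavior are hypothetical illustrations, not patterns from the paper.

```python
class WeatherService:
    """Stand-in for a published remote service (imagine a network call)."""
    def __init__(self):
        self.calls = 0
    def forecast(self, city):
        self.calls += 1
        return f"sunny in {city}"

class CachingServiceProxy:
    """GoF Proxy adapted to service consumption: same interface, added caching."""
    def __init__(self, service):
        self.service = service
        self.cache = {}
    def forecast(self, city):
        if city not in self.cache:
            self.cache[city] = self.service.forecast(city)
        return self.cache[city]

svc = WeatherService()
proxy = CachingServiceProxy(svc)
proxy.forecast("Seoul")
proxy.forecast("Seoul")        # served from the cache, no second service call
print(svc.calls)
```

Because the proxy preserves the service's interface (open/closed principle), the consumer's code is unchanged whether it talks to the service or to the proxy.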
Functional Test Automation for Android GUI Widgets Using XML
Ma, Yingzhe ; Choi, Eun-Man ;
The KIPS Transactions:PartD, volume 19D, issue 2, 2012, Pages 203~210
DOI : 10.3745/KIPSTD.2012.19D.2.203
Capture-and-replay is a common technique for automated GUI testing. Applications on the Android platform, however, cannot use the capture-and-replay technique directly, because the testing framework provided and supported by Google is already fixed, and there is no automatic way to link GUI elements to the actions that handle widget events. Without capture-and-replay testing tools, testers must design and implement test scenarios from the specification and link every GUI element to its event-handling code by hand. This paper proposes an improved and optimized approach to automated testing of Android GUI widgets, compared with the common capture-and-replay technique. XML is used to extract GUI elements from applications by tracing the actions that handle widget events. After click events are traced by monitoring in the capture phase, test cases are created in the replay phase by communicating the status of the activated widgets through API events.
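The XML-driven replay described above can be sketched as follows: each step in an XML test case names a widget and an action, and a dispatcher replays the steps against the linked event handlers. The XML schema, widget names, and the fake widget layer below are illustrative assumptions, not the Android testing framework.

```python
import xml.etree.ElementTree as ET

# A captured test scenario expressed as XML (schema is a toy assumption).
SCENARIO = """
<testcase>
  <step widget="login_button" action="click"/>
  <step widget="username_field" action="setText" value="tester"/>
</testcase>
"""

def replay(xml_text, widgets):
    """Replay each <step> against the widget's linked event handler."""
    log = []
    for step in ET.fromstring(xml_text).findall("step"):
        name, action = step.get("widget"), step.get("action")
        handler = widgets[name][action]     # GUI element linked to its handler
        handler(step.get("value"))
        log.append((name, action))
    return log

state = {}
widgets = {
    "login_button":   {"click":   lambda v: state.update(clicked=True)},
    "username_field": {"setText": lambda v: state.update(user=v)},
}
print(replay(SCENARIO, widgets))
```

The returned log records which widget events fired, which is the kind of status information a replay phase would compare against the captured scenario.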