REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartD
Journal Basic Information
Korea Information Processing Society
Volume 16D, Issue 4 - Aug 2009
Incremental Maintenance of Horizontal Views Using a PIVOT Operation and a Differential File in Relational DBMSs
Shin, Sung-Hyun ; Kim, Jin-Ho ; Moon, Yang-Sae ; Kim, Sang-Wook ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 463~474
DOI : 10.3745/KIPSTD.2009.16-D.4.463
To analyze multidimensional data conveniently and efficiently, OLAP (On-Line Analytical Processing) and e-business systems widely use views in a horizontal form to represent measurement values over multiple dimensions. These views can be stored as materialized views derived from several sources in order to support access to the integrated data, and they provide effective access for the complex queries of OLAP or e-business. However, maintaining horizontal views is problematic because the data sources are distributed over remote sites, so a method is needed that propagates changes of the source tables to the corresponding horizontal views. In this paper, we address incremental maintenance of horizontal views that reflects changes of the source tables efficiently. We first propose an overall framework that processes queries over horizontal views transformed from source tables stored in a vertical form. Under this framework, we propagate changes of the vertical tables to the corresponding horizontal views. To execute this view maintenance process efficiently, we record every change of the vertical tables in a differential file and then modify the horizontal views using that file. Because the differential file is in a vertical form, its tuples are converted to a horizontal form before being applied to the out-of-date horizontal view. With this mechanism, horizontal views can be refreshed efficiently from the changes recorded in the differential file, without accessing the source tables. Experimental results show that the proposed method improves average performance by 1.2 to 5.0 times over existing methods.
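The pivot-and-refresh cycle described above can be sketched in a few lines. This is a minimal illustration under assumed schemas, not the paper's implementation: the vertical table holds (key, dimension, value) tuples, and the differential file records inserts and deletes against it.

```python
# Sketch: maintain a horizontal (pivoted) view from a vertical differential
# file, without re-reading the source table. Schemas are hypothetical.

def pivot(vertical_rows, dimensions):
    """Pivot vertical (key, dim, value) tuples into one horizontal row per key."""
    view = {}
    for key, dim, value in vertical_rows:
        row = view.setdefault(key, {d: None for d in dimensions})
        row[dim] = value
    return view

def refresh(view, diff, dimensions):
    """Apply a differential file of ('+'|'-', key, dim, value) records."""
    for op, key, dim, value in diff:
        row = view.setdefault(key, {d: None for d in dimensions})
        if op == '+':
            row[dim] = value
        elif op == '-' and row.get(dim) == value:
            row[dim] = None
    return view

dims = ['Q1', 'Q2']
source = [('seoul', 'Q1', 100), ('seoul', 'Q2', 120), ('busan', 'Q1', 80)]
view = pivot(source, dims)                       # initial materialization
diff = [('+', 'busan', 'Q2', 90), ('-', 'seoul', 'Q1', 100)]
view = refresh(view, diff, dims)                 # incremental maintenance
```

Only the differential records are touched during the refresh, which mirrors the paper's claim that source tables need not be re-accessed.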
-cubing: Improved Data Cube Structure and Cubing Method for OLAP on Data Stream
Chen, Xiangrui ; Li, Yan ; Lee, Dong-Wook ; Kim, Gyoung-Bae ; Bae, Hae-Young ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 475~486
DOI : 10.3745/KIPSTD.2009.16-D.4.475
Data cubes play an important role in multi-dimensional, multi-level data analysis. To meet the on-line analysis requirements of data streams, several cube structures have been proposed for OLAP on data streams, such as the stream cube, flowcube, and S-cube. Since it is costly to construct a data cube and execute ad-hoc OLAP queries, further research is needed on efficient data structures, query methods, and algorithms. The stream cube uses H-cubing to compute selected cuboids and stores the computed cells in an H-tree, which forms the cuboids along a popular path. However, the H-tree layout is disorderly, and the H-cubing method relies too heavily on the popular path. In this paper, we first propose the -tree, an improved data structure that makes retrieval operations in the tree structure more efficient. Second, we propose an improved cubing method, -cubing, for computing the cuboids that cannot be retrieved along the popular path when an ad-hoc OLAP query is executed. The -tree construction and -cubing algorithms are given. A performance study shows that during the construction step the -tree outperforms the H-tree, with a more desirable trade-off between time and memory usage, and that -cubing is better adapted to ad-hoc OLAP queries with respect to factors such as time and memory space.
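The popular-path idea behind H-tree-style cubing can be illustrated with a toy prefix tree. This is a generic sketch of that background technique, not the paper's -tree or -cubing algorithms: stream tuples are inserted along a fixed dimension ordering, so any cuboid that is a prefix of the popular path can be read off by a shallow scan.

```python
# Sketch: a prefix tree over a "popular path" of dimensions; each node
# aggregates the measure of all tuples sharing that dimension prefix.

class Node:
    def __init__(self):
        self.children = {}
        self.measure = 0  # aggregated measure for this prefix

def insert(root, dims, measure):
    node = root
    node.measure += measure
    for d in dims:            # dims ordered along the popular path
        node = node.children.setdefault(d, Node())
        node.measure += measure

def cuboid(root, depth):
    """Read off the cuboid at `depth` levels along the popular path."""
    result = {}
    def walk(node, prefix):
        if len(prefix) == depth:
            result[prefix] = node.measure
            return
        for key, child in node.children.items():
            walk(child, prefix + (key,))
    walk(root, ())
    return result

root = Node()
for country, city, sales in [('KR', 'Seoul', 3), ('KR', 'Busan', 2), ('US', 'NY', 5)]:
    insert(root, (country, city), sales)
```

Cuboids off the popular path cannot be answered this way, which is exactly the gap the abstract says the improved cubing method targets.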
Trajectory Indexing for Efficient Processing of Range Queries
Cha, Chang-Il ; Kim, Sang-Wook ; Won, Jung-Im ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 487~496
DOI : 10.3745/KIPSTD.2009.16-D.4.487
This paper addresses an indexing scheme capable of efficiently processing range queries in a large-scale trajectory database. After discussing the drawbacks of previous indexing schemes, we propose a new scheme that divides the temporal dimension into multiple time intervals and builds an index for the line segments of each interval. Additionally, a supplementary index is built for the line segments within each time interval. In contrast to previous schemes, which store the index entirely on disk, this scheme dramatically improves the performance of insert and search operations by using a main-memory index, particularly for the time interval containing the segments of objects that are currently moving or have just completed their movements. Each time-interval index is built as follows: first, the extent of the spatial dimension is divided into multiple spatial cells to which the line segments are assigned evenly, and a 2D-tree maintains information on those cells; then, for each cell, an additional 3D -tree is created on the spatio-temporal space (x, y, t). Such a multi-level indexing strategy cures the shortcomings of the legacy schemes. Performance results obtained from intensive experiments show that our scheme enhances the performance of retrieval operations by 3 to 10 times, with much less storage space.
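The two-level partitioning can be sketched with a flat hash-based stand-in for the tree structures. This is a greatly simplified illustration under assumed parameters, not the paper's 2D-tree/3D-tree design: segments are bucketed first by time interval, then by spatial grid cell, and a range query only touches the buckets it overlaps.

```python
# Sketch: time-interval + spatial-cell bucketing of trajectory segments.
# Interval and cell widths are hypothetical parameters.

from collections import defaultdict

INTERVAL = 10   # temporal partition width
CELL = 100      # spatial cell width

def index_segment(index, seg):
    """seg = (x1, y1, t1, x2, y2, t2); bucket it by its start point."""
    x1, y1, t1, _, _, _ = seg
    key = (int(t1) // INTERVAL, int(x1) // CELL, int(y1) // CELL)
    index[key].append(seg)

def range_query(index, x_lo, x_hi, y_lo, y_hi, t_lo, t_hi):
    hits = []
    for ti in range(int(t_lo) // INTERVAL, int(t_hi) // INTERVAL + 1):
        for xi in range(int(x_lo) // CELL, int(x_hi) // CELL + 1):
            for yi in range(int(y_lo) // CELL, int(y_hi) // CELL + 1):
                for seg in index.get((ti, xi, yi), []):
                    x1, y1, t1 = seg[0], seg[1], seg[2]
                    if x_lo <= x1 <= x_hi and y_lo <= y1 <= y_hi and t_lo <= t1 <= t_hi:
                        hits.append(seg)
    return hits

idx = defaultdict(list)
index_segment(idx, (50, 50, 5, 60, 55, 8))
index_segment(idx, (250, 40, 25, 260, 45, 28))
```

In the paper the bucket for the most recent interval would live in main memory, since it absorbs all inserts from currently moving objects.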
Merging XML Documents Based on Insertion/Deletion Edit Operations
Lee, Suk-Kyoon ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 497~506
DOI : 10.3745/KIPSTD.2009.16-D.4.497
As the use of XML becomes widespread and collaborative editing is required in areas such as office documents and scientific document editing, a method for effectively merging XML documents becomes necessary. As a solution to this problem, we present a theoretical framework for merging the individual editing work of multiple users on the same source document. Unlike existing approaches, which merge the documents themselves, we represent each editing session as a series of edit operations applied to the source document, called an edit script; we merge the edit scripts of the users and apply the merged script to the source document, achieving the same effect as merging the documents. To this end, assuming edit scripts based on insertion and deletion operations, we define notions such as static edit scripts and the intervention and conflict between edit scripts, and then propose conflict conditions between edit scripts and a method of adjusting edit scripts when they are merged. This approach is effective in reducing network overhead in distributed environments, and also in version management systems, because it preserves the semantics of the individual editing work.
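The edit-script idea can be illustrated on flat text. This sketch simplifies the operation model to character positions (the paper works on XML tree structure, and the conflict rule here is a deliberately crude stand-in): conflicting scripts are rejected, and positions in the second script are shifted by the offsets the first script introduced.

```python
# Sketch: merge two insert/delete edit scripts against the same source,
# then apply the merged script. Operation model is a simplification.

def conflicts(op_a, op_b):
    """Crude rule for this sketch: two deletes of the same position conflict."""
    return op_a[0] == 'del' and op_b[0] == 'del' and op_a[1] == op_b[1]

def adjust(op, applied):
    """Shift op's position by the offsets of already-applied operations."""
    kind, pos = op[0], op[1]
    for a in applied:
        if a[1] <= pos:
            pos += len(a[2]) if a[0] == 'ins' else -1
    return (kind, pos) + op[2:]

def merge_scripts(script_a, script_b):
    for a in script_a:
        for b in script_b:
            if conflicts(a, b):
                raise ValueError('conflicting edit scripts')
    return script_a + [adjust(b, script_a) for b in script_b]

def apply_script(text, script):
    for op in script:
        if op[0] == 'ins':
            text = text[:op[1]] + op[2] + text[op[1]:]
        else:  # 'del' removes one character
            text = text[:op[1]] + text[op[1] + 1:]
    return text

# User A inserts 'X' at position 0; user B deletes the 'c' at position 2.
merged = merge_scripts([('ins', 0, 'X')], [('del', 2)])
result = apply_script('abc', merged)
```

Merging the scripts rather than the documents is what lets only small operation lists travel over the network, as the abstract notes.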
Efficient Query Indexing for Short Interval Query
Kim, Jae-In ; Song, Myung-Jin ; Han, Dae-Young ; Kim, Dae-In ; Hwang, Bu-Hyun ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 507~516
DOI : 10.3745/KIPSTD.2009.16-D.4.507
In stream data processing systems, interval queries are generally registered in advance. As data arrive continuously, a query index is used to find the matching queries quickly enough for real-time processing; thus a main-memory query index with a small storage cost and a fast search time is needed. In this paper, we propose an LVC-based (Limited Virtual Construct-based) query index method that uses hashing to meet both needs. In the LVC-based query index, we divide the value range of a stream into limited virtual constructs, or LVCs; each interval query is mapped to its corresponding LVCs, and the query ID is stored in each of them. We compared our method with the CEI-based query indexing method through simulation experiments. When the range of input stream values is broad and there are many short interval queries, the LVC-based indexing method shows improved storage cost and search time.
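The construct idea can be sketched directly from the abstract. The construct width and the exact hashing details below are assumptions, not the paper's parameters: the value domain is cut into fixed-width constructs, every interval query registers its ID in each construct it overlaps, and a stream value hashes to a single construct whose candidates are then filtered exactly.

```python
# Sketch: an LVC-style interval query index; WIDTH is an assumed parameter.

WIDTH = 10  # width of one limited virtual construct

class LVCIndex:
    def __init__(self):
        self.constructs = {}   # construct id -> list of query ids
        self.queries = {}      # query id -> (lo, hi)

    def register(self, query_id, lo, hi):
        self.queries[query_id] = (lo, hi)
        for c in range(lo // WIDTH, hi // WIDTH + 1):
            self.constructs.setdefault(c, []).append(query_id)

    def search(self, value):
        """Hash the value to one construct, then filter candidates exactly."""
        candidates = self.constructs.get(value // WIDTH, [])
        return sorted(q for q in candidates
                      if self.queries[q][0] <= value <= self.queries[q][1])

idx = LVCIndex()
idx.register('q1', 3, 7)      # short interval, inside one construct
idx.register('q2', 5, 25)     # spans three constructs
matches = idx.search(6)
```

Short intervals touch only one or two constructs, which is consistent with the abstract's claim that the method pays off when short interval queries dominate.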
Comparison of Test Case Effectiveness Based on Dynamic Diagrams Using Mutation Testing
Lee, Hyuck-Su ; Choi, Eun-Man ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 517~526
DOI : 10.3745/KIPSTD.2009.16-D.4.517
Dynamic UML diagrams can describe the complex design and execution behavior of object-oriented programs. This paper shows how to derive test cases from the sequence, state, and activity diagrams among the dynamic UML diagrams. Three dynamic UML diagrams for the withdrawal function of an ATM simulation program are drawn, and different test cases are created from these diagrams using the described methods. To evaluate the effectiveness of the test cases, mutation testing is executed. Mutants are generated with the Eclipse-based MuClipse plug-in, which supports many traditional and class-level mutation operators. Finally, we report the results of the mutation testing, compare the effectiveness of the test cases, and draw guidance on how to choose a test case generation method.
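The mutation-testing principle used to compare the suites can be shown in miniature. The paper applies MuClipse to Java; this Python sketch and the withdraw function are hypothetical stand-ins: a mutant is "killed" when some test case observes a different result than the original program produces.

```python
# Sketch: a relational-operator mutant and two test suites of (balance,
# amount) inputs; the suite that exercises the boundary kills the mutant.

def withdraw(balance, amount):
    if amount <= balance:          # original predicate
        return balance - amount
    return balance                 # insufficient funds: no change

def withdraw_mutant(balance, amount):
    if amount < balance:           # mutant: <= replaced by <
        return balance - amount
    return balance

def killed(original, mutant, test_cases):
    """A mutant is killed if any test case distinguishes it from the original."""
    return any(original(*tc) != mutant(*tc) for tc in test_cases)

weak_suite = [(100, 30), (100, 200)]       # never exercises amount == balance
strong_suite = weak_suite + [(100, 100)]   # the boundary case kills the mutant
```

The mutation score (fraction of mutants killed) is what the paper uses to rank the test cases derived from the three diagram types.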
An Automatic Construction Approach of State Diagram from Class Operations with Pre/Post Conditions
Lee, Kwang-Min ; Bae, Jung-Ho ; Chae, Heung-Seok ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 527~540
DOI : 10.3745/KIPSTD.2009.16-D.4.527
State diagrams describe the dynamic behavior of an individual object as a number of states and the transitions between them. In this paper, we propose an automated technique for generating a state diagram from class operations with pre/post conditions, and we develop a supporting tool, SDAG (State Diagram Automatic Generation tool). Additionally, we propose a complexity metric and a state diagram generation approach that considers the type of each operation in order to reduce the complexity of the generated diagram.
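The general construction idea can be sketched on a toy class. The bounded-stack example and its predicates are hypothetical, not taken from the paper: each abstract state is a predicate over the object, and an operation contributes a transition from every state satisfying its precondition to the state implied by its postcondition.

```python
# Sketch: derive (state, operation, state) transitions from operations
# with pre/post conditions, for a bounded stack abstracted by its size.

CAP = 3
states = {
    'Empty':   lambda n: n == 0,
    'Partial': lambda n: 0 < n < CAP,
    'Full':    lambda n: n == CAP,
}

# Each operation: (precondition over size, effect on size).
operations = {
    'push': (lambda n: n < CAP, lambda n: n + 1),
    'pop':  (lambda n: n > 0,  lambda n: n - 1),
}

def build_transitions():
    transitions = set()
    for n in range(CAP + 1):                  # enumerate concrete sizes
        for op, (pre, effect) in operations.items():
            if pre(n):
                src = next(s for s, p in states.items() if p(n))
                dst = next(s for s, p in states.items() if p(effect(n)))
                transitions.add((src, op, dst))
    return transitions

trans = build_transitions()
```

The resulting edge set is the raw material a tool like SDAG would lay out as a state diagram; the complexity metric in the paper would then guide pruning it.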
A Case Study on Selection and Improvement of SLA Evaluation Metrics
Shin, Sung-Jin ; Rhew, Sung-Yul ; Kim, Yoo-Ri ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 541~548
DOI : 10.3745/KIPSTD.2009.16-D.4.541
Many companies have recently applied SLAs and execute IT services using them. However, there are no objective standards for selecting and improving SLA evaluation metrics. We derive and present measurement attributes, which serve as criteria for selecting and improving SLA evaluation metrics, in the form of measurement metrics. We conducted a case study of company D to verify whether the measurement metrics are applicable. We applied and evaluated the measurement metrics applicable to company D and then designated an improvement line. We propose improvement guidelines for the measurement metrics whose scores fall below the improvement line and derive SLA evaluation metrics from them. By applying the resulting SLA evaluation metrics to company D, we show that the proposed method of selection and improvement is useful.
Distributed Development and Evaluation of Software using Agile Techniques
Lee, Sei-Young ; Yong, Hwan-Seung ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 549~560
DOI : 10.3745/KIPSTD.2009.16-D.4.549
The Agile movement is part of the next phase of the software engineering evolution. At the same time, globally distributed software development is another trend, delivering high-quality software to global users at lower cost. In this paper, an Agile Framework for Distributed Software Development (AFDSD) is suggested, and the Chameleon project at Yahoo! Inc. was implemented based on the framework. The project was then evaluated by measuring Agile adoption and improvement levels, degrees of agility, and agile project success, and by comparing its performance and quality with the previous version. Overall performance of, and satisfaction with, Chameleon increased by more than 30% after Agile techniques were adopted. Our objective is to highlight successful practices and suggest a framework to support the adoption and evaluation of Agile techniques in a distributed environment.
Policy Definition Language for Service Management in Mobile Environment
Ahn, Sung-Wook ; Rhew, Yul-Sung ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 561~570
DOI : 10.3745/KIPSTD.2009.16-D.4.561
To manage repair and maintenance efficiently in a mobile environment, a system structure that manages services through policies, together with a policy description language, is needed. This research defines the structure of the PEP, the component that executes policy in the IETF policy framework, and proposes a policy description language that can be carried out under the PEP structure. The proposed language derives its requirements from documentary data and the characteristics of mobile environments; its policy information model was designed through a three-stage approach and then defined as a policy description language. The three stages comprise the policy domain, which decides the scope to which a policy applies; the policy rules, which distinguish the kinds of policy application and control; and the policy grammar, which contextualizes the policy structure. To verify the efficiency of the language, scenarios were defined with it and verified using a policy tool, and its extensibility was demonstrated through comparison and analysis against other policy description languages.
P2P Based Collision Solving Technique for Effective Concurrency Control in a Collaborative Development Environment
Park, Hyun-Soo ; Kim, Dae-Yeob ; Youn, Cheong ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 571~582
DOI : 10.3745/KIPSTD.2009.16-D.4.571
This paper provides a way to overcome the limitations of general collaborative software development tools, which completely restrict co-ownership of resources among individuals in a team-oriented development environment. It also provides a solution that lets users co-own resources while managing the version control and collision problems that co-ownership may cause. The developed collaborative development support tool uses the conventional optimistic technique, but with an improved algorithm that reduces the cost and effort required to resolve collisions. The tool presented in this paper combines the classical client/server structure with the P2P (peer-to-peer) method, which supports information exchange among individuals, and is developed on top of the open source CVS (Concurrent Versions System). Its functional efficiency was confirmed by comparison with existing collaborative software development tools.
A Method for Migration of Legacy System into Web Service
Park, Oak-Cha ; Choi, Si-Won ; Kim, Soo-Dong ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 583~594
DOI : 10.3745/KIPSTD.2009.16-D.4.583
Most SOA solutions applicable to businesses and organizations take a top-down methodology: they start with an analysis of an organization's requirements, proceed to the definition of business models and the identification of candidate services, and end with finding or developing the required services. Adopting SOA while abandoning legacy systems is challenging in terms of the time and cost involved, so many businesses and organizations want to migrate gradually to SOA while making the most of their existing systems. In this paper, we propose a Method for Migration of Legacy Systems into Web Services (M-LSWS), which allows a legacy system to be migrated into web services accessible through SOA and used as data repositories. M-LSWS defines procedures for migration into reusable web services through the analysis of business processes and the identification of candidate services based on the design specifications and code of the legacy system. The proposed method consists of four steps: analysis of the legacy system, elicitation of reusable services and their specification, service wrapping, and service registration. Each step has its own process and guidelines. The eligibility of the method is tested by applying it to a book management system.
A Method of Test Case Generation using BPMN-based Model Reduction for Service System
Lee, Seung-Hoon ; Kang, Dong-Su ; Song, Chee-Yang ; Baik, Doo-Kwon ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 595~612
DOI : 10.3745/KIPSTD.2009.16-D.4.595
Early testing can greatly reduce the cost of error correction in system development, and it remains important in SOA-based service systems. However, existing methods of test case generation for SOA are limited to web services that use XML. Therefore, this paper proposes a method of test case generation for service systems that uses BPMN-based model reduction. To minimize test effort, an existing BPM is transformed into an S-BPM composed of the basic elements of a workflow. The test case generation process starts by building the S-BPM of the target service system and transforming it into a directed graph. We then generate service scenarios by applying a scenario search algorithm and extract message-flow information. With this method, we can obtain effective test cases that are not limited to web services, and the generated test cases reflect the business-driven nature of SOA.
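The scenario-search step described above can be sketched as path enumeration over a directed graph. The graph and its node names are hypothetical (the paper derives the graph from the reduced S-BPM): each start-to-end path is one service scenario from which a test case is derived.

```python
# Sketch: enumerate service scenarios as root-to-end paths in a directed,
# acyclic workflow graph. Node names are illustrative only.

graph = {
    'start':       ['check_stock'],
    'check_stock': ['ship', 'reorder'],   # XOR branch in the workflow
    'reorder':     ['ship'],
    'ship':        ['end'],
    'end':         [],
}

def scenarios(graph, node='start', path=None):
    """Depth-first enumeration of all start-to-end paths."""
    path = (path or []) + [node]
    if not graph[node]:            # terminal node: one complete scenario
        return [path]
    result = []
    for nxt in graph[node]:
        result.extend(scenarios(graph, nxt, path))
    return result

paths = scenarios(graph)
```

Real BPMN graphs may contain loops, which a scenario search algorithm must bound; this sketch assumes an acyclic reduced model for clarity.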
Development of an HTM-Based Parts Image Recognition System for Small Scale Manufacturing Industry
Bae, Sun-Gap ; Lee, Dae-Han ; Diao, Jian-Hua ; Nan, Hai-Bao ; Sung, Ki-Won ; Bae, Jong-Min ; Kang, Hyun-Syug ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 613~620
DOI : 10.3745/KIPSTD.2009.16-D.4.613
Especially in small-scale factories that manufacture a large variety of products in small quantities, a system that can judge easily and at low cost whether parts are defective is necessary. Developing such a system requires recognizing objects with human-like cognitive ability under various circumstances. Human intelligence originates mostly from the neocortex of the brain, and the HTM theory proposed by Jeff Hawkins is one of the recent attempts to model the operating principles of the neocortex. In this paper, we develop PRESM (Parts image REcognition System for small scale Manufacturing industry), a system based on the HTM theory that judges the defectiveness of manufactured products. Application in a real workplace environment confirmed the effectiveness of our recognition system.
Intelligent Production Management System with the Enhanced PathTree
Kwon, Kyung-Lag ; Ryu, Jae-Hwan ; Sohn, Jong-Soo ; Chung, In-Jeong ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 621~630
DOI : 10.3745/KIPSTD.2009.16-D.4.621
In recent years, there have been many attempts to connect RFID (Radio Frequency Identification) technology with EIS (Enterprise Information Systems) and utilize them together. In most cases, however, the focus is only on the simultaneous multiple-reading capability of RFID, neglecting the management of the massive data created by the readers. As a result, it is difficult to obtain time-related information, such as flow prediction and analysis, for process control. In this paper, we suggest a new method called the 'procedure tree', an enhanced and complementary version of PathTree, an RFID data mining technique, to manage massive RFID data sets effectively and perform real-time process control efficiently. We evaluate the efficiency of the proposed system by applying a real-time process management system connected to an RFID-based EIS. The suggested method efficiently supports tasks such as predicting and tracking process flow for real-time process control and inventory management, which existing RFID-based production systems could not do.
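The PathTree-style structure the procedure tree builds on can be sketched as a counted prefix tree over reader sequences. The reader names below are hypothetical: each tag's ordered sightings are inserted with counts, so the frequency of any process path, and the likely next step, can be read off without scanning raw logs.

```python
# Sketch: a counted prefix tree over RFID reader sequences, supporting
# simple next-step prediction for process flow.

class PathNode:
    def __init__(self):
        self.count = 0
        self.next = {}

def record(root, readings):
    """Insert one tag's ordered reader sequence, e.g. ['cut', 'weld', 'pack']."""
    node = root
    for reader in readings:
        node = node.next.setdefault(reader, PathNode())
        node.count += 1

def predict_next(root, prefix):
    """Most frequent next process step after the given path prefix."""
    node = root
    for reader in prefix:
        node = node.next[reader]
    if not node.next:
        return None
    return max(node.next, key=lambda r: node.next[r].count)

root = PathNode()
for seq in [['cut', 'weld', 'pack'],
            ['cut', 'weld', 'pack'],
            ['cut', 'weld', 'rework']]:
    record(root, seq)
```

Aggregating at insert time is what makes the structure suitable for the real-time queries the abstract emphasizes.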
Architecture Description Model for Common IT Resource Identification in e-Government Systems
Shin, Soo-Jeong ; Choi, Young-Jin ; Jung, Suk-Chun ; Seo, Yong-Won ;
The KIPS Transactions:PartD, volume 16D, issue 4, 2009, Pages 631~642
DOI : 10.3745/KIPSTD.2009.16-D.4.631
Although the Korean government is making great efforts to prevent redundant IT investment and to allocate the IT budget efficiently, actual achievements have been limited because of the variety of IT resources and the differing architecture descriptions among organizations and projects. A standardized model for describing system architecture is therefore strongly needed to identify common resources and improve the efficiency of IT investment. In this paper, we develop a function-network matrix model that can serve as the basic template for a standard architecture description of e-Government systems. The model integrates the function tiers and the network areas into a single unified framework, making the functionality of each component and the flow of information clearly visible. We describe the architectures of Korean e-Government citizen service systems using our model, clearly demonstrating the similarities and differences between systems and making common resources easy to identify. Using this architecture description model, the consolidation of national IT resources can be promoted, non-expert IT users can easily understand the architecture of their systems, and IT resources can be managed more efficiently and systematically.
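The matrix idea can be sketched directly: describe each system as a matrix keyed by (function tier, network area), and treat resources appearing at the same cell across systems as consolidation candidates. The tiers, areas, and component names below are hypothetical, not from the paper.

```python
# Sketch: identify common IT resources from function-network matrices of
# two systems. All names are illustrative placeholders.

system_a = {
    ('presentation', 'internet'): {'web_server'},
    ('application',  'dmz'):      {'was', 'sso_agent'},
    ('data',         'internal'): {'rdbms'},
}
system_b = {
    ('presentation', 'internet'): {'web_server'},
    ('application',  'dmz'):      {'was'},
    ('data',         'internal'): {'rdbms', 'backup'},
}

def common_resources(*systems):
    """Resources found at the same (tier, area) cell in every system."""
    shared = {}
    cells = set.intersection(*(set(s) for s in systems))
    for cell in cells:
        common = set.intersection(*(s[cell] for s in systems))
        if common:
            shared[cell] = common
    return shared

shared = common_resources(system_a, system_b)
```

Aligning systems on a single (tier, area) coordinate system is what makes the comparison mechanical, which is the point of the unified framework.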