The KIPS Transactions:PartD
Journal Basic Information
Publisher: Korea Information Processing Society
Volume & Issues
Volume 17D, Issue 6 - Dec 2010
Volume 17D, Issue 5 - Oct 2010
Volume 17D, Issue 4 - Aug 2010
Volume 17D, Issue 3 - Jun 2010
Volume 17D, Issue 2 - Apr 2010
Volume 17D, Issue 1 - Feb 2010
An Effective Filtering Method for Skyline Queries in MANETs
Park, Mi-Ra; Kim, Min-Kee; Min, Jun-Ki
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 245~252
DOI : 10.3745/KIPSTD.2010.17D.4.245
In this paper, we propose an effective filtering method for skyline queries in mobile ad hoc networks (MANETs). Most existing studies assume that data is uniformly distributed and, under this assumption, focus on minimizing energy consumption due to the limited battery power of mobile devices. In practice, however, data distribution is often skewed toward specific regions. To reduce energy consumption, we propose a new filtering method that takes the data distribution into account. We verify the performance of the proposed method through a comparative experiment with an existing method. The experimental results confirm that the proposed method reduces communication overhead and execution time compared to the existing method.
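The paper's filtering method itself is not detailed in the abstract; as background, a minimal sketch of the skyline computation such queries are built on (the dominance test and a block-nested-loop skyline, assuming smaller attribute values are better) might look like:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and
    strictly better in at least one (smaller is better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Block-nested-loop skyline: keep every point that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Illustrative data: hotels as (price, distance-to-beach) pairs.
hotels = [(100, 5), (80, 7), (120, 2), (90, 4), (150, 1)]
print(skyline(hotels))  # (100, 5) is dominated by (90, 4) and drops out
```

In a MANET setting the point of a filter is to prune dominated tuples at intermediate nodes so they are never transmitted; the dominance test above is the primitive such a filter would apply.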
Efficient Dynamic Weighted Frequent Pattern Mining by using a Prefix-Tree
Jeong, Byeong-Soo; Farhan, Ahmed
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 253~258
DOI : 10.3745/KIPSTD.2010.17D.4.253
Traditional frequent pattern mining assigns an equal profit/weight value to every item. Weighted Frequent Pattern (WFP) mining has become an important research issue in data mining and knowledge discovery because it considers different weights for different items. Existing algorithms in this area are based on fixed weights, but in real-world scenarios the price/weight/importance of an item may vary frequently due to unavoidable situations. Tracking these dynamic changes is necessary in application areas such as retail market basket data analysis and web click stream management. In this paper, we propose the novel concept of dynamic weight and an algorithm, DWFPM (dynamic weighted frequent pattern mining), that can handle situations where the price/weight of an item varies dynamically. It scans the database exactly once and is therefore also suitable for real-time data processing. To our knowledge, this is the first research work to mine weighted frequent patterns using dynamic weights. Extensive performance analyses show that our algorithm is efficient and scalable for WFP mining using dynamic weights.
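The DWFPM algorithm itself is not given in the abstract; as an illustration of the weighted-support measure that WFP mining rests on (pattern weight as the average of its item weights, weighted support as support times pattern weight — the names and numbers below are illustrative), a sketch could be:

```python
def weighted_support(pattern, transactions, weights):
    """Weighted support = (fraction of transactions containing the pattern)
    * (average weight of the pattern's items)."""
    pattern = set(pattern)
    support = sum(1 for t in transactions if pattern <= set(t)) / len(transactions)
    avg_weight = sum(weights[item] for item in pattern) / len(pattern)
    return support * avg_weight

transactions = [["a", "b"], ["a", "c"], ["a", "b", "c"], ["b", "c"]]
weights = {"a": 0.9, "b": 0.4, "c": 0.6}  # item weights; under dynamic
                                          # weighting these may change over time
print(weighted_support(["a", "b"], transactions, weights))  # 0.5 * 0.65 = 0.325
```

Dynamic weighting means the `weights` table can change between or even during scans, which is why a single-pass, prefix-tree-based structure matters for the algorithm described above.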
Process Performance Measurement Model Based on Event for an efficient Decision-Making
Park, Jae-Won; Choi, Jae-Hyun; Cho, Poong-Youn; Lee, Nam-Yong
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 259~270
DOI : 10.3745/KIPSTD.2010.17D.4.259
Information systems today are heterogeneous and distributed, and they integrate enterprise information through processes. They are also very complex because they are linked together by those processes, the aim being that many systems work as one. A process is a framework that contains all of the business activities in an enterprise and carries much of the information needed for measuring performance. A process consists of activities, and an activity contains events that can be considered information sources. Determining whether a process is performing well is valuable but difficult, because measuring performance is complex and finding the relationships between business factors and events is not a simple problem. Evaluating a process before it ends would therefore reduce operation cost and allow efficient process execution. In this paper we propose an event-based process performance measurement model. First, we present the concept of process performance measurement and a model for selecting process and activity indexes from the events collected from information systems. Second, we propose methodologies and a data schema for storing and managing the selected process indexes, along with mapping methods between indexes and events. Finally, we propose a process performance measurement model that uses the collected events to give users valuable managerial information.
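The paper's index schema is not reproduced in the abstract; as a minimal sketch of the event-to-index idea it describes (events carrying a process, activity, and timestamp are aggregated into an activity-level duration index — the event schema here is an assumption), one could write:

```python
# Illustrative events collected from an information system; each event
# records which process and activity it belongs to, plus a timestamp.
events = [
    {"process": "order", "activity": "approve", "ts": 0.0},
    {"process": "order", "activity": "approve", "ts": 4.0},
    {"process": "order", "activity": "ship",    "ts": 4.0},
    {"process": "order", "activity": "ship",    "ts": 9.0},
]

def activity_durations(events):
    """Activity index: elapsed time between the first and last event
    observed for each (process, activity) pair."""
    spans = {}
    for e in events:
        key = (e["process"], e["activity"])
        first, last = spans.get(key, (e["ts"], e["ts"]))
        spans[key] = (min(first, e["ts"]), max(last, e["ts"]))
    return {k: last - first for k, (first, last) in spans.items()}

print(activity_durations(events))
```

A process-level index can then be derived from activity indexes (e.g. by summing durations along the process), which is what makes mid-execution evaluation of a running process possible.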
A Study of Requirement Change Management and Traceability Effect Using Traceability Table
Kim, Ju-Young; Rhew, Sung-Yul; Hwang, Man-Su
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 271~282
DOI : 10.3745/KIPSTD.2010.17D.4.271
Insufficient requirement management accounts for 54% of unsuccessful software development projects, and 22% of insufficient requirement management stems from requirement change management. Hence, requirement management activities are important for reducing failure rates, and tracing is suggested as the major factor in requirements change management. A traceability table is easy to use because of its legibility and accurate tracing. However, traceability tables in existing studies have failed to concretely suggest a method of change management or the effect of traceability, and existing methods for estimating change impact are complex. This study therefore suggests how to use a traceability table to manage changes in requirements and, compared to existing studies, offers easier methods for estimating change rate and change impact. Fifteen projects were also sampled to test the hypotheses that the traceability table influences the success of projects and that it decreases the failure rate that comes from insufficient requirements management.
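The study's table format is not reproduced in the abstract; as a minimal sketch of how a traceability table supports change-impact estimation (a requirement-to-artifact mapping, with the impact of a change taken as the set of traced artifacts — the structure and names below are illustrative, not the paper's), one might write:

```python
# Illustrative traceability table: requirement -> artifacts traced from it.
traceability = {
    "REQ-1": ["design-A", "code-A", "test-A"],
    "REQ-2": ["design-B", "code-B", "test-B"],
    "REQ-3": ["design-A", "code-C", "test-C"],  # shares design-A with REQ-1
}

def change_impact(changed_reqs, table):
    """Artifacts affected by a change: the union of all artifacts
    traced from the changed requirements."""
    impacted = set()
    for req in changed_reqs:
        impacted.update(table.get(req, []))
    return impacted

def change_rate(changed_reqs, table):
    """Share of all traced artifacts touched by the change."""
    all_artifacts = {a for artifacts in table.values() for a in artifacts}
    return len(change_impact(changed_reqs, table)) / len(all_artifacts)

print(sorted(change_impact(["REQ-1"], traceability)))
print(change_rate(["REQ-1"], traceability))  # 3 of 8 artifacts
```

The value of keeping the table legible is visible here: estimating impact reduces to a lookup and a union rather than a re-analysis of the artifacts themselves.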
Test Case Generation Strategy for Timing Diagram
Lee, Hong-Seok; Chung, Ki-Hyun; Choi, Kyung-Hee
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 283~296
DOI : 10.3745/KIPSTD.2010.17D.4.283
A timing diagram is a useful tool for describing system specifications, but no prior study addresses test case generation strategies for timing diagrams. To address this, this paper generates test cases from a timing diagram in three steps: 1) we define the timing diagram formally; 2) we describe a method for transforming a timing diagram model into a Stateflow model that is equivalent to it; and 3) we generate test cases from the transformed Stateflow model using SDV, which is plugged into Simulink. To show that our approach is useful, we conducted an experiment with a surveillance model and arbitrary timing diagram models: we transformed the timing diagram models into Stateflow models, generated test cases from the transformed models using SDV, and analyzed the generation results. The conclusion of this study is that a timing diagram is not only a specification tool but also a useful basis for model-based test case generation.
Automated Test Data Generation for Testing Programs with Multi-level Stack-directed Pointers
Chung, In-Sang
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 297~310
DOI : 10.3745/KIPSTD.2010.17D.4.297
Recently, a testing technique called concolic testing has received much attention. Concolic testing generates test data by combining concrete program execution and symbolic execution to achieve high test coverage. CREST is a representative open-source tool implementing concolic testing; currently, however, CREST only handles inputs of integer type. This paper presents new rules for automated test data generation in the presence of inputs of pointer type. The rules effectively handle the multi-level stack-directed pointers that are widely used in C programs. In addition, we describe a tool named vCREST that implements the proposed rules, together with the results of applying it to some C programs.
Association Rule Discovery Considering Strategic Importance: WARM
Choi, Doug-Won
The KIPS Transactions:PartD, volume 17D, issue 4, 2010, Pages 311~316
DOI : 10.3745/KIPSTD.2010.17D.4.311
This paper presents a weight-adjusted association rule mining algorithm (WARM). Its key ideas are assigning a weight to each strategic factor and normalizing the raw scores within each factor. WARM extends the earlier TSAA (transitive support association Apriori) algorithm, and strategic importance is reflected by considering factors such as the profit, marketing value, and customer satisfaction of each item. A performance analysis based on a real-world database was conducted, and the mining outcomes of three association rule mining algorithms (Apriori, TSAA, and WARM) are compared. The results indicate that each algorithm exhibits distinct and characteristic behavior in association rule mining.
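The abstract names profit, marketing value, and customer satisfaction as strategic factors and describes normalizing raw scores within each factor before weighting. A sketch of that weight-adjustment step (all item names, scores, and factor weights below are illustrative; min-max normalization is assumed, and scores within a factor are assumed not to be all equal) could be:

```python
def normalize(scores):
    """Min-max normalize raw scores within one strategic factor to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    return {item: (v - lo) / (hi - lo) for item, v in scores.items()}

def item_weights(factors, factor_weights):
    """Combine normalized per-factor scores into a single weight per item."""
    normed = {name: normalize(scores) for name, scores in factors.items()}
    items = next(iter(factors.values())).keys()
    return {i: sum(factor_weights[f] * normed[f][i] for f in factors)
            for i in items}

# Raw scores per strategic factor (illustrative retail items).
factors = {
    "profit":       {"milk": 10, "bread": 30, "beer": 50},
    "marketing":    {"milk": 2,  "bread": 8,  "beer": 5},
    "satisfaction": {"milk": 9,  "bread": 6,  "beer": 3},
}
factor_weights = {"profit": 0.5, "marketing": 0.3, "satisfaction": 0.2}
print(item_weights(factors, factor_weights))
```

Normalizing within each factor first keeps a factor measured on a large scale (e.g. profit in currency units) from dominating factors measured on small scales, which is the point of the weight-adjustment idea.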