REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartD
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 13D, Issue 7 - Dec 2006
Volume 13D, Issue 6 - Oct 2006
Volume 13D, Issue 5 - Oct 2006
Volume 13D, Issue 4 - Aug 2006
Volume 13D, Issue 3 - Jun 2006
Volume 13D, Issue 2 - Apr 2006
Volume 13D, Issue 1 - Feb 2006
Vector Approximation Bitmap Indexing Method for High Dimensional Multimedia Database
Park Joo-Hyoun ; Son Dea-On ; Nang Jong-Ho ; Joo Bok-Gyu ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 455~462
DOI : 10.3745/KIPSTD.2006.13D.4.455
Recently, filtering approaches using vector approximation, such as the VA-file or the LPC-file, have been proposed to support similarity search in high-dimensional data spaces. These approaches filter out many irrelevant vectors by computing approximate distances from a query vector using compact approximations of the vectors in the database. The total elapsed time of a similarity search is therefore reduced, because reading the compact approximations instead of the original vectors saves disk I/O time. However, the search time of the VA-file or LPC-file is not much lower than that of a brute-force search, because computing the approximate distances requires many calculations. This paper proposes a new bitmap index structure to minimize this computation time. To improve calculation speed, each object's value is stored as a bit pattern representing the spatial position of its feature vector in the data space, and the distance between objects is computed with XOR bit operations, which are much faster than real vector arithmetic. Experiments show that, although the proposed method has a longer data reading time than existing vector-approximation approaches, it greatly reduces computation time, cutting the total search time to about one fourth of sequential search and making it up to two times faster than existing methods. Consequently, when the database I/O is fast enough, the search performance of existing vector-approximation methods can be further improved by reducing their filtering computation time.
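The XOR-based filtering idea described in the abstract can be sketched as follows. This is an illustrative approximation, not the paper's actual bit encoding: the grid quantization, bit layout, and function names are all assumptions.

```python
# Sketch of XOR-based distance filtering. Each vector is quantized to a
# bit pattern recording which cell of a coarse grid each dimension falls
# into; a cheap XOR/popcount test then prunes obviously distant vectors.

def quantize(vector, bits_per_dim=2):
    """Map each coordinate in [0, 1) to a small cell id, packed into
    one integer bit pattern."""
    pattern = 0
    cells = 1 << bits_per_dim
    for x in vector:
        cell = min(int(x * cells), cells - 1)
        pattern = (pattern << bits_per_dim) | cell
    return pattern

def approx_distance(p1, p2):
    """Cheap dissimilarity: popcount of the XOR of two bit patterns.
    Identical cells XOR to zero, so a small count means 'probably close'."""
    return bin(p1 ^ p2).count("1")

def filter_candidates(query, patterns, threshold):
    """Keep only ids whose approximate distance is within the threshold;
    the real distance is computed later for the survivors only."""
    q = quantize(query)
    return [i for i, p in enumerate(patterns)
            if approx_distance(q, p) <= threshold]
```

The bit operations replace per-dimension floating-point subtractions, which is where the claimed reduction in filtering computation time comes from.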
Prefetch R-tree: A Disk and Cache Optimized Multidimensional Index Structure
Park Myung-Sun ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 463~476
DOI : 10.3745/KIPSTD.2006.13D.4.463
R-trees have traditionally been optimized for I/O performance, with the disk page as the tree node. Recently, researchers have proposed cache-conscious variants of R-trees optimized for CPU cache performance in main-memory environments, where the node is several cache lines wide and more entries are packed into a node by compressing MBR keys. However, because the node sizes of the two types of R-trees differ greatly, disk-optimized R-trees show poor cache performance while cache-optimized R-trees exhibit poor disk performance. In this paper, we propose a cache- and disk-optimized R-tree, called the PR-tree (Prefetching R-tree). For cache performance, the node size of the PR-tree is wider than a cache line, and the prefetch instruction is used to reduce the number of cache misses. For I/O performance, the nodes of the PR-tree fit into one disk page. We present a detailed analysis of cache misses for range queries, and enumerate all reasonable in-page leaf and nonleaf node sizes and in-page tree heights to determine the tree parameters giving the best cache and I/O performance. The PR-tree achieves better cache performance than the disk-optimized R-tree: a factor of 3.5-15.1 improvement for one-by-one insertions, 6.5-15.1 for deletions, 1.3-1.9 for range queries, and 2.7-9.7 for k-nearest-neighbor queries, with no notable decline of I/O performance in any experiment.
An Index Structure for Updating Continuously Moving Objects Efficiently
Bok Kyoung-Soo ; Yoon Ho-Won ; Kim Myoung-Ho ; Cho Ki-Hyung ; Yoo Jae-Soo ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 477~490
DOI : 10.3745/KIPSTD.2006.13D.4.477
Existing index structures incur very high update costs because they repeat delete and insert operations to keep up with continuously moving objects. In this paper, we propose a new index structure that reduces the update cost for continuously moving objects. The proposed index consists of a space-partitioning index structure that stores the locations of the moving objects and an auxiliary index structure that provides direct access to their current positions. To increase node fanout, a node stores a kd-tree, rather than the real partitioning areas, as the information about its child nodes. In addition, instead of traversing the whole index, we access the leaf nodes directly and apply a bottom-up update strategy to update the positions of moving objects efficiently. Various experiments show that our index structure outperforms existing index structures in terms of insertion, update, and retrieval.
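The bottom-up update strategy can be sketched as follows. This is a minimal illustration, not the paper's design: the kd-tree descent is replaced by a linear scan over flat leaves, and all class and field names are assumptions.

```python
# Sketch of bottom-up updates: a secondary hash table maps each object id
# directly to the leaf holding it, so a position update touches the leaf
# first instead of descending the tree from the root every time.

class Leaf:
    def __init__(self, bounds):
        self.bounds = bounds          # (xmin, ymin, xmax, ymax)
        self.entries = {}             # object id -> (x, y)

    def contains(self, pos):
        x, y = pos
        xmin, ymin, xmax, ymax = self.bounds
        return xmin <= x < xmax and ymin <= y < ymax

class MovingObjectIndex:
    def __init__(self, leaves):
        self.leaves = leaves
        self.direct = {}              # auxiliary index: id -> leaf

    def insert(self, oid, pos):
        for leaf in self.leaves:      # stand-in for a kd-tree descent
            if leaf.contains(pos):
                leaf.entries[oid] = pos
                self.direct[oid] = leaf
                return

    def update(self, oid, pos):
        leaf = self.direct[oid]
        if leaf.contains(pos):        # common case: in-place, no traversal
            leaf.entries[oid] = pos
        else:                         # object left the cell: delete + reinsert
            del leaf.entries[oid]
            self.insert(oid, pos)
```

The payoff is in the common case: an object that stays inside its cell is updated with one hash lookup and one bounds check.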
Service Composition with Data Mining in Ubiquitous Computing Environment
Lee Sun-Young ; Lee Jong-Yun ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 491~500
DOI : 10.3745/KIPSTD.2006.13D.4.491
Since users in a ubiquitous computing environment want services delivered correctly for their own position and surrounding circumstances, it is very important to discover and compose basic services and to provide services suited to varying contexts. Existing techniques, however, have mainly studied service discovery and give little consideration to users' positions or preferences. Furthermore, for service composition they simply list basic services and do not propose a concrete method for using service history data in composition. We therefore propose a framework for a context-based service provisioning middleware system, called COSEP, together with an ontology engine that uses data mining. The system discovers services by reacting dynamically to context information such as the user's time and position, composes services using the ontology engine with data mining, and offers newly created optimal services to users.
Research on supporting the group by clause reflecting XML data characteristics in XQuery
Lee Min-Soo ; Cho Hye-Young ; Oh Jung-Sun ; Kim Yun-Mi ; Song Soo-Kyung ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 501~512
DOI : 10.3745/KIPSTD.2006.13D.4.501
XML is the most popular platform-independent data representation, used for communication between loosely coupled heterogeneous systems such as B2B applications and workflow systems. The powerful query language XQuery was developed to support the diverse needs of querying XML documents; it is designed to assemble results from diverse data sources into a uniformly structured query result, and it has become the standard XML query language. Although the latest XQuery supports rich query functions including iteration, its grouping mechanism is too primitive and makes query expressions difficult and complex. This work therefore focuses on supporting a groupby clause in the query expression to process grouping in XQuery. We suggest this as a more efficient way to process grouping for restructuring and aggregation functions on XML data. We propose an XQuery EBNF that includes the groupby clause, and we implemented an XQuery processing system with grouping functions based on the eXist native XML database.
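What such a grouping clause computes over XML data can be sketched in Python with the standard library's XML parser. The element and attribute names below are illustrative assumptions, not examples from the paper.

```python
# A "group by" over XML data amounts to restructuring flat elements into
# per-key groups plus an aggregate, which XQuery of that era could only
# express awkwardly via distinct-values and nested FLWOR expressions.
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring("""
<orders>
  <order customer="kim" amount="10"/>
  <order customer="lee" amount="25"/>
  <order customer="kim" amount="5"/>
</orders>
""")

def group_by_customer(root):
    """Restructure flat <order> elements into per-customer groups with
    an aggregate, mirroring a 'groupby ... return' clause."""
    groups = defaultdict(list)
    for order in root.findall("order"):
        groups[order.get("customer")].append(int(order.get("amount")))
    return {cust: sum(amounts) for cust, amounts in groups.items()}
```

A dedicated groupby clause lets the query engine perform this restructuring in one pass instead of re-scanning the input for every distinct key.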
A Single Index Approach for Subsequence Matching that Supports Normalization Transform in Time-Series Databases
Moon Yang-Sae ; Kim Jin-Ho ; Loh Woong-Kee ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 513~524
DOI : 10.3745/KIPSTD.2006.13D.4.513
Normalization transform is very useful for finding the overall trend of time-series data, since it enables finding sequences with similar fluctuation patterns. The previous subsequence matching method with normalization transform, however, incurs index overhead both in storage space and in update maintenance, since it must build multiple indexes to support query sequences of arbitrary length. To solve this problem, we propose a single-index approach to normalization-transformed subsequence matching that supports query sequences of arbitrary length. For the single-index approach, we first introduce the notion of the inclusion-normalization transform by generalizing the original definition of normalization transform: it normalizes a window using the mean and standard deviation of a subsequence that includes the window. Next, we formally prove the correctness of the proposed method, which uses the inclusion-normalization transform for normalization-transformed subsequence matching. We then propose subsequence matching and index building algorithms that implement the method. Experimental results on real stock data show that our method improves performance by up to times over the previous method. Our approach has the additional advantage of generalizing to many other transforms besides normalization transform, so we believe this work will be widely used in transform-based subsequence matching methods.
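The inclusion-normalization transform defined in the abstract can be written down directly. This is a minimal sketch of the definition as stated (normalize a window with the statistics of a subsequence that includes it); the function name is an assumption.

```python
# Inclusion-normalization: the window is normalized not with its own
# statistics but with the mean and (population) standard deviation of a
# longer subsequence containing it. This is what lets a single index
# built over fixed-size windows serve queries of arbitrary length.
import statistics

def inclusion_normalize(window, including_subsequence):
    """Normalize `window` using the statistics of the longer subsequence."""
    mu = statistics.mean(including_subsequence)
    sigma = statistics.pstdev(including_subsequence)
    return [(x - mu) / sigma for x in window]
```

With the ordinary normalization transform, the same window would be normalized with its own mean and standard deviation, which is why one index per query length was previously needed.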
Design and Implementation of Customer Information Retrieval System based on Semantic Web
Hwang Jeong-Hee ; Gu Mi-Sug ; Lee Hyun-Ah ; Ryu Keun-Ho ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 525~534
DOI : 10.3745/KIPSTD.2006.13D.4.525
An ontology specifies the knowledge in a particular domain, defining the concepts of that knowledge and the relationships between the concepts, and it makes services based on the semantic web possible. Therefore, to specify and define the knowledge in a specific domain, an ontology that conceptualizes the knowledge must be generated. Accordingly, to search for information about potential customers for the post office's home-delivery marketing, we design the specific domain and generate a semantic-web-based ontology for it in this paper. We also propose how to retrieve information using the generated ontology, and we implement a data search robot that collects information based on it. Finally, we confirm that the ontology and the search robot perform information retrieval accurately.
BPMN2XPDL: Transformation from BPMN to XPDL for a business process
Park Jung-Up ; Jung Moon-Young ; Jo Myung-Hyun ; Kim Hak-Soo ; Son Jin-Hyun ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 535~548
DOI : 10.3745/KIPSTD.2006.13D.4.535
To describe business processes formally, many business process languages with different origins and goals, such as XPDL, BPML, and BPEL4WS, have been specified. In particular, XPDL, proposed by WfMC, has long been widely used in various business process environments. On the other hand, the need for a standard graphical notation for business processes led BPMI to create BPMN. Because BPMN consists of graphical constructs for depicting business processes, BPMN-formed business processes must ultimately be converted into a semantically equivalent business process language such as XPDL, so that they can be executed by business process engines. In this paper, we propose a transformation mechanism from BPMN to XPDL for a business process. By reducing the semantic gap between BPMN and XPDL, it minimizes the distance between process designers and process execution modules.
Skeleton Code Generation for Transforming an XML Document with DTD using Metadata Interface
Choe Gui-Ja ; Nam Young-Kwang ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 549~556
DOI : 10.3745/KIPSTD.2006.13D.4.549
In this paper, we propose a system for generating skeleton programs that directly transform an XML document into another document whose structure is defined by a target DTD, in a GUI environment. With the generated code, users can easily insert or update their own code in the program, so they can convert the document the way they want and connect it with other classes or library files. Since most currently available code generation systems and methods for transforming XML documents use XSLT or XQuery, it is very difficult or impossible for users to manipulate the source code for further updates or refinements. Because the code generated here follows the XPaths of the target DTD, the resulting code is quite readable. The code generation procedure is simple: once the user maps the related elements, represented as trees in the GUI interface, the source document is transformed into the target document and its corresponding Java source program is generated; the DTD is either given or extracted automatically from the XML documents by parsing them. Mappings are classified as 1:1, 1:N, or N:1 according to the structure and semantics of the DTD's elements. Functions for changing the structure of user-designated elements are amalgamated into the metadata interface. A real-world example of transforming articles written in XML into a bibliographic XML document is shown, together with the transformed result and the generated code.
Case Study for Information Quality Maturity Model
Kim Chang-Jae ; Choi Yong-Rak ; Rhew Sung-Yul ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 557~564
DOI : 10.3745/KIPSTD.2006.13D.4.557
Information, when used effectively, contributes to profit creation; it not only supports quick management decisions but is also an important, reusable resource. Recent information systems improve an enterprise's competitiveness by reflecting users' various requirements, and they have grown large and complex to adapt to rapidly changing circumstances, so the importance of information quality is increasingly emphasized. The biggest problem is that user requirements are served from low-quality data: when business management is based on low-quality information, a company inevitably loses competitiveness against its rivals in strategy establishment, strategy execution, and management focus. Low-quality information increases the time and expense needed to correct or revise inaccurate data and makes it hard to obtain correct information in specific situations. Solving these problems requires obtaining high-quality data through clear understanding, establishment of a data management system, and systematic data management. Until now, studies related to information quality have been developed only partially, and there has been no systematic methodology covering the whole state of information quality management. Therefore, this paper shows how to extract processes for information quality management and their related evaluation factors, using the five levels of CMM (Capability Maturity Model) as the steps of an information quality assurance process. This paper aims to contribute to the activities of competitive companies and organizations through an information quality improvement management process.
A Systematic Method for Analyzing Business Cases in Product Line Engineering
Park Shin-Young ; Kim Soo-Dong ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 565~572
DOI : 10.3745/KIPSTD.2006.13D.4.565
Product Line Engineering (PLE) is an effective reuse methodology in which features common among members are captured into core assets, and applications are developed by reusing those core assets, reducing development cost while increasing productivity. To maximize the benefits of developing systems with PLE, business case analysis is essential. If the scope of the core assets is excessively broad, asset development becomes costly while reusability drops; if the scope is too narrow, applicability is limited to only a small number of members in the domain. In this paper, we propose a process for business case analysis in PLE and for determining an economical core asset scope, and we define guidelines for each activity of the process. Since variability pervades PLE, we treat the variability of features among members at a detailed level. By applying our framework for business case analysis, one can develop core assets whose scope provides the most economic value when applying PLE.
Reengineering Black-box Test Cases
Seo Kwang-Ik ; Choi Eun-Man ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 573~582
DOI : 10.3745/KIPSTD.2006.13D.4.573
Black-box testing requires preparing fitting test data, executing the software, and examining the results. To test software effectively, not only selecting test cases but also representing them is important. In static testing, the effectiveness of testing activities likewise depends on how test cases and validation checklists are represented. This paper suggests a method for finding ineffective critical test cases and reengineering them. An experiment reengineering digital set-top box software shows the process and results of checking the effectiveness and conformance of current test cases and patching them. The results show how much test time is saved and test coverage improved by reengineering test cases. Methods for reusing and restructuring test cases to fit embedded product-line software are also studied.
An Elicitation Approach for Measurement Indicators Based on Product Line Context
Hwang Sun-Myung ; Kim Jin-Sam ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 583~592
DOI : 10.3745/KIPSTD.2006.13D.4.583
Software development based on product lines has proved to be a promising technology that can drastically reduce cycle time and guarantee quality by strategically reusing the quality core assets of an organization. However, measurement within a product line differs from measurement within a single software project, because the aspects of both the core assets and the projects that utilize them must be considered, and the performance of the overall product line must be considered within the product line context. A systematic approach to measuring the performance of product lines is therefore essential for consistent, repeatable, and effective measures within a product line. This paper presents a context-based measurement elicitation approach for product lines that reflects their performance characteristics and the diversity of their applications. The approach includes detailed procedures and the work products resulting from them, along with templates. To show the utility of the approach, this paper presents the elicited measurements, especially for technical management practices among product line practices, and illustrates a real application case that adopts the approach. The systematic approach enables management attributes, i.e., measurements, to be identified when constructing product lines or developing software products based on them. The measurements are effective in that they are derived in consideration of the application context and the interests of stakeholders.
Implementation of Software Product-Line Variability Applying Aspect-Oriented Programming
Heo Seung-Hyun ; Choi Eun-Man ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 593~602
DOI : 10.3745/KIPSTD.2006.13D.4.593
Software development methodologies have been developed to improve productivity and reduce time-to-market through the reuse of software assets. Current methods of implementing a software product line, one such methodology, interfere heavily with the core assets, which raises assembly costs and reduces effectiveness. In this paper, we introduce Aspect-Oriented Programming (AOP) as a method for improving the assembly process in a software product line. The method assembles core assets and variabilities using language elements such as join points, pointcuts, and advice, without code changes. We analyze the requirements of a mini-system as an example of adopting AOP, design it using UML, implement the variabilities identified at the design stage in the aspect-oriented programming language AspectJ, and thereby demonstrate the usability and practicality of the proposed idea.
A Study on Selection Process of Web Services Based on the Multi-Attributes Decision Making
Seo Young-Jun ; Song Young-Jae ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 603~612
DOI : 10.3745/KIPSTD.2006.13D.4.603
Recently the web service area has been growing rapidly as the next-generation IT paradigm, owing to increasing interest in SOA (Service-Oriented Architecture) and the growth of the B2B market. Since service discovery through UDDI (Universal Description, Discovery and Integration) is limited to functional requirements, it does not consider factors affecting the frequency of service use or the reliability of the mutual relationship. That is, quality, as a non-functional aspect of a web service, is an important factor for success between consumer and provider, so a web service selection method that considers quality is necessary. This paper suggests an agent-based quality broker architecture and a selection process that helps the service consumer find the service providing the optimum quality the consumer needs. Agent theory is widely accepted and suits the proposed system architecture in distributed, heterogeneous environments such as web services. We consider both QoS and CoS in the evaluation process to address the shortcomings of existing research on web service selection, and we use PROMETHEE (Preference Ranking Organization METHod for Enrichment Evaluations), the MCDM approach best suited to web service selection, as the evaluation method. PROMETHEE has the advantage that pairwise comparisons need not be redone when comparable services are added or deleted. This paper presents a case study with a service composition scenario to verify the selection process; the decision-making problem is described on the basis of quality values evaluated from the consumer's point of view and the defined service level.
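The core of a PROMETHEE II ranking can be sketched briefly. This uses the simplest "usual" preference function (1 if strictly better, 0 otherwise); the paper's actual quality criteria, weights, and preference functions are not reproduced, and all names below are illustrative.

```python
# Minimal PROMETHEE II sketch: candidate services are scored on several
# quality criteria, pairwise preferences use the "usual" preference
# function, and the net outranking flow ranks the candidates.

def net_flows(scores, weights):
    """scores: {name: [criterion values, higher is better]};
    weights: one weight per criterion, summing to 1."""
    names = list(scores)
    n = len(names)
    flows = {}
    for a in names:
        phi = 0.0
        for b in names:
            if a == b:
                continue
            # weighted preference of a over b, and of b over a
            pref_ab = sum(w for w, x, y in
                          zip(weights, scores[a], scores[b]) if x > y)
            pref_ba = sum(w for w, x, y in
                          zip(weights, scores[a], scores[b]) if y > x)
            phi += (pref_ab - pref_ba) / (n - 1)
        flows[a] = phi
    return flows
```

Adding or removing a candidate only adds or removes its rows in the pairwise table; the remaining comparisons are unchanged, which is the robustness property the abstract highlights.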
Embedded Monitoring System using Bit-masking Technique
Shin Won ; Kim Tae-Wan ; Chang Chun-Hyon ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 613~618
DOI : 10.3745/KIPSTD.2006.13D.4.613
As embedded software spreads into various areas, many development tools have been built to minimize development time, but these tools are not applicable in every environment because they were created for specific platforms. This paper therefore proposes an embedded monitoring system that supports various communication environments and removes the limits on adaptability to different platforms. The system performs monitoring using the code inline technique; however, because that technique imposes monitoring sensor overhead, the monitoring process and the monitoring sensors must be optimized. This paper accordingly proposes an approach for initializing the monitoring process and a bit-masking technique for optimizing the monitoring sensors. The proposed system is applicable to all areas that use embedded systems.
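The bit-masking idea can be sketched as follows: each inlined sensor is guarded by one bit in a global mask, so deciding whether a probe fires costs a single bitwise test. The sensor names and bit assignments below are illustrative assumptions.

```python
# Sketch of bit-masked monitoring sensors: enabling/disabling a sensor
# flips one bit, and every inlined probe pays only one AND per event
# instead of a lookup or a function call into the monitor.

SENSOR_TASK_SWITCH = 1 << 0
SENSOR_MEMORY      = 1 << 1
SENSOR_DEADLINE    = 1 << 2

active_mask = SENSOR_TASK_SWITCH | SENSOR_DEADLINE   # set by the host tool
events = []

def sensor(kind, payload):
    """Inlined probe: records the event only if its bit is enabled."""
    if active_mask & kind:            # one bitwise test per probe
        events.append((kind, payload))

def run_task(name):
    sensor(SENSOR_TASK_SWITCH, name)  # fires: bit 0 is set in the mask
    sensor(SENSOR_MEMORY, name)       # skipped: bit 1 is clear
```

In a real embedded target the probe would be a macro or inlined C, but the cost model is the same: disabled sensors reduce to one AND and a branch.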
A Study on Sampling and Association Relation of Class to Express Game Software Characteristics
Kim Yong-Sic ; Cho Hyun-Hoon ; Rhew Sung-Yul ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 619~624
DOI : 10.3745/KIPSTD.2006.13D.4.619
The domestic game market is currently developing rapidly, but the game production process has not been systematized, and communication problems between game planners and game developers contribute to the failure of games. To support communication between planners and developers, this research extracts from the product the game elements that express a game's characteristics for game planning, converts the extracted elements into classes, and presents the relationships among those classes to express the relationships among the elements. A case study identifies the relationships among the extracted classes and supports systematic game planning.
Improvement of Datawarehouse Development Process by Applying the Configuration Management of CMMI
Park Jong-Mo ; Cho Kyung-San ;
The KIPS Transactions:PartD, volume 13D, issue 4, 2006, Pages 625~632
DOI : 10.3745/KIPSTD.2006.13D.4.625
A data warehouse, which extracts and saves massive analysis data from the operational servers, is a decision support tool in which data quality and processing time are very important. It is thus necessary to standardize and improve the data warehouse development process in order to stabilize data quality and improve productivity. We propose a novel, improved process for data warehouse development that applies the configuration management of CMMI (Capability Maturity Model Integration), which has become a major force in software development process improvement. In addition, we specify metrics for evaluating the data warehouse development process. Through comparison with other existing processes, we show that our proposal is more efficient in cost and productivity and also improves data quality and reusability.