REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartD
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 14D, Issue 7 - Dec 2007
Volume 14D, Issue 6 - Oct 2007
Volume 14D, Issue 5 - Aug 2007
Volume 14D, Issue 4 - Jun 2007
Volume 14D, Issue 3 - Jun 2007
Volume 14D, Issue 2 - Apr 2007
Volume 14D, Issue 1 - Feb 2007
Clustering XML Documents Considering The Weight of Large Items in Clusters
Hwang, Jeong-Hee ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 1~8
DOI : 10.3745/KIPSTD.2007.14-D.1.001
As the number of XML web documents increases, XML being a data exchange language of the advanced Internet, these documents become a target of information retrieval. Therefore, there has been research on the structure, integration, and retrieval of XML documents. This paper proposes a clustering method for XML documents based on frequent structures, as basic research toward efficient query processing and retrieval. To do so, first, the trees representing XML documents are decomposed and frequent structures are extracted from them. Second, treating the frequent structures as items of transactions, clustering is performed considering the weight of large items to adjust cluster creation and cluster cohesion. Third, we show the excellence of our method through experiments that compare it with previous methods.
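The core idea can be sketched roughly as follows. This is an illustrative assumption, not the authors' actual algorithm: documents are decomposed into root-to-leaf tag paths, paths frequent across the collection act as transaction items, and documents sharing enough frequent structures are grouped greedily (the support threshold and similarity cut-off are made up for the example).

```python
from xml.etree import ElementTree as ET

def paths(elem, prefix=()):
    """Decompose an XML tree into its root-to-leaf tag paths."""
    prefix = prefix + (elem.tag,)
    kids = list(elem)
    if not kids:
        yield "/".join(prefix)
    for k in kids:
        yield from paths(k, prefix)

def frequent_structures(docs, min_support):
    """Keep only the paths occurring in at least min_support documents."""
    counts = {}
    for d in docs:
        for p in set(paths(ET.fromstring(d))):
            counts[p] = counts.get(p, 0) + 1
    return {p for p, c in counts.items() if c >= min_support}

def cluster(docs, min_support, threshold=0.5):
    """Greedy clustering: a document joins an existing cluster when its
    frequent structures ("large items") overlap the cluster's enough."""
    freq = frequent_structures(docs, min_support)
    clusters = []  # each cluster: (set of structures, list of doc indices)
    for i, d in enumerate(docs):
        items = set(paths(ET.fromstring(d))) & freq
        for struct, members in clusters:
            if items and len(items & struct) / len(items | struct) >= threshold:
                struct |= items
                members.append(i)
                break
        else:
            clusters.append((set(items), [i]))
    return [members for _, members in clusters]
```

Running `cluster` over two structurally identical documents and one different document puts the first two into one cluster and the third into its own.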
Constructing Gene Regulatory Networks using Frequent Gene Expression Pattern and Chain Rules
Lee, Heon-Gyu ; Ryu, Keun-Ho ; Joung, Doo-Young ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 9~20
DOI : 10.3745/KIPSTD.2007.14-D.1.009
Groups of genes control the functioning of a cell through complex interactions. Such interactions of gene groups are called Gene Regulatory Networks (GRNs). Two previous data mining approaches, clustering and classification, have been used to analyze gene expression data. Though these mining tools are useful for determining membership of genes by homology, they do not identify the regulatory relationships among genes found in the same class of molecular actions. Furthermore, we need to understand the mechanism of how genes relate to and regulate one another. In order to detect regulatory relationships among genes from time-series microarray data, we propose a novel approach using frequent pattern mining and chain rules. In this approach, we propose a method for transforming gene expression data to make it suitable for frequent pattern mining, and gene expression patterns are detected by applying the FP-growth algorithm. Next, we construct a gene regulatory network from the frequent gene patterns using chain rules. Finally, we validate our proposed method through experimental results, which are consistent with published results.
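The first two steps of such a pipeline can be caricatured as below. The discretization thresholds are invented for illustration, and a plain Apriori pass stands in for FP-growth (the frequent itemsets found are the same; FP-growth is just faster):

```python
def discretize(series, up=0.2, down=-0.2):
    """Map expression changes between consecutive time points to symbols
    (a simplified stand-in for the paper's transformation step)."""
    out = []
    for a, b in zip(series, series[1:]):
        delta = b - a
        out.append("up" if delta > up else "down" if delta < down else "same")
    return out

def frequent_patterns(transactions, min_support):
    """Frequent itemsets via a level-wise Apriori pass."""
    items = sorted({i for t in transactions for i in t})
    freq, k = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        freq.update(survivors)
        k += 1
        current = list({a | b for a in survivors for b in survivors
                        if len(a | b) == k})
    return freq
```

Each transaction here would be the set of (gene, direction) items observed in one time window; frequent co-occurring items are the raw material for the chain-rule step.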
CC-GiST: A Generalized Framework for Efficiently Implementing Arbitrary Cache-Conscious Search Trees
Loh, Woong-Kee ; Kim, Won-Sik ; Han, Wook-Shin ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 21~34
DOI : 10.3745/KIPSTD.2007.14-D.1.021
Owing to the recent rapid price drop and capacity growth of main memory, the number of applications built on main memory databases is dramatically increasing. Cache miss, the phenomenon in which data required by the CPU is not resident in the cache and must be fetched from main memory, is one of the major causes of performance degradation in main memory databases. Several cache-conscious trees have been proposed for reducing cache misses and making the most of the cache in main memory databases. Since each cache-conscious tree has its own unique features, more than one cache-conscious tree may be used in a single application depending on the application's requirements. Moreover, if no existing cache-conscious tree satisfies the application's requirements, a new cache-conscious tree must be implemented solely for that application. In this paper, we propose the cache-conscious generalized search tree (CC-GiST). The CC-GiST extends the disk-based generalized search tree (GiST) [HNP95] to be cache-conscious, and provides all the common features and algorithms of the existing cache-conscious trees, including pointer compression and key compression techniques. To implement a cache-conscious tree based on the proposed CC-GiST, one need only implement a few functions specific to that tree. We show how to implement the most representative cache-conscious trees, such as the CSB+-tree, the pkB-tree, and the CR-tree, based on the CC-GiST. The CC-GiST eliminates the trouble of managing more than one cache-conscious tree in an application, and provides a framework for efficiently implementing arbitrary cache-conscious trees with new features.
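The GiST idea underlying this design can be caricatured in a few lines: the traversal logic is generic, and a concrete tree plugs in only the node-level predicate that decides which subtrees may contain matches. This sketch is a toy analogy, not the CC-GiST API (which also covers node layout, pointer compression, and key compression):

```python
class Node:
    def __init__(self, keys, children=None, entries=None):
        self.keys = keys            # one key per child (or per entry at a leaf)
        self.children = children    # internal node: list of Node
        self.entries = entries      # leaf node: list of data items

class GeneralizedSearchTree:
    """Generic search over any tree whose node semantics are supplied
    by a `consistent(key, query)` predicate, in the GiST spirit."""
    def __init__(self, root, consistent):
        self.root = root
        self.consistent = consistent

    def search(self, query):
        out, stack = [], [self.root]
        while stack:
            node = stack.pop()
            if node.children is None:  # leaf: test each entry's key
                out.extend(e for k, e in zip(node.keys, node.entries)
                           if self.consistent(k, query))
            else:                      # internal: descend into matching subtrees
                stack.extend(c for k, c in zip(node.keys, node.children)
                             if self.consistent(k, query))
        return out

# One "instance": keys are (lo, hi) ranges, queries are points.
overlaps = lambda key, q: key[0] <= q <= key[1]
```

Swapping `overlaps` for a different predicate (prefix match, rectangle intersection, ...) yields a different tree family without touching the traversal.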
Development of MPEG-7 Description-based Annotation Tool for Production of Semantic Multimedia Metadata
An, Hyoung-Geun ; Koh, Jaw-Jin ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 35~44
DOI : 10.3745/KIPSTD.2007.14-D.1.035
Recently, an increase in the quantity of multimedia data has brought a new problem: expected data should be retrieved quickly and exactly. Adequate representation of multimedia data is the key element for efficient retrieval. For this reason, the MPEG-7 standard was established for the description of multimedia data. In this paper, we propose a new approach to metadata production. The user can decompose a given content into units and easily annotate each unit by adding basic information such as time and place, as well as classification information such as events and relationships, according to the MPEG-7 standard. The objective is to automatically build a pure semantic description: a graph whose nodes are the events and whose links describe the relationships among the events. Finally, we have implemented an annotation tool (SMAT) for semantic description based on the proposed technique and assess some of the experimental results. In conclusion, we can say that the proposed annotation tool is characterized by two important properties: reusability and extendibility.
Process Management Environment for Process Improvement
Kim, Jeong-Ah ; Choi, Seung-Yong ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 45~56
DOI : 10.3745/KIPSTD.2007.14-D.1.045
As the knowledge-based society has developed, the size of the work processes that have to be handled grows and the amount of information that has to be analyzed increases. So each company is trying to construct process models that conform better to its business model. This study helps in understanding the interaction between process data and structure, which are collected during the process, by applying Six Sigma as a process improvement methodology. Under Six Sigma, businesses try to practice a Best Practice Process from the viewpoint of the Value Chain. To measure data from process execution, a schedule management method is introduced that makes it possible to collect and reflect accurate data on workers' ability. By this method, production efficiency will be improved because workers are able to perform the process correctly and manage defects preventively.
A Study of Enhanced Test Maturity Model with Test Process Improvement
Kim, Ki-Du ; Kim, Young-Chul ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 57~66
DOI : 10.3745/KIPSTD.2007.14-D.1.057
For software development organizations, enhancing software quality is a very important issue given the rapid progress of the software industry. In particular, there have been diverse attempts to enhance the test maturity of software organizations through various test maturity models. But the current test maturity models based on the CMM (Capability Maturity Model) lack actual testing measurement and only measure the level of test maturity. To solve these problems, we suggest a `double V-model` to execute both the software development process and the test process simultaneously, and also a `test attributes to maturity levels correlation matrix` for evaluating the level of test maturity, including definitions of test attributes and levels. That is, we enhance the TMM (Test Maturity Model) by adopting the `Improvement Suggestions` of TPI (Test Process Improvement), which eases the evaluation of an organization's test maturity and gives the direction of improvement needed to raise the measured organization's test maturity level. As a result, we expect to contribute to raising the test maturity level of organizations.
Model-Based Quantitative Reengineering for Identifying Components from Object-Oriented Systems
Lee, Eun-Joo ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 67~82
DOI : 10.3745/KIPSTD.2007.14-D.1.067
Because classes in object-oriented systems are too detailed and specific, their reusability can be low. Components, which are more coarse-grained than objects, help manage software complexity effectively and facilitate software reuse. Furthermore, component technology has become essential with the appearance of new frameworks such as MDA and SOA. Consequently, it is necessary to reengineer an existing object-oriented system into a component-based system suitable for those new environments. In this paper, we propose a model-based quantitative reengineering methodology to identify components from object-oriented systems. We expand the system model and process, which were defined in our prior work, more formally and precisely. A system model, constructed from an object-oriented system, is used to extract and refine components in quantitative ways. We developed a supporting tool and show the effectiveness of the methodology by applying it to an existing object-oriented system.
A Study on Modeling Heterogeneous Embedded S/W Components based on Model Driven Architecture with Extended xUML
Kim, Woo-Yeol ; Kim, Young-Chul ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 83~88
DOI : 10.3745/KIPSTD.2007.14-D.1.083
In this paper, we introduce an MDA-based development method for embedded software components. This method applies the MDA approach to solve problems with the reusability of heterogeneous embedded software systems. With our proposed method, we produce a `Target Independent Meta Model` (TIM), which is transformed into a `Target Specific Model` (TSM), and generate `Target Dependent Code` (TDC) via the TSM. We would like to reuse a meta-model to develop heterogeneous embedded software systems. To achieve this, we extend xUML to represent elements that it cannot otherwise express (such as concurrency and real-time properties) in the dynamic modeling of a particular system. We introduce an `MDA-based Embedded S/W Modeling Tool` with the extended xUML. With this tool, we can more easily model embedded or concurrent/real-time software systems. The paper contains two examples of heterogeneous embedded systems that illustrate the proposed approach.
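The TIM-to-TSM-to-code pipeline can be illustrated with a deliberately tiny model transformation. Everything here (the component name, the platform names, the type mappings) is a made-up example of the general MDA pattern, not the paper's tool:

```python
# TIM: a target-independent description of one component.
tim = {"component": "TempSensor",
       "ops": [("read", "int"), ("reset", "void")]}

def to_tsm(tim, platform):
    """Transform the TIM into a target-specific model by mapping
    abstract types onto the platform's concrete types."""
    type_map = {"arm-rtos": {"int": "int32_t", "void": "void"},
                "avr-bare": {"int": "int16_t", "void": "void"}}[platform]
    return {"component": tim["component"],
            "ops": [(name, type_map[t]) for name, t in tim["ops"]]}

def to_code(tsm):
    """Emit target-dependent code (C-like stubs) from the TSM."""
    lines = [f"struct {tsm['component']} {{}};"]
    lines += [f"{t} {tsm['component']}_{name}(void);"
              for name, t in tsm["ops"]]
    return "\n".join(lines)
```

The same TIM, pushed through two different platform mappings, yields two different code artifacts, which is the reuse the abstract mentions.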
A Composite Cluster Analysis Approach for Component Classification
Lee, Sung-Koo ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 89~96
DOI : 10.3745/KIPSTD.2007.14-D.1.089
Various classification methods have been developed to support component reuse. These classification methods enable the user to access the needed components quickly and easily. Conventional classification approaches suffer from the following problems: a labor-intensive domain analysis effort to build a classification structure, difficulty in representing inter-component relationships, difficulty of maintenance as the domain evolves, and applicability only to a limited domain. In order to solve these problems, this paper describes a composite cluster analysis approach for component classification. The approach is a combination of a hierarchical cluster analysis method, which generates a stable clustering structure automatically, and a non-hierarchical cluster analysis concept, which classifies new components automatically. The clustering information generated by the proposed approach can support the domain analysis process.
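The two halves of such a composite scheme can be sketched as follows, with components represented as feature sets. The average-linkage criterion, Jaccard similarity, and cut-off value are illustrative choices, not the paper's:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def hierarchical_clusters(components, cut=0.4):
    """Hierarchical half: agglomeratively merge the two most similar
    clusters until no pair exceeds the cut-off similarity."""
    clusters = [[c] for c in components]
    def sim(x, y):  # average linkage over component feature sets
        return sum(jaccard(a, b) for a in x for b in y) / (len(x) * len(y))
    while len(clusters) > 1:
        best, i, j = max((sim(x, y), i, j)
                         for i, x in enumerate(clusters)
                         for j, y in enumerate(clusters) if i < j)
        if best < cut:
            break
        clusters[i] += clusters.pop(j)
    return clusters

def classify(new_component, clusters):
    """Non-hierarchical half: drop a new component into the most
    similar existing cluster, without redoing the domain analysis."""
    def sim(cluster):
        return sum(jaccard(new_component, c) for c in cluster) / len(cluster)
    best = max(clusters, key=sim)
    best.append(new_component)
    return best
```

The stable structure comes from the one-off hierarchical pass; day-to-day additions use only the cheap `classify` step.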
Design and Implementation of B2Bi Collaboration Workflow System for Efficient Business Process Management based on J2EE
Lee, Chang-Mog ; Chang, Ok-Bae ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 97~106
DOI : 10.3745/KIPSTD.2007.14-D.1.097
In this paper, the business process is modeled easily by distinguishing between the business process and the work logic. Based on this model, a B2Bi collaboration workflow modeling tool, which facilitates collaboration, was designed and implemented. The collaboration workflow modeling tool consists of three components: a business process modeling tool, an execution engine, and a monitoring tool. First, the business process modeling tool is used to build a process map that reflects the business logic of an application quickly and accurately. Second, the execution engine provides a real-time execution environment for business process instances. Third, the monitoring tool provides real-time monitoring of the business processes in operation. In addition, the system supports flexibility and expandability based on XML and J2EE for linkage with previously used legacy systems, and suggests a solution for new corporate strategy and operation.
Building IT ROI Assessment System for Estimating the Monetary Value of Non-financial Benefits
Kim, Young-Woon ; Chong, Ki-Won ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 107~120
DOI : 10.3745/KIPSTD.2007.14-D.1.107
When it comes to IT investment, making the right decision is a challenge for management. Unlike investment in other business areas, it is hard to measure direct cost versus effect in IT. To validate investment in IT, it is necessary to establish an objective assessment system that both the provider and the beneficiary of information can accept, and to suggest a quantitative assessment tool that includes measuring standards and methods for the economic effect of new investment. This study has therefore developed an IT ROI methodology that can prove investment validity by accepting the strong points of existing models while complementing their weak points, and by analyzing IT investment and IT efforts. It has also built an IT ROI system reflecting the methodology, which has been applied to 21 companies in 5 business categories. This system is designed to provide an effective and objective decision-making tool for IT investment by proving what positive impact IT can have on business activities.
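The arithmetic behind monetizing non-financial benefits reduces to something like the sketch below. The weighting scheme is purely hypothetical; the paper's methodology for deriving the conversion weights is the actual contribution:

```python
def it_roi(financial_benefit, nonfinancial_benefits, total_cost):
    """Hypothetical ROI sketch: non-financial benefits are first
    converted to money via (value, weight) pairs, then fed into the
    standard ROI formula (benefit - cost) / cost."""
    monetary = financial_benefit + sum(value * weight
                                       for value, weight in nonfinancial_benefits)
    return (monetary - total_cost) / total_cost

# e.g. 100 of direct benefit, one non-financial benefit valued at 50
# with a 0.5 conversion weight, against a cost of 100:
roi = it_roi(100.0, [(50.0, 0.5)], 100.0)  # -> 0.25, i.e. 25% return
```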
A Case Study of Software Architecture Design by Applying the Quality Attribute-Driven Design Method
Suh, Yong-Suk ; Hong, Seok-Boong ; Kim, Hyeon-Soo ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 121~130
DOI : 10.3745/KIPSTD.2007.14-D.1.121
In software development, designing the architecture prior to implementation is essential for success. This paper presents a case in which we successfully designed the software architecture of the radiation monitoring system (RMS) for the HANARO research reactor, currently operating in KAERI, by applying a quality attribute-driven design method modified from the attribute-driven design (ADD) introduced by Bass. The quality attribute-driven design method consists of the following procedures: eliciting functionality and quality requirements of the system as architecture drivers, selecting tactics to satisfy the drivers, determining architectures based on the tactics, and implementing and validating the architectures. Availability, maintainability, and interchangeability were elicited as quality requirements; hot-standby dual servers and weakly coupled modularization were selected as tactics; and a client-server structure and an object-oriented data processing structure were determined as architectures for the RMS. The architecture was implemented using Adroit, a commercial off-the-shelf software tool, and was validated through function-oriented testing. We found that the design method in this paper is efficient for a project with constraints such as a low budget and a short development period. The architecture will be reused for the development of other RMSs in KAERI. Further work is necessary to quantitatively evaluate the architecture.
Development of a Measurement Model of Personal Information Competency in Information Environment
Yoon, Chui-Young ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 131~138
DOI : 10.3745/KIPSTD.2007.14-D.1.131
This study developed an efficient model for measuring personal information competency in an enterprise in the information environment. The model includes four measurement domains, twelve factors, and feasible measurement items. We verified the validity and reliability of the developed model by factor analysis and reliability analysis through a pilot test using SPSS software, and present concrete measurement items that can efficiently gauge personal information competency. The developed model was applied to the measurement of 264 workers in an enterprise in order to test its practicability and utility, and the measurement results are presented. This model will contribute to improving the information competency of human resources in industrial fields.
A Framework for an Advanced Learning Mechanism in Context-aware Systems using Improved Back-Propagation Algorithm
Zha, Wei ; Eo, Sang-Hun ; Kim, Gyoung-Bae ; Cho, Sook-Kyoung ; Bae, Hae-Young ;
The KIPS Transactions:PartD, volume 14D, issue 1, 2007, Pages 139~144
DOI : 10.3745/KIPSTD.2007.14-D.1.139
In seeking to improve the workload efficiency and inference capability of context-aware systems, we propose a new framework for an advanced learning mechanism that uses an improved back-propagation (BP) algorithm. Even though the learning mechanism is one of the most important parts of a context-aware system, existing algorithms that facilitate such systems by elaborating the learning mechanism with the user's context information are rare. BP is the most adaptable algorithm for the learning mechanism of context-aware systems. By using the improved BP algorithm, the proposed framework drastically improves inference capability, so that overall performance is far better than that of other systems. Also, using a special system cache, the framework manages the workload efficiently. Experiments show an obvious improvement in the overall performance of context-aware systems using the proposed framework.
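A plain back-propagation learner plus an inference cache, the two ingredients the abstract names, can be sketched minimally as below. This is textbook BP, not the paper's improved variant, and the cache keyed on raw context vectors is an illustrative simplification:

```python
import math
import random

class TinyBP:
    """Minimal one-hidden-layer back-propagation network with an
    inference cache (context -> result) to avoid recomputation."""
    def __init__(self, n_in, n_hid, seed=0):
        rnd = random.Random(seed)
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.w2 = [rnd.uniform(-1, 1) for _ in range(n_hid)]
        self.cache = {}

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(self, x):
        h = [self._sig(sum(w * xi for w, xi in zip(row, x)))
             for row in self.w1]
        return h, self._sig(sum(w * hi for w, hi in zip(self.w2, h)))

    def train_step(self, x, target, lr=0.5):
        h, y = self.forward(x)
        d_out = (y - target) * y * (1 - y)        # output-layer delta
        for i, hi in enumerate(h):                # propagate error backward
            d_hid = d_out * self.w2[i] * hi * (1 - hi)
            self.w2[i] -= lr * d_out * hi
            for j, xj in enumerate(x):
                self.w1[i][j] -= lr * d_hid * xj
        return (y - target) ** 2                  # squared error before update

    def infer(self, x):
        key = tuple(x)
        if key not in self.cache:                 # cache miss: run the network
            self.cache[key] = self.forward(x)[1]
        return self.cache[key]
```

Repeated `train_step` calls on the same example drive the squared error down, and repeated `infer` calls on the same context hit the cache instead of re-running the forward pass.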