REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartD
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 15D, Issue 6 - Dec 2008
Volume 15D, Issue 5 - Oct 2008
Volume 15D, Issue 4 - Aug 2008
Volume 15D, Issue 3 - Jun 2008
Volume 15D, Issue 2 - Apr 2008
Volume 15D, Issue 1 - Feb 2008
Memory Efficient Query Processing over Dynamic XML Fragment Stream
Lee, Sang-Wook ; Kim, Jin ; Kang, Hyun-Chul ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 1~14
DOI : 10.3745/KIPSTD.2008.15-D.1.1
This paper addresses query processing on mobile devices, where memory capacity is limited. To process a query against a large volume of XML data on such a device, techniques are required for fragmenting the XML data into chunks and streaming and processing them, so that queries can be answered without materializing the XML data in its entirety. Previous schemes such as XFrag, XFPro, and XFLab are not scalable with respect to the size of the XML data because they lack proper memory management: once information on XML fragments needed for query processing is stored, it is never deleted, even after it becomes of no use. Consequently, when XML fragments are generated dynamically and streamed without bound, normal completion of query processing cannot be guaranteed. In this paper, we address the scalability of query processing over a dynamic XML fragment stream, extending the previous schemes with techniques for deleting the fragment information accumulated during query processing. Performance experiments with our implementation showed that the extended schemes considerably outperform the previous ones in memory efficiency and in scalability with respect to the size of the XML data.
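The cleanup idea can be sketched as follows. This is a minimal illustration, not the paper's XFrag/XFPro/XFLab extensions; the fragment encoding and function name are our own. The key point is that each fragment's stored information is deleted (`pop`) the moment it is consumed, instead of being kept for the lifetime of the query.

```python
def assemble_root(fragments, root_id):
    """Assemble a document from streamed fragments. A fragment's content is a
    list mixing text with ('hole', id) placeholders to be filled by later
    fragments (a simplified hole-filler encoding, assumed for illustration)."""
    table = {frag_id: list(content) for frag_id, content in fragments}

    def expand(fid):
        parts = []
        for piece in table.pop(fid):     # delete info once it is of no use
            if isinstance(piece, tuple) and piece[0] == 'hole':
                parts.append(expand(piece[1]))
            else:
                parts.append(piece)
        return ''.join(parts)

    return expand(root_id)
```

After `expand` returns, `table` holds no leftover entries, which is the memory bound the extended schemes aim for.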
Frequent Patterns Mining using only one-time Database Scan
Chai, Duck-Jin ; Jin, Long ; Lee, Yong-Mi ; Hwang, Bu-Hyun ; Ryu, Keun-Ho ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 15~22
DOI : 10.3745/KIPSTD.2008.15-D.1.15
In this paper, we propose an efficient frequent-pattern mining algorithm that scans the database only once. The proposed algorithm builds a bipartite graph that records which transactions contain which large items, and large itemsets are then found from this graph. The bipartite graph is constructed during the single database scan that identifies the large items. In a large database, the transactions containing a given large item cannot be located easily; in the bipartite graph, however, large items and transactions are linked to each other, so those transactions can be traced through the link information. The bipartite graph thus serves as an indexed database of the inclusion relationships between large items and transactions. Large itemsets can be found quickly because the proposed method scans the database only once and thereafter scans only the indexed bipartite graph; moreover, it generates no candidate itemsets.
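A minimal sketch of the single-scan idea, using item-to-transaction-id sets as the link structure (a vertical-layout simplification of the paper's bipartite graph, not the authors' exact algorithm): one pass records, for every item, the transactions containing it, and larger itemsets are then found purely by intersecting those sets.

```python
from collections import defaultdict
from itertools import combinations

def mine_frequent(transactions, min_support):
    """Find all frequent itemsets with a single scan of the database."""
    # one-time scan: link every item to the transactions containing it
    item_tids = defaultdict(set)
    for tid, items in enumerate(transactions):
        for item in items:
            item_tids[item].add(tid)

    # keep only the large (frequent) items and their transaction links
    large = {i: t for i, t in item_tids.items() if len(t) >= min_support}
    frequent = {frozenset([i]): len(t) for i, t in large.items()}

    # grow itemsets by intersecting transaction-id sets; the raw database
    # is never scanned again
    current = list(frequent)
    while current:
        nxt = {}
        for a, b in combinations(current, 2):
            cand = a | b
            if len(cand) != len(a) + 1:
                continue
            tids = set.intersection(*(large[i] for i in cand))
            if len(tids) >= min_support:
                nxt[cand] = len(tids)
        frequent.update(nxt)
        current = list(nxt)
    return frequent
```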
Mining of Frequent Structures over Streaming XML Data
Hwang, Jeong-Hee ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 23~30
DOI : 10.3745/KIPSTD.2008.15-D.1.23
Internet technology and XML are basic building blocks of context awareness in ubiquitous environments, and XML data in the form of continuous streams are common in network applications over the Internet; accordingly, query processing over streaming XML data has been actively studied. As a basis for efficient querying, we propose a labeled ordered tree model representing XML, together with a mining method that extracts frequent structures from streaming XML data. That is, continuously arriving XML data are modeled as a stream tree called the XFP_tree, and frequent structures are extracted exactly from the XFP_tree of the current window so that recent data are mined. The proposed method can serve as a basis for query processing and indexing of XML stream data.
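A toy sketch of window-based frequent-structure mining. This simplifies the paper's XFP_tree considerably: here a "structure" is just a root-to-node label path, and the current window is passed in as a list of trees.

```python
from collections import Counter

def frequent_paths(window_trees, min_support):
    """Report label paths occurring in at least min_support trees of the
    current window. A tree is (label, [children])."""
    counts = Counter()
    for tree in window_trees:
        seen = set()                      # count each path once per tree
        stack = [(tree, ())]
        while stack:
            (label, children), prefix = stack.pop()
            path = prefix + (label,)
            seen.add(path)
            for child in children:
                stack.append((child, path))
        counts.update(seen)
    return {path for path, c in counts.items() if c >= min_support}
```

Sliding the window forward would simply re-invoke this on the most recent trees, which is how recent data stay emphasized.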
Hybrid Lower-Dimensional Transformation for Similar Sequence Matching
Moon, Yang-Sae ; Kim, Jin-Ho ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 31~40
DOI : 10.3745/KIPSTD.2008.15-D.1.31
We generally use lower-dimensional transformations to convert high-dimensional sequences into low-dimensional points in similar sequence matching. These traditional transformations, however, show different indexing performance depending on the type of time-series data; that is, the choice of lower-dimensional transformation has a significant influence on indexing performance in similar sequence matching. To solve this problem, in this paper we propose a hybrid approach that integrates multiple transformations and uses them in a single multidimensional index. We first propose the new notion of a hybrid lower-dimensional transformation, which applies different lower-dimensional transformations to a single sequence. We next define the hybrid distance for computing the distance between transformed sequences, and then formally prove that the hybrid approach performs similar sequence matching correctly. We also present index-building and similar-sequence-matching algorithms that use the hybrid approach. Experimental results on various time-series data sets show that our hybrid approach outperforms the single-transformation-based approach, indicating that it can be widely used for time-series data with differing characteristics.
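A toy sketch of the hybrid idea, assuming PAA as one component transformation and an invented mean/total-variation feature pair as the second (the paper does not fix the component transformations here): features from different transformations of the same sequence are concatenated into one index entry, and the hybrid distance is computed in that combined space.

```python
def paa(x, f):
    """Piecewise Aggregate Approximation: mean of each of f equal segments
    (assumes len(x) is divisible by f)."""
    seg = len(x) // f
    return [sum(x[i * seg:(i + 1) * seg]) / seg for i in range(f)]

def mean_tv(x):
    """A second, cheap transformation: overall mean and total variation
    (an illustrative feature pair of our own, not from the paper)."""
    return [sum(x) / len(x), sum(abs(b - a) for a, b in zip(x, x[1:]))]

def hybrid_transform(x):
    # one index entry holds features produced by *different* transformations
    return paa(x, 2) + mean_tv(x)

def hybrid_distance(u, v):
    # Euclidean distance in the combined (hybrid) feature space
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```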
Meta Data Model based on C-A-V Structure for Context Information in Ubiquitous Environment
Choi, Ok-Joo ; Yoon, Yong-Ik ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 41~46
DOI : 10.3745/KIPSTD.2008.15-D.1.41
In ubiquitous computing environments, improving a computer's access to context information for dynamic service adaptation enriches communication in human-computer interaction and makes more useful computational services possible. A new data structure is needed to apply dynamic information flexibly to the current context-information repository and to enhance communication between human and computer. In this paper, we propose a C-A-V (Category-Attribute-Value) context-metadata structure that supports the dynamic service adaptation required to improve communication in user-centric environments. We also classify the context metadata and define its relationships with other context information on the basis of the application services and changes in the external environment.
Web Ontology Building Methodology for Semantic Web Application
Kim, Su-Kyung ; Ahn, Kee-Hong ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 47~60
DOI : 10.3745/KIPSTD.2008.15-D.1.47
The success of a semantic web application, currently based on web technology, depends on constructing a web ontology that provides rules and inference over knowledge. To this end, this study compared and analyzed previously proposed ontology construction methods, investigated the characteristics of the semantic web and web ontologies, defined web ontology as the base technology of a semantic web application along with its knowledge representation steps, examined the technical elements related to current web technology, and on this basis proposed a web ontology construction method for semantic web applications. Web ontologies in various knowledge fields were built by applying the proposed method, and their performance was evaluated through inference verification. The results confirm that the proposed construction method can build ontologies suitable for semantic web applications.
A Study on the Derivation and Sensitivity Analysis of the Adjustment Factor in the Software Cost Estimation Guidelines
Byun, Boon-Hee ; Kwon, Ki-Tae ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 61~72
DOI : 10.3745/KIPSTD.2008.15-D.1.61
One of the most significant tasks in a software development project is estimating, early in the development cycle, how much the software will cost to develop. Because the software development environment and technology change very rapidly, accurate estimation must take them into account, and it is therefore important to select suitable adjustment factors and suitable values for them. To this end, this paper applies the AHP method and analyzes the sensitivity of the adjustment factors to the decision metrics. We find that the value of the application-type adjustment factor responds more sensitively to data complexity and control complexity than to processing complexity, and that the value of the language adjustment factor responds more sensitively to the supplied manpower and coding time than to debugging time. In future work, we will study the selection of additional adjustment factors and suitable values for them under the environment and technology of domestic software development, and will calculate the value of the language adjustment factor for individual programming languages.
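The AHP weight derivation and a one-entry sensitivity probe can be sketched as follows; the geometric-mean approximation and the example matrix are our own illustration, not the paper's actual comparison data.

```python
import math

def ahp_weights(matrix):
    """Priority weights from an AHP pairwise-comparison matrix via the
    geometric-mean (logarithmic least squares) approximation."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def weight_shift(matrix, i, j, delta):
    """How much the first weight moves when judgment (i, j) is perturbed by
    delta; a crude probe of the kind of sensitivity analyzed in the paper."""
    base = ahp_weights(matrix)[0]
    pert = [row[:] for row in matrix]
    pert[i][j] += delta
    pert[j][i] = 1.0 / pert[i][j]       # keep the matrix reciprocal
    return ahp_weights(pert)[0] - base
```

A factor whose weight shifts more under the same perturbation is the more sensitive one, which is how the data-complexity vs. processing-complexity comparison above can be read.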
An adaptive load balancing method for RFID middlewares based on the Standard Architecture
Park, Jae-Geol ; Chae, Heung-Seok ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 73~86
DOI : 10.3745/KIPSTD.2008.15-D.1.73
Because of their capability for automatic identification of objects, RFID (Radio Frequency Identification) technologies have extended their application areas to logistics, healthcare, and food management systems. Load balancing is a basic technique for improving the scalability of a system by moving load from overloaded middlewares to underloaded ones, and adaptive load balancing is known to be effective for distributed systems with large load variance under unpredictable conditions. Load balancing is needed for RFID middlewares because they must efficiently process the vast numbers of RFID tags collected from multiple RFID readers; and since the load on RFID middlewares can vary widely and unpredictably, an adaptive approach that dynamically chooses a proper load balancing strategy for the current load is desirable. This paper proposes an adaptive load balancing approach for RFID middlewares and presents its design and implementation. We first determine a performance model through an experiment with a real RFID middleware, and then derive a set of proper load balancing strategies for high, medium, and low system loads from a simulation of various strategies based on that performance model.
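The adaptive selection step reduces to a mapping from measured load to strategy; the thresholds and strategy names below are purely illustrative placeholders, since the paper derives its mapping from simulation over a measured performance model.

```python
def choose_strategy(load, thresholds=(0.3, 0.7)):
    """Map the current system load (0.0-1.0) to a load balancing strategy.
    Thresholds and strategy names are illustrative, not from the paper."""
    low, high = thresholds
    if load < low:
        return "no-balancing"            # balancing overhead not worth paying
    if load < high:
        return "round-robin"             # spread newly arriving work evenly
    return "least-loaded-migration"      # actively move load off hot nodes
```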
A Partition Technique of UML-based Software Models for Multi-Processor Embedded Systems
Kim, Jong-Phil ; Hong, Jang-Eui ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 87~98
DOI : 10.3745/KIPSTD.2008.15-D.1.87
Along with the demand for powerful processing units in embedded systems, new approaches to embedded software development are required to support that demand. To improve resource utilization and system performance, software modeling techniques must consider the features of the hardware architecture. This paper proposes a partitioning technique for UML-based software models that focuses on generating software components allocatable to a multiprocessor architecture. Our partitioning technique first transforms UML models into CBCFGs (Constraint-Based Control Flow Graphs) and then slices the CBCFGs with consideration of parallelism and data dependency. We believe our proposal is practically applicable to platform-specific modeling and performance estimation in model-driven embedded software development.
Verification for Multithreaded Java Code using Java Memory Model
Lee, Min ; Kwon, Gi-Hwon ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 99~106
DOI : 10.3745/KIPSTD.2008.15-D.1.99
Recently developed compilers perform optimizations to speed up the execution of a program, and these optimizations may reorder the program's statements. Such reordering causes no problem in a single-threaded program, but it can introduce significant errors in a multi-threaded one. State-of-the-art model checkers such as JavaPathfinder do not consider the reordering performed during compiler optimization, because they assume a single memory model. In this paper, we develop a new verification tool for Java source programs based on the Java Memory Model; our tool handles statement reordering when verifying Java programs. As a result, it finds an error in a test program that the traditional model checker JavaPathfinder does not reveal.
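The kind of error involved can be illustrated with the classic store-buffering litmus test (T1: `x=1; r1=y` and T2: `y=1; r2=x`). The Python model below is our own illustration, not the paper's tool: under sequentially consistent interleavings the outcome `(r1, r2) == (0, 0)` is unreachable, but once the independent statements inside each thread may be reordered, as the Java Memory Model permits, it becomes reachable.

```python
from itertools import combinations

def run(schedule):
    mem = {'x': 0, 'y': 0}
    regs = {}
    for var, val in schedule:
        if var in ('x', 'y'):
            mem[var] = val          # write to shared memory
        else:
            regs[var] = mem[val]    # read shared memory into a register
    return (regs['r1'], regs['r2'])

def outcomes(allow_reorder):
    t1s = [(('x', 1), ('r1', 'y'))]
    t2s = [(('y', 1), ('r2', 'x'))]
    if allow_reorder:  # independent statements within a thread may be swapped
        t1s.append((('r1', 'y'), ('x', 1)))
        t2s.append((('r2', 'x'), ('y', 1)))
    results = set()
    for t1 in t1s:
        for t2 in t2s:
            for pos in combinations(range(4), 2):   # slots taken by thread 1
                sched = [None] * 4
                sched[pos[0]], sched[pos[1]] = t1
                rest = [i for i in range(4) if i not in pos]
                sched[rest[0]], sched[rest[1]] = t2
                results.add(run(sched))
    return results
```

A checker that only enumerates the `allow_reorder=False` state space will never visit the `(0, 0)` outcome, which is why JMM-aware verification explores more behaviors.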
A Restructuring Technique of Legacy Software Systems for Unit Testing
Moon, Joong-Hee ; Lee, Nam-Yong ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 107~112
DOI : 10.3745/KIPSTD.2008.15-D.1.107
The maintenance of legacy software systems is very important in software engineering. During maintenance, regression tests confirm that changed software preserves its behavior, but most regression testing is done at the system level and rarely at the unit level because no unit test cases exist. This paper proposes how to restructure legacy software systems and build unit test cases as an asset. It applies the technique to a specific module of a real software development project and analyzes the test coverage results. With continued study of automatic restructuring techniques and test case generation, a significant advance in legacy software maintenance can be expected.
Automatic Web Services Composition System using Web Services Choreography
Lee, Sang-Kyu ; Han, Sang-Yong ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 113~120
DOI : 10.3745/KIPSTD.2008.15-D.1.113
Web services composition has gained considerable attention because of the widespread use of web services and SOA. Recently, various studies on automatic web services composition have been conducted to realize more dynamic and intelligent SOA environments. However, no complete solution for automatic web services composition exists yet, and previous work has several problems: automatic composition based on syntactic information has low correctness because of incorrect semantic linking, and many approaches produce, as the result of composition, a process that is hard to execute in practice. In this paper, an improved automatic web services composition system based on web services choreography is proposed. The system improves correctness, and the result of composition is a more concrete process.
Implementation of payment settlement system through Cyber Bank for Electronic Commerce
Kim, Moon-Shik ; Lee, Eun-Seok ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 121~130
DOI : 10.3745/KIPSTD.2008.15-D.1.121
In line with the increased weight and variety of electronic commerce in business activities, a new type of payment settlement and banking system is required that can store, create, and transfer value beyond the existing methods of payment settlement. Cyber banking systems have drawn strong attention as a solution to these requirements. Existing cyber bank systems are difficult to operate and administer and require a large initial facility investment, as they rely on current business processes. Because they adopt the conventional credit card system instead of a non-stop payment system between seller and buyer, they cannot store, create, or transfer value, and thus still lack a working cash payment function on the Internet. This paper describes (1) One Process One Input (OPOI), an integrated application process essential for the software development of a cyber bank, and (2) an application process for a payment settlement system applicable to electronic commerce on the Internet; on this basis, it presents (3) the design and implementation of a payment settlement system through a cyber bank for electronic commerce. With the suggested process, we attempt to solve the problems of existing cyber bank systems and to explore the possibility of an advanced cyber bank serving as a non-stop payment settlement system. The effectiveness of the suggested system has been confirmed in practice.
Design and Implementation of Embedded Linux-based Mobile Teller which supports CDMA and WiBro networks
Kim, Do-Hyung ; Yun, Min-Hong ; Lee, Kyung-Hee ; Lee, Cheol-Hoon ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 131~138
DOI : 10.3745/KIPSTD.2008.15-D.1.131
This paper describes the implementation of the first application service of its kind based on embedded Linux: Mobile Teller, which uses a WiBro network for data communication and a CDMA network for voice communication. With the appearance of the WiBro service, dual-mode terminals supporting these two heterogeneous networks have become available, but applications that use both networks effectively to provide better services to users are rarely developed. With Mobile Teller, when the sender types text on a dual-mode terminal, the text is transmitted over the WiBro network to a TTS server on the Internet; the TTS server converts the text into voice and transmits the voice data back to the dual-mode terminal, which finally sends the voice to the receiver over the CDMA network. Mobile Teller thus makes voice communication possible in noisy environments or when a user has difficulty speaking.
A Study on Behavior Rule Induction Method of Web User Group using 2-tier Clustering
Hwang, Jun-Won ; Song, Doo-Heon ; Lee, Chang-Hoon ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 139~146
DOI : 10.3745/KIPSTD.2008.15-D.1.139
Identifying useful web user groups and inducing their behavior patterns is very important in the eCRM domain. When inducing groups of users with similar inclinations, the reliability of the groups decreases because online user data contain uncertainty. In this paper, we apply 2-tier clustering, which uses the outcome of the interaction with data from the other tier. We also propose a method that induces user behavior patterns from a cluster, and compare the proposed method with C4.5.
Design and Implementation of Dynamic Web Server Page Builder on Web
Shin, Yong-Min ; Kim, Byung-Ki ;
The KIPS Transactions:PartD, volume 15D, issue 1, 2008, Pages 147~154
DOI : 10.3745/KIPSTD.2008.15-D.1.147
Along with the growth of Internet use, various web applications have been developed that publish information managed in internal databases on the web through web server pages. In most cases, however, the programs were written directly, either without a systematic development methodology or by applying a heavyweight methodology that was inappropriate and decreased development efficiency. A web application that fails to follow a systematic development methodology and uses a script language can decrease the productivity of program development, maintenance, and reuse. In this paper, a builder that automatically writes dynamic web server pages was designed and implemented for fast and effective script-based web application development with a database. It suggests a normalized script model and, by analyzing dynamic web server page patterns against the database, generates standardized scripts through a data-bound control tag creator, thereby contributing to productivity in web application development and maintenance.