REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
The KIPS Transactions:PartD
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 8D, Issue 6 - Oct 2001
Volume 8D, Issue 5 - Oct 2001
Volume 8D, Issue 4 - Aug 2001
Volume 8D, Issue 3 - Jun 2001
Volume 8D, Issue 2 - Apr 2001
Volume 8D, Issue 1 - Feb 2001
Design of Internet GIS Integration System using CORBA
Gang, Byeong-Geuk ; Nam, Gwang-U ; Kim, Sang-Ho ; Lee, Seong-Ho ; Ryu, Geun-Ho ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 193~202
Currently, the components of a GIS typically run on physically separate, stand-alone systems. With rapid advances in internet technology, GIS users require access not only to their own information but also to heterogeneous, remote GIS databases, and the ability to share them. However, these GISs cannot handle data formats different from their own. In this paper, we therefore propose to integrate the components of heterogeneous, remote GISs using CORBA, a distributed object technology, together with mediator and wrapper technology in the client and server layers, in order to solve these problems.
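The mediator/wrapper integration the abstract describes could be sketched as follows. This is a minimal illustration of the pattern, not the paper's actual CORBA design; all class names, record formats, and the common schema are assumptions.

```python
# Hypothetical sketch of the mediator/wrapper pattern: each wrapper hides one
# GIS server's native data format, and the mediator fans queries out to all
# of them. Names and schemas are illustrative, not the paper's CORBA IDL.

class Wrapper:
    """Hides one GIS server's native data format behind a common interface."""
    def __init__(self, name, records, to_common):
        self.name = name
        self._records = records        # data in the server's native format
        self._to_common = to_common    # converts native record -> common schema

    def query(self, predicate):
        # Translate every native record, then filter in the common schema.
        return [r for r in map(self._to_common, self._records) if predicate(r)]

class Mediator:
    """Client-side layer that fans a query out to heterogeneous servers."""
    def __init__(self, wrappers):
        self._wrappers = wrappers

    def query(self, predicate):
        results = []
        for w in self._wrappers:
            results.extend(w.query(predicate))
        return results

# Two servers with different native formats, unified by their wrappers.
a = Wrapper("server-a", [{"id": 1, "x": 3, "y": 4}],
            lambda r: {"oid": r["id"], "coord": (r["x"], r["y"])})
b = Wrapper("server-b", [(2, (7, 8))],
            lambda r: {"oid": r[0], "coord": r[1]})
med = Mediator([a, b])
print(med.query(lambda r: r["coord"][0] > 0))   # records from both servers
```

In the paper's setting the wrappers would live behind CORBA object references rather than local objects, but the division of labor is the same: format translation in the wrapper, query distribution and merging in the mediator.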
Reduction of Approximate Rule based on Probabilistic Rough sets
Kwon, Eun-Ah ; Kim, Hong-Gi ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 203~210
These days, data is being collected and accumulated in a wide variety of fields. The stored data itself forms an information system that helps us make decisions. An information system includes many necessary and unnecessary attributes, so many algorithms have been developed for finding useful patterns in the data and for approximate reasoning about new objects. We are interested in simple and understandable rules that can represent useful patterns. In this paper, we propose an algorithm, based on probabilistic rough set theory, that reduces the information in the system to a minimum. The proposed algorithm uses a value that tolerates the accuracy of classification. This tolerance value helps minimize the attributes needed to reason about a new object by reducing the conditional attributes, which has the advantage of reducing the time needed to generate rules. We tested the proposed algorithm on the IRIS data and the Wisconsin Breast Cancer data. The experimental results show that the algorithm retrieves a small reduct and minimizes the size of the rules under the tolerated classification rate.
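The idea of dropping conditional attributes while classification accuracy stays above a tolerance threshold could be sketched roughly as below. The toy table, attribute names, and the greedy strategy are illustrative assumptions; the paper's algorithm operates on full rough-set approximations.

```python
# Illustrative sketch of tolerance-based attribute reduction in the spirit of
# the abstract; the data and the greedy drop order are assumptions.

def accuracy(table, attrs, decision):
    """Fraction of objects whose values on attrs determine the decision."""
    groups = {}
    for row in table:
        groups.setdefault(tuple(row[a] for a in attrs), set()).add(row[decision])
    consistent = sum(1 for row in table
                     if len(groups[tuple(row[a] for a in attrs)]) == 1)
    return consistent / len(table)

def reduce_attrs(table, attrs, decision, tolerance=0.9):
    """Greedily drop attributes while accuracy stays >= tolerance."""
    kept = list(attrs)
    for a in attrs:
        trial = [x for x in kept if x != a]
        if trial and accuracy(table, trial, decision) >= tolerance:
            kept = trial
    return kept

data = [
    {"color": "red",  "size": "big",   "cls": "yes"},
    {"color": "red",  "size": "small", "cls": "yes"},
    {"color": "blue", "size": "big",   "cls": "no"},
    {"color": "blue", "size": "small", "cls": "no"},
]
# "size" carries no information about "cls", so it is dropped.
print(reduce_attrs(data, ["color", "size"], "cls", tolerance=1.0))  # ['color']
```

Lowering the tolerance below 1.0 would allow even more attributes to be dropped at the cost of some misclassified objects, which is exactly the trade-off the abstract's tolerance value controls.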
Design and Implementation of an XML-based Planning Agent for Internet Marketplaces
Lee, Yong-Ju ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 211~220
A planning agent supporting customers plays a distinguished role in internet marketplaces. Although several internet marketplaces have been built with the maturity of tools based on internet and distributed technologies, there has been no actual study up to now of the implementation of such a planning agent. This paper describes the design and implementation of an XML-based planning agent for internet marketplaces. Since implementing internet marketplaces encounters problems similar to those in other fields, such as multidatabase or workflow management systems, we first compare their features. Next, we identify the functions and roles of the planning agent. The planning agent is implemented using COM+, ASP, and XML, and is demonstrated using real data from an existing system.
Efficient Schemes for Cache Consistency Maintenance in a Mobile Database System
Lim, Sang-Min ; Kang, Hyun-Chul ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 221~232
Due to the rapid advance of wireless communication technology, demand for data services in mobile environments is gradually increasing. Caching at a mobile client can reduce bandwidth consumption and query response time, but the mobile client must then maintain cache consistency. It can be efficient for the server to broadcast a periodic cache invalidation report for cache consistency within a cell. When a long period of disconnection prevents a mobile client from checking the validity of its cache based solely on the invalidation reports received, the mobile client can request that the server check cache validity. In doing so, some schemes may be more efficient than others depending on the number of available channels and the mobile clients involved. In this paper, we propose new cache consistency schemes that are efficient (1) when channel capacity is sufficient for the mobile clients involved and (2) when it is not, and we evaluate their performance.
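The two-path consistency check the abstract outlines, using the broadcast report after a short disconnection and an explicit uplink request after a long one, could be sketched as below. The report window, timestamps, and message shapes are illustrative assumptions, not the paper's protocol.

```python
# A minimal sketch of broadcast-based cache invalidation with an uplink
# fallback; all parameters and data here are invented for illustration.

REPORT_WINDOW = 10  # ticks covered by each broadcast invalidation report

def invalidation_report(updates, now):
    """Server side: ids of items updated within the broadcast window."""
    return {oid for oid, t in updates.items() if now - t <= REPORT_WINDOW}

def refresh_cache(cache, last_sync, now, report, ask_server):
    """Client side: trust the broadcast report after a short disconnection,
    otherwise fall back to an explicit per-item validity check (uplink)."""
    if now - last_sync <= REPORT_WINDOW:
        stale = set(cache) & report            # broadcast only, no uplink
    else:
        stale = {oid for oid in cache if not ask_server(oid, last_sync)}
    for oid in stale:
        del cache[oid]
    return stale

updates = {"a": 95, "b": 40}                   # oid -> last update time
report = invalidation_report(updates, now=100)
cache = {"a": "...", "b": "...", "c": "..."}
still_valid = lambda oid, since: updates.get(oid, 0) <= since
stale = refresh_cache(cache, last_sync=93, now=100, report=report,
                      ask_server=still_valid)
print(stale)   # only "a" was updated recently enough to appear in the report
```

The relative cost of the two paths depends on how many clients share the broadcast channel versus how many uplink requests the server can absorb, which is the trade-off the paper's schemes target.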
Quantification Methods for Software Entity Complexity with Hybrid Metrics
Hong, Euii-Seok ; Kim, Tae-Guun ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 233~240
As software technology progresses and software quantification becomes more important, many metrics have been proposed to quantify a variety of system entities. These metrics can be classified into two forms: scalar metrics and metric vectors. Though some recent studies have pointed out the composition problem of the scalar metric form, many scalar metrics are used successfully in software development organizations because of their practical applicability. In this paper, we conclude that a hybrid metric form weighting external complexity is most suitable as a scalar metric. On this basis, we propose a general framework for constructing hybrid metrics that is independent of development methodology and target system type. This framework was used successfully in two projects that quantified, respectively, the analysis phase of a structured methodology and the design phase of an object-oriented real-time system. Any organization can quantify system entities in a short time using this framework.
Software Development Effort Estimation Using Neural Network Model
Lee, Sang-Un ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 241~246
The area of software measurement in software engineering has been active for more than thirty years. There is a huge body of research, but still no definitive software cost estimation model. To measure the cost and effort of a software project, we need to estimate the size of the software. A number of software metrics are identified in the literature; the most frequently cited measures are LOC (lines of code) and FPA (function point analysis). The FPA approach has features that overcome the major problems with using LOC as a measure of system size. This paper presents neural network (NN) models that relate software development effort to software size measured in FPs and function element types. The research describes appropriate NN modeling in the context of a case study of 24 software development projects. This paper also compares the NN model with a regression analysis model and finds that the NN model has better estimation accuracy.
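The core idea, learning a mapping from function points to effort from past projects, can be illustrated with a toy single-neuron (linear) model trained by gradient descent. The synthetic project data below is invented, not the paper's 24 projects, and the actual study used a multi-layer network rather than this one-weight sketch.

```python
# Toy sketch: fit effort to function points with one linear neuron trained
# by stochastic gradient descent. Data and hyperparameters are assumptions.

def train(points, lr=1e-5, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for fp, effort in points:
            pred = w * fp + b
            err = pred - effort
            w -= lr * err * fp   # gradient of squared error w.r.t. w
            b -= lr * err        # gradient of squared error w.r.t. b
    return w, b

# Synthetic projects: effort roughly 0.5 person-months per function point.
projects = [(100, 52), (200, 98), (300, 151), (400, 203)]
w, b = train(projects)
print(w * 250 + b)   # effort estimate for a hypothetical 250-FP project
```

A real NN model would add hidden units and inputs for the individual function element types (inputs, outputs, files, inquiries, interfaces), letting it capture the nonlinearity that a plain regression line misses.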
Proposal and Evaluation of Metrics for Measurement of Documents Reliability
Nam, Ki-Hyun ; Han, Pan-Am ; Yang, Hae-Sool ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 247~256
Software is becoming larger in scale and richer in function day by day. Users' requirements for software, especially for software quality, are also rising continuously. Methods that can satisfy these requirements are being studied from various viewpoints. Above all, research on quality evaluation systems and methodology is actively in progress, aiming to improve software quality by feeding evaluation results back to developers. In this paper, we define metrics and develop quality measurement tables for the reliability characteristic of the international software quality standard ISO/IEC 9126, following its system of quality characteristics, subcharacteristics, and internal characteristics. We also present evaluation results for development products using these internal characteristics.
A Slice-based Complexity Measure
Moon, Yu-Mi ; Choi, Wan-Kyoo ; Lee, Sung-Joo ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 257~264
We developed the SIFG (Slice-based Information Graph), which models the information flow in a program on the basis of the flow of data tokens on data slices. We then defined the SCM (Slice-based Complexity Measure), which measures program complexity as the complexity of the information flow on the SIFG. The SCM satisfies the necessary properties for a complexity measure proposed by Briand et al. Unlike existing measures, the SCM can measure not only the control and data flow of a program but also its physical size.
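The flavor of a slice-based measure, combining information-flow links between slices with physical size, could be illustrated as below. This is a deliberate simplification: the SIFG construction and the SCM definition in the paper are more elaborate, and the slice sets here are invented.

```python
# Hypothetical illustration of a slice-based complexity score: count
# information-flow links (slices sharing statements) plus total slice size,
# so both flow and physical size contribute. Not the paper's exact SCM.

def slice_complexity(slices):
    """slices: variable -> set of statement numbers on its data slice."""
    names = sorted(slices)
    links = sum(1 for i, a in enumerate(names) for b in names[i + 1:]
                if slices[a] & slices[b])      # slices that exchange tokens
    size = sum(len(s) for s in slices.values())  # physical size component
    return links + size

prog = {"x": {1, 2, 5}, "y": {2, 3, 5}, "z": {4}}
print(slice_complexity(prog))   # 1 link (x and y share statements) + 7 = 8
```

The point of such a combined score, as the abstract argues, is that a measure built only on control flow or only on LOC would miss one of the two components.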
WAP Abstract Kernel Layer Supporting Multi-platform
Gang, Yeong-Man ; Han, Sun-Hui ; Jo, Guk-Hyeon ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 265~272
In implementing a complicated application like WAP (Wireless Application Protocol) on a mobile terminal, which has the characteristics of a bare machine and diverse kernel aspects such as control, interrupts, and IPC (Inter-Process Communication), a special methodology is needed. Otherwise, development will cost more money and human resources, and the product may even miss its time-to-market window. This paper suggests AKL (Abstract Kernel Layer) for the design and implementation of WAP on a multi-platform basis. AKL runs on various kernels, including REX, MS-DOS, MS-Windows, UNIX, and LINUX. To this end, AKL minimizes machine-dependent features and supports a consistent API (Application Program Interface). It therefore shortens the porting time for a device and eases maintenance. We validated our approach by porting WAP onto a Palm V PDA and a mobile phone with AKL.
High-Speed Korean Address Searching System for Efficient Delivery Point Code Generation
Kim, Gyeong-Hwan ; Lee, Seok-Goo ; Shin, Mi-Young ; Nam, Yun-Seok ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 273~284
A systematic approach to interpreting Korean addresses based on the postal code is presented in this paper. The implementation focuses on producing the final delivery point code from the various types of recognized addresses. There are two stages in the address interpretation: 1) agreement verification between the recognized postal code and the upper part of the address, and 2) analysis of the lower part of the address. In the agreement verification procedure, the recognized postal code is used as the key to the address dictionary, and each of the retrieved addresses is compared with the words in the recognized address. As a result, the boundary between the upper part and the lower part is located. A confusion matrix, introduced to correct possibly mis-recognized characters, is applied to improve the performance of this process. In the procedure for interpreting the lower part of the address, a delivery code is assigned using the house number and/or the building name. Several interpretation rules have been developed based on real addresses collected. Experiments evaluating the proposed approach were performed using addresses collected from the Kwangju and Pusan areas.
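The confusion-matrix step, matching a recognized word against dictionary entries while tolerating substitutions the recognizer is known to make, could be sketched as below. The confusable pairs are invented Latin-alphabet examples standing in for the Korean character confusions the paper actually handles.

```python
# A hedged sketch of confusion-tolerant word matching; the confusion pairs
# below are invented examples, not the paper's confusion matrix.

CONFUSABLE = {("O", "0"), ("0", "O"), ("I", "1"), ("1", "I")}

def tolerant_match(recognized, dictionary_word):
    """True if the words agree character by character, allowing substitutions
    that the recognizer's confusion matrix marks as likely."""
    if len(recognized) != len(dictionary_word):
        return False
    return all(a == b or (a, b) in CONFUSABLE
               for a, b in zip(recognized, dictionary_word))

print(tolerant_match("R0AD", "ROAD"))   # True: 0/O are confusable
print(tolerant_match("R0AD", "READ"))   # False: 0/E are not
```

In the full system this comparison runs against every dictionary entry retrieved under the recognized postal code, so a single mis-recognized character does not prevent locating the upper/lower boundary of the address.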
A Study on the Method of High-Speed Reading of Postal 4-state Bar Code for Supporting Automatic Processing
Park, Moon-Sung ; Kim, Hye-Kyu ; Jung, Hoe-Kyung ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 285~294
Recently, many efforts have been made at ETRI to develop an automatic processing system for delivery sequence sorting, which requires a postal 4-state bar code system to encode delivery points. This paper addresses the extension of the read range and the improvement of the image processing method. To improve the image processing procedure, we apply an information acquisition method that places two basic thresholds on the horizontal axis of the gray image, based on the reference information of the 4-state bar code symbology. Symbol values are computed after creating the two threshold values from the information obtained by searching the horizontal axis values. The implemented 4-state bar code reader obtains the symbol values within 30~60 msec (58,000~116,000 mail items/hour) without noise removal or image rotation, even for inclined images.
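Once the two thresholds are fixed, classifying each bar into one of the four states reduces to comparing its vertical extents against them, which could be sketched as follows. The coordinates, threshold values, and state letters are illustrative assumptions, not the paper's actual parameters.

```python
# Illustrative 4-state bar classification using two thresholds; a bar
# crossing the upper threshold has an ascender, one crossing the lower
# threshold has a descender. All values here are invented.

def classify_bar(top, bottom, t_upper, t_lower):
    """Classify one bar from its top/bottom pixel rows (y grows downward)."""
    asc = top < t_upper        # bar extends above the tracker band
    desc = bottom > t_lower    # bar extends below the tracker band
    if asc and desc:
        return "F"   # full bar: ascender and descender
    if asc:
        return "A"   # ascender only
    if desc:
        return "D"   # descender only
    return "T"       # tracker only

# Thresholds assumed to come from the tracker band of a scanned symbol.
bars = [(2, 28), (2, 18), (12, 28), (12, 18)]
decoded = "".join(classify_bar(t, b, t_upper=10, t_lower=20) for t, b in bars)
print(decoded)   # "FADT": one bar of each of the four states
```

Because each bar needs only two comparisons, this style of decoding avoids the per-pixel work of deskewing or denoising the image, which is consistent with the speed the abstract reports.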
A Model for evaluating the efficiency of inputting Hangul on a telephone keyboard
Koo, Min-Mo ; Lee, Mahn-Young ;
The KIPS Transactions:PartD, volume 8D, issue 3, 2001, Pages 295~304
The standards for a telephone Hangul keyboard should be decided in terms of objective factors: the number of strokes and the fingers' moving distance. Many designers will agree on these factors because they can be calculated objectively. We therefore developed a model that can evaluate the efficiency of inputting Hangul on a telephone keyboard in terms of these two factors. Compared with other models, the major features of this model are as follows: (1) it calculates the number of strokes rather than typing time; (2) co-occurrence frequencies counted on the KOREA-1 Corpus are used directly; (3) a total set of 67 consonants and vowels is used; and (4) the model can evaluate keyboards that use a syllabic function key (the complete key, the null key, or the final-consonant key) as well as keyboards that adopt no syllabic function key. However, there are many other factors for judging the efficiency of Hangul input on a telephone keyboard. For a more accurate assessment of a telephone Hangul keyboard, we must consider experimental data as well as logical data.
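The stroke-count factor of such a model can be illustrated with a frequency-weighted expected stroke count per jamo. The tiny layouts and frequency table below are invented for illustration; the actual model uses the KOREA-1 Corpus counts and the full 67-jamo set.

```python
# Simplified sketch of the stroke-count factor: the expected number of key
# presses per jamo under a layout, weighted by corpus frequency. The layouts
# and frequencies below are illustrative, not KOREA-1 data.

def expected_strokes(layout, freq):
    """layout: jamo -> key presses needed; freq: jamo -> relative frequency."""
    total = sum(freq.values())
    return sum(layout[j] * f for j, f in freq.items()) / total

layout_a = {"ㄱ": 1, "ㅏ": 1, "ㄴ": 2}   # e.g. ㄴ needs a second tap
layout_b = {"ㄱ": 2, "ㅏ": 1, "ㄴ": 1}   # the common ㄱ needs two taps instead
freq = {"ㄱ": 50, "ㅏ": 40, "ㄴ": 10}

print(expected_strokes(layout_a, freq))  # 1.1
print(expected_strokes(layout_b, freq))  # 1.5: worse, 2-tap jamo is frequent
```

The comparison shows why corpus frequencies matter: both layouts have one 2-tap jamo, but the layout that reserves the extra tap for a rare jamo wins. The full model adds the fingers' moving distance and the syllabic-function-key variants on top of this.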