REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Journal of Information Processing Systems
Journal Basic Information
Journal DOI :
Publisher : Korea Information Processing Society
Editor in Chief : Young-Sik Jeong / Mohammad S. Obaidat
Volume & Issues
Volume 7, Issue 4 - Dec 2011
Volume 7, Issue 3 - Sep 2011
Volume 7, Issue 2 - Jun 2011
Volume 7, Issue 1 - Mar 2011
The Principle of Justifiable Granularity and an Optimization of Information Granularity Allocation as Fundamentals of Granular Computing
Pedrycz, Witold
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 397~412
DOI : 10.3745/JIPS.2011.7.3.397
Granular Computing has emerged as a unified and coherent framework for the design, processing, and interpretation of information granules. Information granules are formalized within various frameworks such as sets (interval mathematics), fuzzy sets, rough sets, shadowed sets, and probabilities (probability density functions), to name several of the most visible approaches. In spite of the apparent diversity of the existing formalisms, there are underlying commonalities articulated in terms of fundamentals, algorithmic developments, and ensuing application domains. In this study, we introduce two pivotal concepts: a principle of justifiable granularity and a method of optimal allocation of information granularity, where information granularity is regarded as an important design asset. We show that these two concepts are relevant to various formal setups of information granularity and offer constructs supporting the design of information granules and their processing. A suite of applied studies focuses on knowledge management, in which we identify several key categories of schemes.
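The principle of justifiable granularity can be illustrated with a minimal sketch (not taken from the paper): choose the upper bound of an interval anchored at the data median so as to maximize the product of coverage (how much data the interval captures) and specificity (how narrow the interval is). The exponential specificity function and the decay rate `alpha` are illustrative assumptions.

```python
import math

def justifiable_upper_bound(data, alpha=1.0):
    """Pick the upper bound b of an interval [median, b] that maximizes
    coverage(b) * specificity(b), where coverage counts the data points
    falling in [median, b] and specificity = exp(-alpha * (b - median)).
    A 1-D sketch of the justifiable-granularity trade-off."""
    xs = sorted(data)
    med = xs[len(xs) // 2]
    best_b, best_v = med, 0.0
    for b in xs:
        if b < med:
            continue
        coverage = sum(1 for x in xs if med <= x <= b)
        specificity = math.exp(-alpha * (b - med))
        v = coverage * specificity
        if v > best_v:
            best_v, best_b = v, b
    return best_b
```

A larger `alpha` penalizes wide intervals more heavily, so the chosen bound stays closer to the median; a small `alpha` lets coverage dominate and the interval widens.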
A Novel Similarity Measure for Sequence Data
Pandi, Mohammad H. ; Kashefi, Omid ; Minaei, Behrouz
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 413~424
DOI : 10.3745/JIPS.2011.7.3.413
A variety of metrics have been introduced to measure the similarity of two given sequences. These widely used metrics appear in applications ranging from spell correctors and categorizers to new sequence mining applications. Different metrics consider different aspects of sequences, but the essence of any sequence lies in the ordering of its elements. In this paper, we propose a novel sequence similarity measure that matches all ordered pairs of one sequence against a Hasse diagram built from the other sequence. In contrast to existing approaches, the idea behind the proposed metric is to extract all ordering features in order to capture sequence properties. We designed a clustering problem to evaluate our sequence similarity metric. Experimental results showed the superiority of our proposed metric in maximizing the purity of clustering compared to metrics such as d2, Smith-Waterman, Levenshtein, and Needleman-Wunsch. The limitation of those methods originates from neglected sequence features, which are considered in our proposed metric.
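The idea of comparing sequences through their ordered pairs can be sketched as follows. This illustrative metric uses a Jaccard ratio over ordered-pair sets rather than the paper's Hasse-diagram construction, so it should be read as an approximation of the concept, not the authors' method.

```python
def ordered_pairs(seq):
    """All ordered pairs (a, b) such that a appears before b in seq."""
    return {(seq[i], seq[j]) for i in range(len(seq)) for j in range(i + 1, len(seq))}

def pair_similarity(s, t):
    """Jaccard similarity of the ordered-pair sets of two sequences:
    |shared pairs| / |all pairs|. Captures ordering, not just content."""
    p, q = ordered_pairs(s), ordered_pairs(t)
    if not p and not q:
        return 1.0
    return len(p & q) / len(p | q)
```

For example, "abc" versus "acb" share the pairs (a,b) and (a,c) but disagree on the order of b and c, yielding a similarity of 0.5, whereas edit distance would see only a single transposition.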
Wavelet-based Feature Extraction Algorithm for an Iris Recognition System
Panganiban, Ayra ; Linsangan, Noel ; Caluyo, Felicito
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 425~434
DOI : 10.3745/JIPS.2011.7.3.425
The success of iris recognition depends mainly on two factors: image acquisition and the iris recognition algorithm. In this study, we present a system that considers both factors and focuses on the latter. The proposed algorithm aims to find the most efficient wavelet family, and its coefficients, for encoding the iris templates of the experiment samples. The algorithm, implemented in software, performs segmentation, normalization, feature encoding, data storage, and matching. Feature encoding is performed by decomposing the normalized iris image using the Haar and biorthogonal wavelet families at various levels. The vertical coefficients are encoded into the iris template and stored in the database. The performance of the system is evaluated using the number of degrees of freedom, the False Reject Rate (FRR), the False Accept Rate (FAR), and the Equal Error Rate (EER); these metrics show that the proposed algorithm can be employed in an iris recognition system.
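A minimal sketch of wavelet-based feature encoding, assuming a 1-D Haar decomposition of an even-length signal and sign-based bit encoding; the paper works on 2-D normalized iris images and also uses biorthogonal wavelets, so this is only an illustration of the encode-then-match pipeline.

```python
def haar_step(signal):
    """One Haar decomposition level: pairwise averages (approximation)
    and pairwise half-differences (detail). Assumes even length."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def iris_code(row, levels=2):
    """Encode the sign of the detail coefficients across decomposition
    levels as a bit template (1 for non-negative, 0 otherwise)."""
    bits = []
    approx = list(row)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        bits.extend(1 if d >= 0 else 0 for d in detail)
    return bits

def hamming(a, b):
    """Fractional Hamming distance between two equal-length templates."""
    return sum(x != y for x, y in zip(a, b)) / len(a)
```

Matching then reduces to thresholding the Hamming distance between a stored template and a probe template, which is how FAR/FRR trade-offs are typically tuned.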
Probabilistic Soft Error Detection Based on Anomaly Speculation
Yoo, Joon-Hyuk
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 435~446
DOI : 10.3745/JIPS.2011.7.3.435
Microprocessors are becoming increasingly vulnerable to soft errors due to current trends in semiconductor technology scaling. Traditional redundant multi-threading architectures provide perfect fault tolerance by re-executing all computations. However, such full re-execution significantly increases the verification workload on processor resources, resulting in severe performance degradation. This paper presents a proactive verification management approach that mitigates the verification workload to increase performance with minimal effect on overall reliability. An anomaly-speculation-based filter checker is proposed to assign verification priorities before the re-execution process starts. The technique exploits a value similarity property, defined by the frequent occurrence of partially identical values. Based on the biased distribution of the similarity distance measure, this paper further exploits similar values for soft error tolerance with anomaly speculation. Extensive measurements show that the majority of instructions produce values that differ from the previous result value in only a few bits. Experimental results show that the proposed scheme makes the processor 180% faster than a traditional fully-fault-tolerant processor with minimal impact on the overall soft error rate.
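The value similarity property can be sketched as a bit-level distance check: a result that differs from the previous result in many bits is anomalous and worth verifying first. The word width and threshold below are illustrative assumptions, not values from the paper.

```python
def bit_distance(a, b, width=32):
    """Number of differing bits between two width-bit values."""
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def verification_priority(prev, curr, threshold=8):
    """Anomaly speculation sketch: a result far (in bits) from the
    previous result of the same instruction is suspicious, so it gets
    high verification priority; near-identical results get low priority."""
    return "high" if bit_distance(prev, curr) > threshold else "low"
```

Because most instructions produce values within a few bits of their previous result, most re-executions land in the low-priority bucket and the expensive verification bandwidth is spent only on outliers.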
An Approach to Art Collections Management and Content-based Recovery
De Celis Herrero, Concepcion Perez ; Alvarez, Jaime Lara ; Aguilar, Gustavo Cossio ; Garcia, Maria Josefa Somodevilla
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 447~458
DOI : 10.3745/JIPS.2011.7.3.447
This study presents a comprehensive solution to collection management based on the Cataloging Cultural Objects (CCO) model. The developed system uses IT to make it easier to manage and disseminate the collections safeguarded in museums and galleries. In particular, we present our approach to non-structured search and retrieval of objects based on the annotation of artwork images. In this methodology, we introduce a faceted search used as a framework for multi-classification and for exploring/browsing complex information bases in a guided, yet unconstrained, way through a visual interface.
Utilizing Various Natural Language Processing Techniques for Biomedical Interaction Extraction
Park, Kyung-Mi ; Cho, Han-Cheol ; Rim, Hae-Chang
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 459~472
DOI : 10.3745/JIPS.2011.7.3.459
The vast body of biomedical literature is an important source for discovering biomedical interaction information. However, obtaining interaction information from it is complicated, because most of the literature is not easily machine-readable. In this paper, we present a method for extracting biomedical interaction information, assuming that the biomedical Named Entities (NEs) have already been identified. The proposed method labels all possible pairs of the given biomedical NEs as INTERACTION or NO-INTERACTION using a Maximum Entropy (ME) classifier. The features used by the classifier are obtained by applying various NLP techniques such as POS tagging, base phrase recognition, parsing, and predicate-argument recognition. In particular, specific verb predicates (e.g., activate, inhibit, and diminish) and their biomedical NE arguments are very useful features for identifying interacting NE pairs. Based on this, we devised a two-step method: 1) an interaction verb extraction step to find biomedically salient verbs, and 2) an argument relation identification step to generate partial predicate-argument structures between extracted interaction verbs and their NE arguments. In the experiments, we analyzed how much each applied NLP technique improves performance. The proposed method improves on the baseline method by more than 2%. The use of external contextual features, obtained from outside the NEs, is crucial for this improvement. We also compare the performance of the proposed method against co-occurrence-based and rule-based methods; the results demonstrate that the proposed method considerably improves performance.
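The pairing of NEs with a verb-based contextual feature can be sketched as follows. The verb list, the naive de-pluralization, and the token-span format are illustrative assumptions, far simpler than the ME-classifier features and predicate-argument structures the paper actually uses.

```python
# Hypothetical interaction-verb lexicon (the paper extracts salient verbs
# automatically in its first step).
INTERACTION_VERBS = {"activate", "inhibit", "diminish", "bind", "regulate"}

def ne_pairs_with_features(tokens, ne_spans):
    """Generate all NE pairs with one contextual feature: whether an
    interaction verb occurs between them. ne_spans are (start, end)
    token indices, end exclusive, in sentence order."""
    pairs = []
    for i in range(len(ne_spans)):
        for j in range(i + 1, len(ne_spans)):
            (_, e1), (s2, _) = ne_spans[i], ne_spans[j]
            between = tokens[e1:s2]
            # crude de-pluralization: strip a trailing "s" before lookup
            has_verb = any(t.lower().rstrip("s") in INTERACTION_VERBS for t in between)
            pairs.append(((i, j), has_verb))
    return pairs
```

A classifier would then label each pair INTERACTION or NO-INTERACTION from features like this one plus the external contextual features the abstract highlights.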
Integrated Software Quality Evaluation: A Fuzzy Multi-Criteria Approach
Challa, Jagat Sesh ; Paul, Arindam ; Dada, Yogesh ; Nerella, Venkatesh ; Srivastava, Praveen Ranjan ; Singh, Ajit Pratap
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 473~518
DOI : 10.3745/JIPS.2011.7.3.473
Software measurement is a key factor in managing, controlling, and improving software development processes. Software quality is one of the most important factors in assessing the global competitive position of any software company. Thus, quantifying quality parameters and integrating them into quality models is essential. Software quality criteria are not easily measured and quantified. Many attempts have been made to quantify software quality parameters exactly using various models such as the ISO/IEC 9126 quality model, Boehm's model, and McCall's model. In this paper, an attempt has been made to provide a tool for precisely quantifying software quality factors with the help of the quality factors stated in the ISO/IEC 9126 model. Due to the unpredictable nature of software quality attributes, a fuzzy multi-criteria approach has been used to evaluate the quality of the software.
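A fuzzy multi-criteria aggregation of quality factors can be sketched as below. The membership-function shape, breakpoints, and weights are illustrative assumptions, not those of the ISO/IEC 9126-based tool described in the abstract.

```python
def membership_good(x):
    """Right-shoulder fuzzy membership of a rating in the set 'good':
    0 below 0.4, rising linearly to 1 at 0.8 (assumed breakpoints)."""
    if x <= 0.4:
        return 0.0
    if x >= 0.8:
        return 1.0
    return (x - 0.4) / 0.4

def software_quality(factor_scores, weights):
    """Weighted fuzzy aggregation of ISO/IEC 9126-style factor ratings
    (e.g. functionality, reliability, usability) into one quality index
    in [0, 1]."""
    assert len(factor_scores) == len(weights)
    total = sum(weights)
    return sum(w * membership_good(s) for s, w in zip(factor_scores, weights)) / total
```

Doubling the weight of a weakly rated factor pulls the aggregate down sharply, which is exactly the sensitivity a multi-criteria quality model is meant to expose.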
A Fast Snake Algorithm for Tracking Multiple Objects
Fang, Hua ; Kim, Jeong-Woo ; Jang, Jong-Whan
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 519~530
DOI : 10.3745/JIPS.2011.7.3.519
A snake is an active contour for representing object contours. Traditional snake algorithms are often used to represent the contour of a single object. However, if there is more than one object in the image, the snake model must adapt to determine the corresponding contour of each object. Moreover, previously initialized snake contours risk producing wrong results when tracking multiple objects in successive frames because they handle topology changes poorly. To overcome this problem, in this paper we present a new snake method for efficiently tracking the contours of multiple objects. Our proposed algorithm provides a straightforward approach for rapidly splitting and connecting snake contours, which traditional snakes usually cannot handle gracefully. Experimental results on various test sequences with multiple objects show good performance, demonstrating that the proposed method is both effective and accurate.
A Fair and Efficient Congestion Avoidance Scheme Based on the Minority Game
Kutsuna, Hiroshi ; Fujita, Satoshi
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 531~542
DOI : 10.3745/JIPS.2011.7.3.531
In this paper, we propose a new congestion control scheme for high-speed networks. The basic idea of the proposed scheme is to adopt a game-theoretic model called the Minority Game (MG) to realize a selective reduction of the transmission speed of senders. More concretely, upon detecting congestion, the scheme starts a game among all senders participating in the communication. The losers of the game reduce their transmission speed by a multiplicative factor. MG is a game that has recently attracted considerable attention; it has the remarkable property that the number of winners converges to half the number of players, in spite of the selfish behavior of players trying to increase their own profit. By using this property of MG, we can realize a fair reduction of transmission speed that is more efficient than previous schemes, in which all senders uniformly reduce their transmission speed. The effect of the proposed scheme is evaluated by simulation. The results indicate that the proposed scheme realizes a selective reduction of transmission speed, is sufficiently fair compared to other simple randomized schemes, and is sufficiently efficient compared to other conventional schemes.
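The MG-based back-off can be sketched as follows: each sender randomly joins one of two groups, and the majority group (the losers) reduces its rate multiplicatively, so roughly half of the senders back off instead of all of them. The choice of two groups, the factor `beta`, and the tie-breaking rule are illustrative assumptions, not the paper's exact protocol.

```python
import random

def minority_game_round(rates, beta=0.5, rng=None):
    """One congestion event: each sender picks side 0 or 1 at random;
    senders on the majority side lose and multiply their rate by beta,
    senders on the minority side keep their current rate."""
    rng = rng or random.Random()
    sides = [rng.randrange(2) for _ in rates]
    ones = sum(sides)
    # minority side is the one chosen by fewer senders (ties go to side 0)
    minority = 1 if ones < len(sides) - ones else 0
    return [r if s == minority else r * beta for r, s in zip(rates, sides)]
```

Contrast with TCP-style multiplicative decrease, where every sender cuts its rate on congestion; here at most the majority does, which is the source of the efficiency gain the abstract claims.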
A Study on the Business Strategy of Smart Devices for Multimedia Contents
Lee, Hong-Joo
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 543~548
DOI : 10.3745/JIPS.2011.7.3.543
Information technology is changing the business value chain and business systems. This change stems from shifts in the business value chain and in the value creation factors of business. Technology companies and researchers are developing new businesses, but many of them cannot find successful ways to analyze and develop a business in a specific way. In this paper, first, the value creation motive in business is analyzed through a literature review. Second, business attributes are analyzed while considering the value creation motive and the business factors in management. Finally, the business attributes of information technology are studied through a review of previous research papers on this topic.
Efficient Proof of Vote Validity Without Honest-Verifier Assumption in Homomorphic E-Voting
Peng, Kun
Journal of Information Processing Systems, volume 7, issue 3, 2011, Pages 549~560
DOI : 10.3745/JIPS.2011.7.3.549
Vote validity proof and verification is an efficiency bottleneck and a privacy drawback in homomorphic e-voting. The existing vote validity proof technique is inefficient and only achieves honest-verifier zero knowledge. In this paper, an efficient proof and verification technique is proposed to guarantee vote validity in homomorphic e-voting. The new proof technique is mainly based on hash function operations and needs only a very small number of costly public-key cryptographic operations. It can handle untrusted verifiers and achieves stronger zero-knowledge privacy. As a result, the efficiency and privacy of homomorphic e-voting applications are significantly improved.