REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Journal of Information Processing Systems
Journal Basic Information
Korea Information Processing Society
Volume & Issues
Volume 8, Issue 4 - Dec 2012
Volume 8, Issue 3 - Sep 2012
Volume 8, Issue 2 - Jun 2012
Volume 8, Issue 1 - Mar 2012
Evaluation of an Abstract Component Model for Embedded Systems Development
Bunse, Christian ; Choi, Yunja ; Gross, Hans Gerhard ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 539~554
DOI : 10.3745/JIPS.2012.8.4.539
Model-driven and component-oriented development is increasingly being used in the development of embedded systems. When combined, the two paradigms provide several advantages, such as higher reuse rates and improved system quality. Model-driven, component-oriented development should be accompanied by a component model and a method that prescribes how the component model is used. This article provides an overview of the MARMOT method, which consists of an abstract component model and a methodology for the development of embedded systems. The paper describes a feasibility study that demonstrates MARMOT's capability to support system design, verification, implementation, and reuse. Results indicate that model-driven, component-based development following the MARMOT method outperforms Agile development for embedded systems and leads to maintainable systems and higher-than-normal reuse rates.
An Adaptive Workflow Scheduling Scheme Based on an Estimated Data Processing Rate for Next Generation Sequencing in Cloud Computing
Kim, Byungsang ; Youn, Chan-Hyun ; Park, Yong-Sung ; Lee, Yonggyu ; Choi, Wan ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 555~566
DOI : 10.3745/JIPS.2012.8.4.555
The cloud environment makes it possible to analyze large data sets on a scalable computing infrastructure. In the bioinformatics field, applications are composed of complex workflow tasks that require huge data storage as well as a compute-intensive parallel workload. Many distributed solutions have been introduced, but they focus on static resource provisioning with a batch-processing scheme in a local computing farm and data storage. For a large-scale workflow system, it is inevitable and valuable to outsource all or part of its tasks to public clouds to reduce resource costs. Problems arise, however, from the transfer time of huge datasets and from the unbalanced completion times of different problem sizes. In this paper, we propose an adaptive resource-provisioning scheme that includes run-time data distribution and collection services to hide the data transfer time. The proposed scheme optimizes the allocation ratio of computing elements to the different datasets in order to minimize the total makespan under resource constraints. We conducted experiments with a well-known sequence alignment algorithm, and the results show that the proposed scheme is efficient for the cloud environment.
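The core idea of the allocation-ratio optimization can be illustrated with a minimal sketch: give each dataset a share of the computing elements proportional to its estimated work (size divided by the estimated per-node processing rate), so all partitions finish at roughly the same time and the makespan shrinks. The function name and the proportional heuristic are illustrative assumptions, not the paper's exact algorithm.

```python
def allocate_nodes(datasets, total_nodes):
    """Split `total_nodes` across datasets in proportion to their
    estimated work (size / per-node processing rate), so that all
    partitions finish at roughly the same time (smaller makespan).
    `datasets` is a list of (size, rate) pairs. Illustrative sketch."""
    # work[i] = node-seconds needed to process dataset i
    work = [size / rate for size, rate in datasets]
    total = sum(work)
    # proportional allocation, at least one node per dataset;
    # rounding may over/under-shoot total_nodes slightly in this sketch
    return [max(1, round(total_nodes * w / total)) for w in work]
```

With two datasets of 100 and 300 units at the same rate and 8 nodes, the split is 2:6, so both partitions take the same estimated time.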
A Strong Designated Verifiable DL Based Signcryption Scheme
Mohanty, Sujata ; Majhi, Banshidhar ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 567~574
DOI : 10.3745/JIPS.2012.8.4.567
This paper presents a strong designated verifiable signcryption scheme, in which a message is signcrypted by a signcryptor and only a specific receiver, called the "designated verifier", can verify it using his own secret key. The scheme is secure, as an adversary cannot verify the signature even if the secret key of the signer is compromised or leaked. The security of the proposed scheme rests on the difficulty of two computationally hard problems, namely, the Discrete Logarithm Problem (DLP) and the Integer Factorization Problem (IFP). A security analysis of the scheme shows that it can withstand an adaptive chosen-ciphertext attack. The scheme can be very useful in organizations that need to send confidential documents to a specific recipient, and is also applicable to real-life scenarios such as e-commerce applications, e-banking, and e-voting.
An Active Co-Training Algorithm for Biomedical Named-Entity Recognition
Munkhdalai, Tsendsuren ; Li, Meijing ; Yun, Unil ; Namsrai, Oyun-Erdene ; Ryu, Keun Ho ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 575~588
DOI : 10.3745/JIPS.2012.8.4.575
Exploiting unlabeled text data with a relatively small labeled corpus has been an active and challenging research topic in text mining, owing to the recent growth in the amount of biomedical literature. Biomedical named-entity recognition is an essential prerequisite before effective text mining of biomedical literature can begin. This paper proposes an Active Co-Training (ACT) algorithm for biomedical named-entity recognition. ACT is a semi-supervised learning method in which two classifiers based on two different feature sets iteratively learn from informative examples queried from the unlabeled data. We design a new classification problem to measure the informativeness of an example in the unlabeled data: examples are classified, based on a joint view of the feature sets, as informative or non-informative to both classifiers. To form the training data for this classification problem, we adopt a query-by-committee method, in which the two classifiers constitute one committee that is used on the labeled data to assign an informativeness label to each example. The ACT method outperforms the traditional co-training algorithm in terms of F-measure as well as the number of training iterations needed to build a good classification model. The proposed method efficiently exploits a large amount of unlabeled data by selecting a small number of examples that carry not only useful information but also comprehensive patterns.
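The committee-based selection step can be sketched very simply: an unlabeled example is treated as informative when the two view-specific classifiers disagree on it, since labeling such an example teaches at least one classifier something new. The function and the plain disagreement criterion are illustrative assumptions, not the paper's full informativeness classifier.

```python
def informative_examples(unlabeled, clf_a, clf_b, budget):
    """Query-by-committee selection sketch: pick up to `budget`
    unlabeled examples on which the two classifiers disagree.
    `clf_a` and `clf_b` are hypothetical predict functions that
    return a label for a single example."""
    disagreements = [x for x in unlabeled if clf_a(x) != clf_b(x)]
    return disagreements[:budget]
```

In the real ACT loop these queried examples would be labeled and fed back to both classifiers on each iteration.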
A Survey of QoS Based Routing Protocols for Wireless Sensor Networks
Sumathi, R. ; Srinivas, M.G. ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 589~602
DOI : 10.3745/JIPS.2012.8.4.589
With the increasing demand for real-time applications in Wireless Sensor Networks (WSNs), real-time critical events call for efficient quality-of-service (QoS) based routing for data delivery from the network infrastructure. Designing such a QoS-based routing protocol that meets the reliability and delay guarantees of critical events while preserving energy efficiency is a challenging task. Considerable research has focused on developing robust, energy-efficient QoS-based routing protocols. In this paper, we present the state of the research by summarizing the published work on QoS-based routing protocols and by highlighting the QoS issues being addressed. The performance of QoS-based routing protocols such as SAR, MMSPEED, MCMP, MCBR, and EQSR has also been compared using ns-2 for various parameters.
Design and Simulation of a Flow Mobility Scheme Based on Proxy Mobile IPv6
Choi, Hyon-Young ; Min, Sung-Gi ; Han, Youn-Hee ; Koodli, Rajeev ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 603~620
DOI : 10.3745/JIPS.2012.8.4.603
Proxy Mobile IPv6 (PMIPv6) is a network-based mobility support protocol that does not require Mobile Nodes (MNs) to be involved in mobility support signaling. When multiple interfaces are active in an MN simultaneously, each data flow can be dynamically allocated to, and redirected between, different access networks to adapt to the dynamically changing network status and to balance the workload. Such flow redistribution control is called "flow mobility". In the existing PMIPv6-based flow mobility support, although the MN's logical interface can solve the well-known problems of flow mobility in a heterogeneous network, some missing procedures, such as an MN-derived flow handover, leave PMIPv6-based flow mobility incomplete. In this paper, an enhanced flow mobility support is proposed to realize flow mobility in PMIPv6. The proposed scheme is also based on the MN's logical interface, which hides the physical interfaces from the network layer and above. As new functional modules, a flow interface manager is placed at the MN's logical interface, and a flow binding manager in the Local Mobility Anchor (LMA) is paired with it. Together they manage the flow bindings and select the proper access technology for sending packets. We provide the complete flow mobility procedures, which begin with three different triggering cases: the MN's new connection/disconnection, the LMA's decision, and the MN's request. Simulation using the ns-3 network simulator verifies the proposed procedures, and we show the variation in network throughput caused by network offload using them.
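The role of the flow binding manager can be sketched as a table that maps each flow to the access technology whose physical interface should carry it, with a handover operation that rebinds a flow to another access network. Class and method names here are illustrative assumptions, not identifiers from the paper or the PMIPv6 specification.

```python
class FlowBindingManager:
    """Sketch of an LMA-side flow binding table: maps a flow ID to the
    access technology (e.g., 'wifi' or '3g') that should carry its
    packets, hidden behind the MN's logical interface. Illustrative."""

    def __init__(self):
        self.bindings = {}              # flow_id -> access technology

    def bind(self, flow_id, access_tech):
        """Register a new flow on a given access network."""
        self.bindings[flow_id] = access_tech

    def handover(self, flow_id, new_tech):
        """Flow mobility: redirect an existing flow to another access
        network (triggered by connection change, LMA decision, or
        MN request in the paper's three cases)."""
        self.bindings[flow_id] = new_tech

    def route(self, flow_id, default='wifi'):
        """Select the access technology used to send a flow's packets."""
        return self.bindings.get(flow_id, default)
```

Because the logical interface hides the physical ones, a rebinding like this is invisible to the layers above it.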
A Comparative Study of Estimation by Analogy using Data Mining Techniques
Nagpal, Geeta ; Uddin, Moin ; Kaur, Arvinder ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 621~652
DOI : 10.3745/JIPS.2012.8.4.621
Software estimation provides an inclusive set of directives for software developers, project managers, and management to produce more realistic estimates from deficient, uncertain, and noisy data. A range of estimation models is being explored in industry, as well as in academia for research purposes, but choosing the best model is quite intricate. Estimation by Analogy (EbA) is a form of case-based reasoning, which uses fuzzy logic, grey system theory, or machine-learning techniques for optimization. This research compares the estimation accuracy of some conventional data mining models with that of a hybrid model. The data mining models under consideration include linear regression models, such as ordinary least squares and ridge regression, and nonlinear models, such as neural networks, support vector machines, and multivariate adaptive regression splines. A precise and comprehensible predictive model based on the integration of Grey Relational Analysis (GRA) and regression is introduced and compared. Empirical results show that regression, when used with GRA, gives outstanding results, indicating that the methodology has great potential and can serve as a candidate approach for software effort estimation.
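The GRA side of such a hybrid can be sketched with the standard grey relational grade: each historical project is scored by its similarity to the target project across features, and the most similar analogies inform the estimate. The function is a generic GRA sketch (with the conventional distinguishing coefficient rho = 0.5), not the paper's exact hybrid model.

```python
def grey_relational_grades(target, candidates, rho=0.5):
    """Grey Relational Analysis sketch: score each historical project
    (a candidate feature vector) by similarity to the target project.
    `rho` is the distinguishing coefficient, conventionally 0.5.
    Inputs are assumed to be pre-normalized feature vectors."""
    # absolute feature-wise differences between target and each candidate
    diffs = [[abs(t - c) for t, c in zip(target, cand)] for cand in candidates]
    dmin = min(min(row) for row in diffs)
    dmax = max(max(row) for row in diffs)
    grades = []
    for row in diffs:
        # grey relational coefficient per feature, averaged into a grade
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

A candidate identical to the target gets the maximum grade of 1.0, and grades fall off as feature differences grow.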
Online Recognition of Handwritten Korean and English Characters
Ma, Ming ; Park, Dong-Won ; Kim, Soo Kyun ; An, Syungog ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 653~668
DOI : 10.3745/JIPS.2012.8.4.653
In this study, an improved HMM-based recognition model is proposed for online English and Korean handwritten characters. The pattern elements of the handwriting model are sub-character strokes and ligatures. To deal with handwriting style variations, a modified hierarchical clustering approach is introduced to partition the different writing styles into several classes. For each English letter and each primitive grapheme in Korean characters, one HMM that models the temporal and spatial variability of the handwriting is constructed per class. The HMMs of Korean graphemes are then concatenated to form the Korean character models. Recognition of handwritten characters is implemented by a modified level-building algorithm, which incorporates the Korean character combination rules within an efficient network search procedure. Owing to the limitations of the HMM-based method, a post-processing procedure that takes global and structural features into account is proposed. Experiments showed that the proposed recognition system achieves a high writer-independent recognition rate on unconstrained samples of both English and Korean characters. A comparison with other HMM-based recognition schemes was also performed to evaluate the system.
ECG Denoising by Modeling Wavelet Sub-Band Coefficients using Kernel Density Estimation
Ardhapurkar, Shubhada ; Manthalkar, Ramchandra ; Gajre, Suhas ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 669~684
DOI : 10.3745/JIPS.2012.8.4.669
Discrete wavelet transforms are widely preferred in biomedical signal processing for denoising, feature extraction, and compression. This paper presents a new denoising method based on modeling the discrete wavelet coefficients of the ECG in selected sub-bands with kernel density estimation. The modeling provides a statistical distribution of signal information and noise. A Gaussian kernel with bounded support is used for modeling the sub-band coefficients, and thresholds are estimated by placing a sliding window on a normalized cumulative density function. We evaluated this approach on offline noisy ECG records from the Cardiovascular Research Centre of the University of Glasgow and on records from the MIT-BIH Arrhythmia database. Results show that the proposed technique has a more reliable physical basis and improves the Signal-to-Noise Ratio (SNR) and the Percentage RMS Difference (PRD). The morphological information of the ECG signals is found to be unaffected by the denoising, as quantified by the mean square error (MSE) between the feature vectors of the original and denoised signals; MSE values are less than 0.05 in most cases.
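The density-based thresholding idea can be sketched as follows: estimate the density of the sub-band coefficients with a Gaussian kernel, accumulate it into a normalized cumulative density, and take the point covering a chosen fraction of the mass as the threshold. The `keep` fraction, bandwidth, and grid size are illustrative choices of this sketch, not values from the paper.

```python
import math

def gaussian_kde(samples, x, bandwidth=0.1):
    """Kernel density estimate at point x from sub-band coefficients."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / (n * bandwidth * math.sqrt(2 * math.pi))

def threshold_from_cdf(samples, keep=0.95, grid=200):
    """Sketch of threshold selection: walk along the normalized
    cumulative density until `keep` of the coefficient mass is
    covered; larger coefficients are treated as signal, the rest
    shrunk as noise. Parameters are illustrative assumptions."""
    lo, hi = min(samples), max(samples)
    xs = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    dens = [gaussian_kde(samples, x) for x in xs]
    total = sum(dens)
    cum = 0.0
    for x, d in zip(xs, dens):
        cum += d
        if cum / total >= keep:
            return abs(x)
    return abs(hi)
```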
Evaluation of the Image Backtrack-Based Fast Direct Mode Decision Algorithm
Choi, Yungho ; Park, Neungsoo ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 685~692
DOI : 10.3745/JIPS.2012.8.4.685
The bi-directional prediction of B frames and the DIRECT mode coding of the H.264 video compression standard necessitate a complex mode decision process, resulting in a long computation time. To make H.264 feasible, this paper proposes an image backtrack-based fast direct mode decision (IBFD) algorithm and evaluates the performance of two promising fast algorithms, AFDM and IBFD. Evaluation results show that the IBFD algorithm can determine DIRECT mode macroblocks with 13% higher accuracy than the AFDM. Furthermore, IBFD is shown to reduce the motion estimation time of B frames by up to 23% with negligible quality degradation.
Machine Learning Based Keyphrase Extraction: Comparing Decision Trees, Naïve Bayes, and Artificial Neural Networks
Sarkar, Kamal ; Nasipuri, Mita ; Ghose, Suranjan ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 693~712
DOI : 10.3745/JIPS.2012.8.4.693
The paper presents three machine learning based keyphrase extraction methods that respectively use Decision Trees, Naïve Bayes, and Artificial Neural Networks for keyphrase extraction. We consider keyphrases to be phrases that consist of one or more words and that represent the important concepts in a text document. The three machine learning based keyphrase extraction methods used for experimentation have been compared with a publicly available keyphrase extraction system called KEA. The experimental results show that the Neural Network based keyphrase extraction method outperforms the two other methods, which use Decision Trees and Naïve Bayes. The results also show that the Neural Network based method performs better than KEA.
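Such learned extractors typically score candidate phrases on a small set of features before a classifier decides keyphrase or not. A minimal sketch of two classic features, TF-IDF and first-occurrence position (the pair used by KEA), is shown below; the function name and the precomputed-index arguments are illustrative assumptions, not the paper's exact feature set.

```python
import math
import re

def keyphrase_features(phrase, document, corpus_doc_count, doc_freq):
    """Sketch of two classic candidate-phrase features: TF-IDF and
    normalized first-occurrence position. `doc_freq` (number of corpus
    documents containing the phrase) and `corpus_doc_count` are assumed
    to come from a precomputed index."""
    words = re.findall(r"\w+", document.lower())
    text = " ".join(words)
    tf = text.count(phrase.lower())                  # term frequency
    idf = math.log((corpus_doc_count + 1) / (doc_freq + 1))
    pos = text.find(phrase.lower())
    first_occurrence = pos / max(1, len(text))       # earlier = smaller
    return tf * idf, first_occurrence
```

A Decision Tree, Naïve Bayes model, or neural network would then be trained on such feature vectors with author-assigned keyphrases as positive labels.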
Enhanced FFD-AABB Collision Algorithm for Deformable Objects
Jeon, JaeHong ; Choi, Min-Hyung ; Hong, Min ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 713~720
DOI : 10.3745/JIPS.2012.8.4.713
Unlike FEM (the Finite Element Method), which provides an accurate deformation of soft objects, FFD (Free Form Deformation) based methods have been widely used for a quick and responsive representation of deformable objects in real-time applications such as computer games, animations, and simulations. The FFD-AABB (Free Form Deformation Axis-Aligned Bounding Box) algorithm was suggested to address the collision handling problem between deformable objects at an interactive rate. This paper proposes an enhanced FFD-AABB algorithm that improves the simulation frame rate by adding a bounding sphere based collision test between 3D deformable objects. We provide a comparative analysis with previous methods, and the proposed method shows about an 85% performance improvement.
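The bounding-sphere test that makes this broad phase cheap is a one-line geometric check: two objects can only collide if the distance between their sphere centers is at most the sum of the radii. A minimal sketch (illustrative, not the paper's implementation):

```python
def spheres_collide(c1, r1, c2, r2):
    """Bounding-sphere broad-phase test: two deformable objects can
    only collide if their bounding spheres overlap, i.e. the squared
    center distance is at most (r1 + r2)^2. Comparing squared values
    avoids a sqrt. Centers are (x, y, z) tuples."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2
```

Pairs that fail this test can skip the more expensive FFD-AABB checks entirely, which is where the frame-rate gain comes from.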
Proactive: Comprehensive Access to Job Information
Lee, Danielle ; Brusilovsky, Peter ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 721~738
DOI : 10.3745/JIPS.2012.8.4.721
The Internet has become an increasingly important source for finding the right employees, so more and more companies post their job openings on the Web. The large amount and dynamic nature of career recruiting information causes information overload for job seekers. To assist Internet users in searching for the right job, a range of research and commercial systems has been developed over the past 10 years. Surprisingly, the majority of existing job search systems support just one, rarely two, ways of accessing information. In contrast, our work explores the value of comprehensive access to job information in a single system (i.e., a system that supports multiple ways). We designed Proactive, a recommendation system providing comprehensive and personalized information access. To serve the varied needs of users, Proactive offers four information retrieval methods: a navigable list of jobs, keyword-based search, implicit preference-based recommendations, and explicit preference-based recommendations. This paper introduces Proactive and reports the results of a study focusing on the experimental evaluation of these methods. The goal of the study was to assess whether all of the methods are necessary for users to find relevant jobs and to what extent the different methods can meet different users' information requirements.
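The "comprehensive access" idea, one entry point dispatching to several retrieval methods, can be sketched as below. Only browsing, keyword search, and a trivial implicit-preference ranking are shown; all names and the scoring heuristic are illustrative assumptions, not Proactive's actual design.

```python
def find_jobs(jobs, query=None, profile=None, browse=False):
    """Sketch of a single entry point exposing multiple access methods:
    a navigable list, keyword search, and an implicit preference-based
    ranking. `profile` is a set of keywords inferred from the user's
    past behavior (hypothetical). `jobs` is a list of dicts."""
    if browse or (query is None and profile is None):
        # navigable list: alphabetical browsing
        return sorted(jobs, key=lambda j: j["title"])
    if query is not None:
        # keyword-based search
        return [j for j in jobs if query.lower() in j["title"].lower()]
    # implicit preference-based recommendation: rank by keyword overlap
    score = lambda j: sum(k in j["title"].lower() for k in profile)
    return sorted(jobs, key=score, reverse=True)
```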
Performance Anomaly of the IEEE 802.11 DCF in Different Frame Error Rate Conditions
Kang, Koohong ;
Journal of Information Processing Systems, volume 8, issue 4, 2012, Pages 739~748
DOI : 10.3745/JIPS.2012.8.4.739
We propose an analytic model to compute a station's saturated throughput and packet delay in the IEEE 802.11 DCF (Distributed Coordination Function) when frame transmission error rates in the channel differ from station to station. Our analytic model shows that a station experiencing worse frame error rates than the others suffers severe performance degradation, falling below its deserved throughput and delay performance. The 802.11 DCF adopts an exponential back-off scheme: when some stations suffer from high frame error rates, their back-off stages increase, so the other stations benefit from smaller collision probabilities. This effect is then applied recursively, further degrading the performance of the victim stations. In particular, we show that the performance is considerably degraded even if the frame error rate of the victim station satisfies the receiver input level sensitivity specified in the IEEE 802.11 standard. We also verify the analytic results with OPNET simulations.
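The back-off mechanism behind the anomaly can be sketched directly: each failed transmission (collision or frame error) doubles the contention window from CWmin up to CWmax, and the station then waits a uniform number of slots, so error-prone stations wait systematically longer. The CWmin/CWmax values below follow common 802.11 parameters but are illustrative for this sketch.

```python
import random

def backoff_slots(stage, cw_min=16, cw_max=1024):
    """Binary exponential backoff sketch: after `stage` consecutive
    failed transmissions, the contention window is
    CW = min(cw_min * 2**stage, cw_max), and the station waits a
    uniform number of slots in [0, CW - 1]."""
    cw = min(cw_min * (2 ** stage), cw_max)
    return random.randrange(cw)
```

A station whose frames are lost to channel errors climbs to high stages and draws from a window up to CWmax, which is exactly how the healthy stations inherit the freed airtime.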