KIISE Transactions on Computing Practices
Journal Basic Information
Publisher : Korean Institute of Information Scientists and Engineers
Volume & Issues
Volume 21, Issue 12 - Dec 2015
Volume 21, Issue 11 - Nov 2015
Volume 21, Issue 10 - Oct 2015
Volume 21, Issue 9 - Sep 2015
Volume 21, Issue 8 - Aug 2015
Volume 21, Issue 7 - Jul 2015
Volume 21, Issue 6 - Jun 2015
Volume 21, Issue 5 - May 2015
Volume 21, Issue 4 - Apr 2015
Volume 21, Issue 3 - Mar 2015
Volume 21, Issue 2 - Feb 2015
Volume 21, Issue 1 - Jan 2015
Recommendation of Best Empirical Route Based on Classification of Large Trajectory Data
Lee, Kye Hyung ; Jo, Yung Hoon ; Lee, Tea Ho ; Park, Heemin ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 101~108
DOI : 10.5626/KTCP.2015.21.2.101
This paper presents the implementation of a system that recommends empirical best routes based on the classification of large trajectory data. As location-based services become widely used, we expect location and trajectory data to grow into big data, from which the best empirical routes can be extracted. Large trajectory data are clustered into groups of similar routes using the Hadoop MapReduce framework. The clustered route groups are stored and managed by a DBMS, which supports rapid responses to end-user requests. We aim to find the best routes based on collected real-world data rather than the ideal shortest paths on maps. We implemented 1) an Android application that collects trajectories from users, 2) an Apache Hadoop MapReduce program that clusters large trajectory data, and 3) a service application that queries start-destination pairs from a web server and displays the recommended routes on mobile phones. We validated our approach using real data collected over five days and compared the results with commercial navigation systems. Experimental results show that the empirical best routes are better than those recommended by commercial navigation systems.
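The cluster-then-pick idea in the abstract can be sketched as a toy map/reduce pass over trajectories. This is a minimal illustration with hypothetical data and function names, not the paper's actual Hadoop job: the map step keys each trajectory by its rounded start and destination, and the reduce step keeps the fastest observed route per key.

```python
from collections import defaultdict

# Hypothetical trajectories: (route_id, [(lat, lon), ...], travel_time_minutes)
trajectories = [
    ("t1", [(37.50, 127.00), (37.51, 127.02), (37.52, 127.04)], 18),
    ("t2", [(37.50, 127.00), (37.50, 127.03), (37.52, 127.04)], 25),
    ("t3", [(37.50, 127.00), (37.51, 127.01), (37.52, 127.04)], 15),
]

def map_phase(traj):
    """Emit a (start, destination) key so similar routes land in one group."""
    route_id, points, minutes = traj
    start = (round(points[0][0], 2), round(points[0][1], 2))
    dest = (round(points[-1][0], 2), round(points[-1][1], 2))
    return (start, dest), (route_id, minutes)

def reduce_phase(grouped):
    """Pick the fastest observed route for each start-destination pair."""
    return {key: min(vals, key=lambda rv: rv[1]) for key, vals in grouped.items()}

# Shuffle step: group mapped values by key, as the MapReduce runtime would.
grouped = defaultdict(list)
for traj in trajectories:
    key, value = map_phase(traj)
    grouped[key].append(value)

best_routes = reduce_phase(grouped)  # empirical best route per O-D pair
```

In the paper's setting, the reduce output would be loaded into the DBMS so the web server can answer start-destination queries without rescanning the raw trajectories.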
A Reexamination on the Influence of Fine-particle between Districts in Seoul from the Perspective of Information Theory
Lee, Jaekoo ; Lee, Taehoon ; Yoon, Sungroh ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 109~114
DOI : 10.5626/KTCP.2015.21.2.109
This paper presents a computational model of the transfer of airborne fine particles, analyzing the similarities and influences among the 25 districts in Seoul by quantifying time series data collected from each district. The properties of each district are derived from a model of the time series of fine-particle concentrations, and edge weights are computed as the transfer entropies between all pairs of districts. We applied a modularity-based graph clustering technique to detect communities among the 25 districts. The results indicate that the discovered clusters correspond to groups with high transfer entropy, geographical adjacency, or high in-between traffic volumes. We believe this approach can be further extended to discover significant flows of other indicators of environmental pollution.
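The pairwise edge weights described above are transfer entropies. A minimal discrete estimator (history length 1, binned series) can be written as below; this is a generic textbook formulation, not the paper's exact estimator, and the variable names are mine.

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Discrete transfer entropy from src to dst (history length 1), in bits.

    TE = sum over (x_next, x_prev, y_prev) of
         p(x_next, x_prev, y_prev) * log2[ p(x_next|x_prev, y_prev) / p(x_next|x_prev) ]
    """
    triples = list(zip(dst[1:], dst[:-1], src[:-1]))
    n = len(triples)
    c_xyz = Counter(triples)                                # (x_next, x_prev, y_prev)
    c_xx = Counter((x1, x0) for x1, x0, _ in triples)       # (x_next, x_prev)
    c_xy = Counter((x0, y0) for _, x0, y0 in triples)       # (x_prev, y_prev)
    c_x = Counter(x0 for _, x0, _ in triples)               # (x_prev,)
    te = 0.0
    for (x1, x0, y0), c in c_xyz.items():
        # Ratio of conditional probabilities, expressed with raw counts.
        te += (c / n) * log2((c * c_x[x0]) / (c_xx[(x1, x0)] * c_xy[(x0, y0)]))
    return te

# dst simply repeats src with a one-step lag, so src carries real information.
src = [0, 1, 1, 0, 1, 0, 0, 1]
dst = [0] + src[:-1]
```

Real concentration series would first be discretized (e.g. binned by quantile) before being fed to such an estimator; the paper then thresholds or clusters the resulting 25x25 weight matrix.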
Proposal : Improvement of Testing Frontier Capability Assessment Model through Comparing International Standards in Software Product and Software Testing Process Perspective
Yoon, Hyung-Jin ; Choi, Jin-Young ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 115~120
DOI : 10.5626/KTCP.2015.21.2.115
The Testing Frontier Capability Assessment Model (TCAM) is based on ISO/IEC 9126, TMMi, and TPI. Since ISO/IEC 9126, TMMi, and TPI were created over 10 years ago, TCAM cannot assess and analyze the capability of small businesses that employ new software development methods or processes, such as Agile, TDD (Test-Driven Development), app software, and web software. In this paper, a method to address this problem is proposed. The paper is composed of the following sections: 1) a review of ISO/IEC 9126, ISO/IEC 25010, and ISO/IEC/IEEE 29119 part 2; 2) a review of TCAM; 3) a comparison and analysis of ISO/IEC 9126, ISO/IEC 25010, and TCAM from the software product quality perspective; 4) a comparison and analysis of ISO/IEC/IEEE 29119 part 2 and TCAM; and 5) a proposal for the improvement of TCAM.
Recommendation Algorithm by Item Classification Using Preference Difference Metric
Park, Chan-Soo ; Hwang, Taegyu ; Hong, Junghwa ; Kim, Sung Kwon ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 121~125
DOI : 10.5626/KTCP.2015.21.2.121
In recent years, research on collaborative filtering-based recommendation systems has emphasized the accuracy of rating predictions, which has increased computation time. As a result, such systems have diverged from their original purpose of making quick recommendations. In this paper, we propose a recommendation algorithm that uses a preference difference metric to reduce computation time while maintaining adequate performance. The system recommends items according to their preference classification.
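The abstract does not define its preference difference metric, so the sketch below uses the closest well-known scheme of this kind, the Slope One predictor: the "difference" between two items is the average signed rating gap over users who rated both, and a prediction shifts each of the user's known ratings by that gap. The toy data and names are illustrative assumptions.

```python
ratings = {  # user -> {item: rating}, toy data
    "u1": {"A": 5, "B": 4, "C": 1},
    "u2": {"A": 4, "B": 4, "C": 2},
    "u3": {"A": 5, "B": 5},
}

def deviation(item_i, item_j):
    """Average signed rating difference item_i - item_j over common raters."""
    diffs = [r[item_i] - r[item_j] for r in ratings.values()
             if item_i in r and item_j in r]
    return sum(diffs) / len(diffs) if diffs else 0.0

def predict(user, target):
    """Shift each of the user's ratings by its deviation from the target item."""
    preds = [rating + deviation(target, item)
             for item, rating in ratings[user].items() if item != target]
    return sum(preds) / len(preds)
```

The deviations can be precomputed once per item pair, which is what keeps prediction cheap compared with similarity searches over all users, in the same spirit as the computation-time goal stated above.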
Randomness based Static Wear-Leveling for Enhancing Reliability in Large-scale Flash-based Storage
Choi, Kilmo ; Kim, Sewoog ; Choi, Jongmoo ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 126~131
DOI : 10.5626/KTCP.2015.21.2.126
As flash-based storage systems are actively employed in large-scale servers and data centers, reliability has become indispensable. One promising technique for enhancing reliability is static wear-leveling, which distributes erase operations evenly among blocks to prolong the lifespan of storage systems. However, as capacity increases, the processing overhead of this technique becomes non-trivial, mainly due to the search for the block whose erase count is minimum (or maximum) among all blocks. To reduce this overhead, we introduce a new randomized block selection method for static wear-leveling. Specifically, instead of an exhaustive search, it chooses n blocks at random and selects the block with the maximal/minimal erase count from the chosen set. Our experimental results reveal that wear-leveling effects are already obtained when n is 2, and for n of 4 or more, the effect is close to that of traditional static wear-leveling. To evaluate the processing overhead quantitatively, the scheme was implemented on an FPGA board, where an overhead reduction of more than 3 times was observed. This implies that the proposed scheme is as effective as traditional static wear-leveling while reducing overhead.
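The randomized selection step is easy to state in a few lines. The sketch below (my own simplification, not the paper's FTL code) samples n candidate blocks and returns the least-erased one, then runs a small simulation showing that even n = 2 keeps erase counts tightly balanced, as the "power of two choices" effect would suggest.

```python
import random

def pick_min_erase_block(erase_counts, n=2, rng=random):
    """Sample n candidate blocks and return the one with the fewest erases,
    avoiding a full scan over every block."""
    candidates = rng.sample(range(len(erase_counts)), n)
    return min(candidates, key=lambda b: erase_counts[b])

# Simulation: 10,000 erases over 64 blocks, always erasing the picked block.
counts = [0] * 64
rng = random.Random(42)
for _ in range(10_000):
    counts[pick_min_erase_block(counts, n=2, rng=rng)] += 1
spread = max(counts) - min(counts)  # stays small despite no exhaustive search
```

With n equal to the total number of blocks, the sampled set covers everything and the function degenerates to the traditional exhaustive minimum search, which matches the paper's observation that larger n converges to conventional static wear-leveling.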
Analysis of Timed Automata Model-based Testing Approaches and Case Study
Kim, Hanseok ; Jee, Eunkyoung ; Bae, Doo-Hwan ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 132~137
DOI : 10.5626/KTCP.2015.21.2.132
A real-time system is a system whose behavior depends not only on the input but also on the timing of the input. Timed automata are a widely used model for real-time system modeling and analysis. Model-based testing checks whether the system under test (SUT) works according to the model specifications, using test cases generated from models that represent the software requirements. In this paper, a case study was performed applying the timed automata-based testing tools UPPAAL-TRON, UPPAAL-COVER, and SYMBOLRT to the same system. The testing approaches and tools are then compared based on the results of the case study.
Parallelization and Performance Optimization of the Boyer-Moore Algorithm on GPU
Jeong, Yosang ; Tran, Nhat-Phuong ; Lee, Myungho ; Nam, Dukyun ; Kim, Jik-Soo ; Hwang, Soonwook ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 138~143
DOI : 10.5626/KTCP.2015.21.2.138
The Boyer-Moore algorithm is a single-pattern string matching algorithm widely used in applications such as computer and internet security and bioinformatics. The algorithm is computationally demanding and requires high-performance parallel processing. In this paper, we propose a parallelization and performance optimization methodology for the BM algorithm on a GPU. Our methodology adopts an algorithmic cascading technique, which significantly reduces the mapping overhead for the threads participating in the parallel string matching. It also leads to efficient utilization of the GPU's multithreading capability, improving the load balancing among threads. Our experimental results show that this approach achieves a speedup of up to 45 times compared with serial execution.
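For reference, the serial baseline being parallelized looks like the following Horspool simplification of Boyer-Moore (bad-character shifts only): it is the per-thread work, not the paper's CUDA kernel, and the cascading of multiple text chunks per thread is not shown.

```python
def boyer_moore_horspool(text, pattern):
    """Serial Boyer-Moore-Horspool search returning all match positions.
    The GPU version assigns many such chunk searches to threads."""
    m, n = len(pattern), len(text)
    if m == 0 or n < m:
        return []
    # Bad-character table: shift distance keyed by the character under
    # the last pattern position; unseen characters shift the full length m.
    shift = {}
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i
    matches, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            matches.append(pos)
        pos += shift.get(text[pos + m - 1], m)  # skip ahead by the table
    return matches
```

The skip table is what makes Boyer-Moore sublinear on average; on a GPU, each thread would run this loop over its own chunk (plus an overlap of m-1 characters so matches spanning chunk boundaries are not lost).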
Domain Question Answering System
Yoon, Seunghyun ; Rhim, Eunhee ; Kim, Deokho ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 144~147
DOI : 10.5626/KTCP.2015.21.2.144
Question Answering (QA) services provide exact answers to user questions written in natural language. This research focuses on how to build a QA system for a specific domain. We present the online and offline architecture of the targeted-domain QA system, including domain detection, question analysis, reasoning, information retrieval, filtering, answer extraction, re-ranking, and answer generation, as well as data preparation. Test results on an official Frequently Asked Questions (FAQ) set showed 68% top-1 accuracy and 77% top-5 accuracy. The contribution of each component, such as the question analysis system, document search engine, knowledge graph engine, and re-ranking module, to the final answer is also presented.
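The staged pipeline named above (question analysis, retrieval, re-ranking, answer generation) can be caricatured in a few lines over a toy FAQ set. Every stage here is a deliberately naive stand-in: token overlap replaces the search engine, and there is no reasoning or knowledge graph step.

```python
FAQ = {  # toy FAQ set; the paper's system indexes an official FAQ corpus
    "how do i reset my password": "Open Settings > Account > Reset password.",
    "how do i change the language": "Open Settings > General > Language.",
}

def analyze(question):
    """Question analysis, reduced to normalization and tokenization."""
    return set(question.lower().rstrip("?").split())

def retrieve(tokens, k=5):
    """Retrieval + re-ranking, reduced to sorting by token overlap."""
    scored = sorted(FAQ, key=lambda q: len(tokens & analyze(q)), reverse=True)
    return scored[:k]  # the top-k list is what top-1/top-5 accuracy scores

def answer(question):
    """Answer generation, reduced to returning the top entry's stored answer."""
    return FAQ[retrieve(analyze(question))[0]]
```

The top-1 vs. top-5 accuracy figures in the abstract correspond to whether the correct FAQ entry appears first in, or anywhere within, the `retrieve(..., k=5)` list.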
A Practical Study on Code Static Analysis through Open Source based Tool Chains
Kang, Geon-Hee ; Kim, R. Young Chul ; Yi, Geun Sang ; Kim, Young Soo ; Park, Yong. B. ; Son, Hyun Seung ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 148~153
DOI : 10.5626/KTCP.2015.21.2.148
Domestic software industries focus on high-quality development and testing processes, maturity measurement, and so on, but real industrial practice still centers on code. Most existing legacy systems have not kept their designs up to date, and repeated patching of the original code has greatly increased its complexity. To address this problem, we adopt a code visualization technique, which is important for reducing the code complexity among modules. To this end, we suggest a tool-chaining method based on existing open source software tools, which extends NIPA's Software Visualization techniques applied to procedural languages. In addition, bad couplings identified by the quality measurement indicators within the code visualization should be fixed through refactoring. As a result, we can apply reverse engineering to legacy code, that is, from the program via the model to the architecture, and thereby produce high-quality software.
Smartphone-User Interactive based Self Developing Place-Time-Activity Coupled Prediction Method for Daily Routine Planning System
Lee, Beom-Jin ; Kim, Jiseob ; Ryu, Je-Hwan ; Heo, Min-Oh ; Kim, Joo-Seuk ; Zhang, Byoung-Tak ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 154~159
DOI : 10.5626/KTCP.2015.21.2.154
Over the past few years, user needs in the smartphone application market have shifted from diversity toward intelligence. Here, we propose a novel cognitive agent that plans the daily routines of users using lifelog data collected by individuals' smartphones. The proposed method first employs a DPGMM (Dirichlet Process Gaussian Mixture Model) to automatically extract the users' POIs (Points of Interest) from the lifelog data. After extraction, the POIs and other meaningful features, such as GPS coordinates and the user's activity labels extracted from the log data, are used to learn the patterns of the user's daily routine with a POMDP (Partially Observable Markov Decision Process). To determine the significant patterns among the user's time-dependent patterns, the SNS application Foursquare was used to record the locations visited by the user and the activities the user performed. The method was evaluated by predicting the daily routines of seven users with 3,300 feedback data points. Experimental results showed that a daily routine schedule can be established after seven days of lifelog and feedback data collection, demonstrating the potential of the new place-time-activity coupled daily routine planning method in the intelligent application market.
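The first stage, extracting POIs from raw GPS fixes, can be illustrated without the full DPGMM machinery. The sketch below substitutes a crude grid-density heuristic for the DPGMM (a deliberate simplification; the paper fits a nonparametric mixture instead), using made-up coordinates.

```python
from collections import Counter

# Hypothetical GPS lifelog: two frequently visited places plus a brief detour.
fixes = ([(37.5665, 126.9780)] * 30    # e.g. a workplace
         + [(37.4979, 127.0276)] * 20  # e.g. a home
         + [(37.5500, 126.8000)] * 2)  # passing through, not a POI

def extract_pois(fixes, cell=0.01, min_visits=5):
    """Bucket fixes into ~1 km grid cells and keep dense cells as POIs.
    A DPGMM would instead infer the number and shape of clusters itself."""
    cells = Counter((round(lat / cell), round(lon / cell)) for lat, lon in fixes)
    return {c for c, n in cells.items() if n >= min_visits}

pois = extract_pois(fixes)  # two dense cells survive; the detour is discarded
```

In the paper, the extracted POIs, together with time and activity labels, become the observation space over which the POMDP policy for routine planning is learned.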
Adaptive Speech Emotion Recognition Framework Using Prompted Labeling Technique
Bang, Jae Hun ; Lee, Sungyoung ;
KIISE Transactions on Computing Practices, volume 21, issue 2, 2015, Pages 160~165
DOI : 10.5626/KTCP.2015.21.2.160
Traditional speech emotion recognition techniques recognize emotions using a general training model built from the voices of many people. These techniques cannot accurately account for individual speech characteristics, so the recognition results vary greatly from person to person. This paper proposes an adaptive speech emotion recognition framework that uses the user's immediate feedback, obtained through a prompted labeling technique, to build a personal adaptive recognition model and apply it to each user in a mobile device environment. The proposed framework recognizes emotions with this personalized recognition model. In three comparative experiments, the proposed framework was evaluated as better than traditional techniques. It can be applied to healthcare, emotion monitoring, and personalized services.
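The adapt-from-feedback loop can be sketched with a nearest-centroid classifier: start from a general model, and whenever the device prompts the user and receives a confirmed label, fold that feature vector into the user's personal centroid. The two-dimensional feature space and class names are toy assumptions; the paper's actual classifier and acoustic features are not specified here.

```python
general_model = {"happy": [0.8, 0.2], "sad": [0.2, 0.7]}  # toy feature space

class AdaptiveRecognizer:
    def __init__(self):
        # Start from the general model; adapt per user via prompted labels.
        self.centroids = {k: list(v) for k, v in general_model.items()}
        self.counts = {k: 1 for k in general_model}

    def predict(self, features):
        """Return the emotion whose centroid is nearest (squared distance)."""
        return min(self.centroids,
                   key=lambda e: sum((a - b) ** 2
                                     for a, b in zip(self.centroids[e], features)))

    def feedback(self, features, label):
        """Prompted label: running-mean update of the confirmed class centroid."""
        n = self.counts[label] = self.counts[label] + 1
        c = self.centroids[label]
        self.centroids[label] = [a + (b - a) / n for a, b in zip(c, features)]
```

Each user's recognizer drifts toward that user's own voice characteristics over time, which is the personalization effect the framework is built around.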