REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Journal of KIISE
Journal Basic Information
Publisher : Korean Institute of Information Scientists and Engineers
Volume & Issues
Volume 42, Issue 12 - Dec 2015
Volume 42, Issue 11 - Nov 2015
Volume 42, Issue 10 - Oct 2015
Volume 42, Issue 9 - Sep 2015
Volume 42, Issue 8 - Aug 2015
Volume 42, Issue 7 - Jul 2015
Volume 42, Issue 6 - Jun 2015
Volume 42, Issue 5 - May 2015
Volume 42, Issue 4 - Apr 2015
Volume 42, Issue 3 - Mar 2015
Volume 42, Issue 2 - Feb 2015
Volume 42, Issue 1 - Jan 2015
The Least-Dirty-First CLOCK Replacement Policy for Phase-Change Memory based Swap Devices
Yoo, Seunghoon; Lee, Eunji; Bahn, Hyokyung
Journal of KIISE, volume 42, issue 9, 2015, Pages 1071~1077
DOI : 10.5626/JOK.2015.42.9.1071
In this paper, we adopt PCM (phase-change memory) as a virtual memory swap device and present a new page replacement policy that considers the characteristics of PCM. Specifically, we aim to reduce the write traffic to PCM by considering the dirtiness of pages when making a replacement decision. The proposed policy tracks the dirtiness of a page at the granularity of a sub-page and replaces the least dirty page among the pages not recently used. Experimental results show that the proposed policy reduces the amount of data written to PCM by 22.9% on average and up to 73.7% compared to CLOCK. It also extends the lifespan of PCM by 49.0% and reduces the energy consumption of PCM by 3.0% on average.
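The victim-selection rule described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the page representation (a CLOCK reference bit plus a per-page count of dirty sub-pages) is an assumption.

```python
# Hypothetical sketch of least-dirty-first CLOCK victim selection.
# Among pages not recently used, evict the one with the fewest dirty
# sub-pages to minimize the data written back to PCM.

def select_victim(pages):
    """pages: list of dicts with 'ref' (bool) and 'dirty_subpages' (int)."""
    candidates = []
    for p in pages:
        if p['ref']:
            p['ref'] = False          # recently used: clear bit, second chance
        else:
            candidates.append(p)      # not recently used: eviction candidate
    if not candidates:                # every page was referenced; fall back
        candidates = pages
    return min(candidates, key=lambda p: p['dirty_subpages'])
```

Plain CLOCK would evict the first page found with a cleared reference bit; the sketch instead scans all such pages and picks the least dirty one.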
Improving the Lifetime of NAND Flash-based Storages by Min-hash Assisted Delta Compression Engine
Kwon, Hyoukjun; Kim, Dohyun; Park, Jisung; Kim, Jihong
Journal of KIISE, volume 42, issue 9, 2015, Pages 1078~1089
DOI : 10.5626/JOK.2015.42.9.1078
In this paper, we propose the Min-hash Assisted Delta-compression Engine (MADE) to improve the lifetime of NAND flash-based storages at the device level. MADE effectively reduces the write traffic to NAND flash through a novel delta compression scheme. The delta compression performance was optimized by introducing a min-hash-based locality-sensitive hash (LSH) and efficiently combining it with our delta compression method. We also developed a delta encoding technique whose functionality is equivalent to deduplication and lossless compression combined. The results of our experiment show that MADE reduces the amount of data written to NAND flash by up to 90%, outperforming a simple combination of deduplication and lossless compression schemes by 12% on average.
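The min-hash idea behind MADE can be illustrated with a short sketch: two data blocks whose min-hash signatures agree on many positions are likely similar, making one a good delta-compression base for the other. The hash family (seeded MD5) and signature length below are assumptions for illustration, not the paper's parameters.

```python
import hashlib

def minhash_signature(chunks, num_hashes=16):
    """chunks: iterable of byte features of a block; returns a min-hash
    signature (one minimum per seeded hash function)."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(int(hashlib.md5(bytes([seed]) + c).hexdigest(), 16)
                       for c in chunks))
    return sig

def estimated_similarity(sig_a, sig_b):
    """Fraction of matching signature positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

A device-level engine would compare incoming blocks' signatures against stored ones and delta-encode against the closest match instead of writing the full block.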
Architecture of Virtual Cloud Bank for Mediating Cloud Services based on Cloud User Requirements
Park, Joonseok; An, Youngmin; Yeom, Keunhyuk
Journal of KIISE, volume 42, issue 9, 2015, Pages 1090~1099
DOI : 10.5626/JOK.2015.42.9.1090
The concept of Cloud Service Brokerage (CSB) has been introduced as a result of the expansion of the cloud-computing paradigm. Cloud services that provide similar functionality are registered with a CSB, which intermediates cloud services between cloud users and providers. However, the cloud providers differ in the price and performance they offer, so cloud users have difficulty finding suitable services to use. Therefore, a CSB needs an approach for matching cloud services to the requirements of cloud users. In this paper, we propose a Virtual Cloud Bank (VCB) architecture that includes both a Service Analysis Model (SAM), which can be used to specify and analyze various cloud services, and a requirement analysis method, which can be used to collect and analyze cloud user requirements. The proposed VCB architecture can serve as a reference architecture for providing user-centric cloud services.
Robust Anti Reverse Engineering Technique for Protecting Android Applications using the AES Algorithm
Kim, JungHyun; Lee, Kang Seung
Journal of KIISE, volume 42, issue 9, 2015, Pages 1100~1108
DOI : 10.5626/JOK.2015.42.9.1100
Classes.dex, the executable file for the Android operating system, is in Java bytecode format, so anyone can analyze and modify its source code using reverse engineering. Because of this, many Android applications that use classes.dex as their executable file have been illegally copied and distributed, causing damage to developers and the software industry. To counter such ill-intended behavior, this paper proposes a technique that encrypts the classes.dex file with the AES (Advanced Encryption Standard) algorithm and decrypts it at run time, thereby preventing reverse engineering of the application. To further harden the file against reverse engineering attacks, the encryption and decryption keys for classes.dex are hash values obtained by feeding a combination of salt values into a hash equation. The experiments demonstrated that the proposed technique is effective in preventing the illegal duplication of classes.dex-based Android applications and reverse engineering attacks. As a result, the proposed technique can protect the source of an application and also prevent the spread of malicious code through repackaging attacks.
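The salted key-derivation step can be sketched as follows. The paper's exact hash equation and salt combination are not given here, so PBKDF2 over SHA-256 is used as a stand-in; the function name and inputs are illustrative assumptions.

```python
import hashlib

def derive_dex_key(app_secret: bytes, salt: bytes) -> bytes:
    """Derive a 256-bit AES key from a salted hash of an app secret.
    PBKDF2-HMAC-SHA256 stands in for the paper's salt-combined hash
    equation; the resulting key would encrypt/decrypt classes.dex."""
    return hashlib.pbkdf2_hmac('sha256', app_secret, salt, 100_000, dklen=32)
```

The actual AES encryption of the classes.dex bytes would then use this key with a standard AES library; deriving the key from salted hashes (rather than storing it) is what resists static extraction of the key from the APK.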
SPARQL Query Processing in Distributed In-Memory System
Jagvaral, Batselem; Lee, Wangon; Kim, Kang-Pil; Park, Young-Tack
Journal of KIISE, volume 42, issue 9, 2015, Pages 1109~1116
DOI : 10.5626/JOK.2015.42.9.1109
In this paper, we propose a query processing approach that uses Spark functional programming and a distributed in-memory system to solve the computational overhead of SPARQL. In the Semantic Web, RDF ontology data is produced at large scale, and the main challenge for the Semantic Web is to query and manipulate such a large ontology with high throughput. Most existing studies on SPARQL have focused on deploying the Hadoop MapReduce framework, and although approaches based on Hadoop MapReduce have shown promising results, they achieve a low level of throughput due to the underlying distributed file processing. Therefore, in order to speed up query processing, we suggest query-processing methods based on memory caching in a distributed memory system. Our approach is also integrated with a clause-unification method that propagates bindings between clauses by exploiting Spark's join, map, and filter methods along with caching. In our experiments, we achieved a high level of performance relative to other approaches. In particular, our performance was nearly similar to that of Sempala, which has been considered the fastest query processing system.
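The clause-unification idea — matching one triple pattern, then joining a second pattern on the shared variable — can be shown with a minimal in-memory sketch. Plain Python comprehensions stand in for Spark's filter/map/join operators here; the toy triples and the single-binding join are simplifying assumptions.

```python
# Sketch of SPARQL basic-graph-pattern matching: evaluate
#   ?x knows ?y . ?y knows ?z
# by filtering triples per pattern and joining on the shared variable ?y.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "type", "Person"),
]

knows = [(s, o) for s, p, o in triples if p == "knows"]    # filter step
left = {y: x for x, y in knows}                            # keyed for join on ?y
result = [(left[y], y, z) for y, z in knows if y in left]  # join step
```

In the distributed setting, `knows` would be a cached RDD keyed by the join variable, so repeated clauses reuse the in-memory partition instead of re-reading HDFS files.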
An Elementary-Function-Based Refinement Method for Use Cases to Improve Reliability of Use Case Points
Heo, Ryoung; Seo, Young-Duk; Baik, Doo-Kwon
Journal of KIISE, volume 42, issue 9, 2015, Pages 1117~1123
DOI : 10.5626/JOK.2015.42.9.1117
The Use Case Points method is a software estimation method based on user requirements. When requirement analysts elicit user requirements, they obtain different use cases because different levels of detail are possible for a use case, and this affects the Use Case Points. In this paper, we suggest a method to refine the level of detail of a use case by using the concept of an elementary function. This refinement method achieves the desired reliability for the Use Case Points because it produces less deviation in the Use Case Points across different requirement analysts than other methods that are based on the step, transaction, or narrative of the use case.
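For context, the standard Use Case Points computation (Karner's formula, not this paper's refinement) weights the counted use cases and actors, then scales by technical and environmental complexity factors. The deviation the paper targets enters through the counts fed into this formula.

```python
def use_case_points(uucw, uaw, tfactor, efactor):
    """Standard Karner Use Case Points.
    uucw: unadjusted use case weight (sum over classified use cases)
    uaw:  unadjusted actor weight
    tfactor/efactor: weighted sums of the technical and environmental factors."""
    tcf = 0.6 + 0.01 * tfactor   # technical complexity factor
    ecf = 1.4 - 0.03 * efactor   # environmental complexity factor
    return (uucw + uaw) * tcf * ecf
```

Because `uucw` depends on how finely use cases are decomposed, two analysts describing the same system at different levels of detail produce different UCP values, which is the reliability problem the elementary-function refinement addresses.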
Effect of Application of Ensemble Method on Machine Learning with Insufficient Training Set in Developing Automated English Essay Scoring System
Lee, Gyoung Ho; Lee, Kong Joo
Journal of KIISE, volume 42, issue 9, 2015, Pages 1124~1132
DOI : 10.5626/JOK.2015.42.9.1124
In order to train a supervised machine learning algorithm, non-biased labels and a sufficient amount of training data are necessary. However, it is difficult to collect both when developing an automatic English essay scoring system. In addition, an English writing assessment is carried out through a multi-faceted evaluation of the overall level of the answer, which makes it difficult to choose an appropriate machine learning algorithm for the task. In this paper, we show that these problems can be alleviated through ensemble learning. The results of the experiment indicate that the ensemble technique exhibited better overall performance than the other algorithms.
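The simplest form of the ensemble idea — combining several weak scorers so no single model's bias dominates — can be sketched with majority voting. This is a generic illustration; the paper's actual combination scheme is not specified in the abstract.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-model label lists (one list per model,
    aligned by essay). Returns the majority label for each essay."""
    results = []
    for labels in zip(*predictions):             # labels of one essay, all models
        results.append(Counter(labels).most_common(1)[0][0])
    return results
```

With small, possibly biased training sets, individual models err in different directions; voting (or averaging, for numeric scores) tends to cancel those errors out.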
Running a SCRUM project within a Document Driven Process: An Experimental Case Study Report
Sawyer, Jonathan; Lee, Seok-Won
Journal of KIISE, volume 42, issue 9, 2015, Pages 1133~1146
DOI : 10.5626/JOK.2015.42.9.1133
This paper examines how a Computer Engineering Graduate student team ran their Advanced Software Engineering Capstone project using SCRUM. The environment provided contextual challenges in terms of the on-site customer and upfront requirements document, not uncommon in a document driven single-step methodology. The paper details the methodology and practices used to run the project, and reflects on some of the challenges faced by the members of a typical software team when transitioning to a SCRUM process. The paper concludes by evaluating the success of the techniques and practices compared to the Agile Manifesto and Henrik Kniberg's Scrum checklist. The project was undertaken at South Korea's Ajou University.
A Design of Metadata Registry Database based on Object-Relational Transformation Methodology
Cha, Sooyoung; Lee, Sukhoon; Jeong, Dongwon; Baik, Doo-Kwon
Journal of KIISE, volume 42, issue 9, 2015, Pages 1147~1161
DOI : 10.5626/JOK.2015.42.9.1147
The ISO/IEC 11179 Metadata Registry (MDR) is an international standard that was developed to register and share metadata. ISO/IEC 11179 represents an MDR as a metamodel, which is an object model. However, it is difficult to develop an MDR based on ISO/IEC 11179 because the standard has no clear criteria for transforming the metamodel into a database. In this paper, we suggest a design for an MDR data model based on object-relational transformation methodology (ORTM) for MDR implementation. To this end, we classify the transformation methods of ORTM according to their corresponding relationships. After classification, we propose modeling rules by defining the standard use of each transformation. This paper builds relational database tables as an implementation result of the MDR data model. Through experiments and evaluation, we verify the proposed modeling rules and evaluate the suitability of the created table structures. As a result, the table structures produced by the proposed method preserve the classes and relationships of the standard metamodel well.
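The simplest object-relational transformation rule — one class to one table, one attribute to one column, plus a surrogate key — can be sketched as below. The function, class name, and column types are hypothetical; the paper's actual rules also cover inheritance and association relationships, which this sketch omits.

```python
def class_to_table(name, attrs):
    """One-to-one class-to-table transformation: a metamodel class becomes
    a table, each attribute a column, with a generated surrogate key.
    attrs: list of (attribute_name, sql_type) pairs."""
    cols = ", ".join(f"{a} {t}" for a, t in attrs)
    return f"CREATE TABLE {name} (id INTEGER PRIMARY KEY, {cols});"
```

Rules for the other ORTM correspondences (e.g. mapping a class hierarchy to a single table versus one table per class) are where the standard leaves choices open and where the paper's modeling rules apply.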
Network Adaptive Congestion Control Scheme to Improve Bandwidth Occupancy and RTT Fairness in HBDP Networks
Oh, Junyeol; Chung, Kwangsue
Journal of KIISE, volume 42, issue 9, 2015, Pages 1162~1174
DOI : 10.5626/JOK.2015.42.9.1162
Today's networks exhibit HBDP (High Bandwidth Delay Product) characteristics. The legacy TCP slowly increases the size of the congestion window and drastically decreases it upon packet loss, which makes it unsuitable for HBDP networks. TCP mechanisms for solving the problems of the legacy TCP can be categorized into loss-based TCP and delay-based TCP. Most TCP mechanisms use the standard slow start phase, which leads to heavy packet loss caused by overshoot. In congestion avoidance, loss-based TCP wastes bandwidth and suffers from RTT (Round Trip Time) unfairness, while delay-based TCP increases its rate slowly and occupies little of the bandwidth. In this paper, we propose a new scheme that mitigates overshoot, acquires bandwidth more quickly, and improves bandwidth occupancy and RTT fairness. By monitoring the buffer condition at the bottleneck link, the proposed scheme performs congestion control and addresses the problems of both slow start and congestion avoidance. Through performance evaluation, we show that our proposed scheme outperforms previous TCP mechanisms in HBDP networks.
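The buffer-monitoring idea — grow the window aggressively while the bottleneck queue is empty and back off before loss as queuing delay builds — can be sketched as a toy update rule. All constants and the update shape below are illustrative assumptions, not the paper's algorithm.

```python
def adjust_cwnd(cwnd, queue_delay, threshold):
    """Toy buffer-aware congestion window update.
    queue_delay: estimated queuing delay at the bottleneck (e.g. RTT minus
    base RTT); threshold: delay level treated as a full buffer."""
    if queue_delay < threshold:
        # Queue still has room: grow faster the emptier the buffer is,
        # which avoids both slow-start overshoot and slow delay-based growth.
        return cwnd + max(1, int(cwnd * (1 - queue_delay / threshold)))
    return max(1, cwnd // 2)      # buffer near full: back off before heavy loss
```

Because the growth rate is driven by buffer state rather than by RTT-clocked increments alone, flows with different RTTs converge toward similar shares, which is the RTT-fairness motivation.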
Data Block based User Authentication for Outsourced Data
Hahn, Changhee; Kown, Hyunsoo; Kim, Daeyeong; Hur, Junbeom
Journal of KIISE, volume 42, issue 9, 2015, Pages 1175~1184
DOI : 10.5626/JOK.2015.42.9.1175
Recently, there has been an explosive increase in the volume of available multimedia data as a result of the development of multimedia technologies. More and more data is becoming available on a variety of web sites, and it has become increasingly cost-prohibitive for a single data server to store and process multimedia files locally. Therefore, many service providers have outsourced data to cloud storage to reduce costs. Such behavior raises one serious concern: how can data users be authenticated in a secure and efficient way? The most widely used password-based authentication methods suffer from numerous disadvantages in terms of security. Multi-factor authentication protocols based on a variety of communication channels, such as SMS, biometrics, or hardware tokens, may improve security but inevitably reduce usability. To this end, we present a data block-based authentication scheme that is secure and preserves usability, requiring users to do nothing more than enter a password. In addition, the proposed scheme can be effectively used to revoke user rights. To the best of our knowledge, our scheme is the first data block-based authentication scheme for outsourced data that is proven to be secure without degradation in usability. An experiment was conducted using the Amazon EC2 cloud service, and the results show that the proposed scheme guarantees nearly constant time for user authentication.
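One common shape for data block-based authentication is a challenge-response over the outsourced blocks: the verifier picks a random block and nonce, and the prover answers with a hash over both, demonstrating possession without transmitting the block. The sketch below illustrates that generic pattern; it is an assumption for illustration, not the paper's protocol.

```python
import hashlib
import hmac
import os

def make_challenge(num_blocks):
    """Verifier side: pick a random block index and a fresh nonce."""
    return {'index': int.from_bytes(os.urandom(4), 'big') % num_blocks,
            'nonce': os.urandom(16)}

def prove(blocks, challenge):
    """Prover side: hash the nonce with the challenged block's contents."""
    return hashlib.sha256(challenge['nonce'] + blocks[challenge['index']]).digest()

def verify(blocks, challenge, proof):
    """Verifier side: recompute and compare in constant time."""
    expected = hashlib.sha256(challenge['nonce'] + blocks[challenge['index']]).digest()
    return hmac.compare_digest(expected, proof)
```

The fresh nonce prevents replaying an old proof, and because each round touches only one block, verification cost stays roughly constant regardless of total data size, matching the constant-time behavior reported in the abstract.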