REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Journal of Internet Computing and Services
Journal Basic Information
Korean Society for Internet Information
Volume & Issues
Volume 15, Issue 6 - Dec 2014
Volume 15, Issue 5 - Oct 2014
Volume 15, Issue 4 - Aug 2014
Volume 15, Issue 3 - Jun 2014
Volume 15, Issue 2 - Apr 2014
Volume 15, Issue 1 - Feb 2014
Optimizing Performance and Energy Efficiency in Cloud Data Centers Through SLA-Aware Consolidation of Virtualized Resources
Elijorde, Frank I. ; Lee, Jaewan ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 1~10
DOI : 10.7472/jksii.2014.15.3.01
The cloud computing paradigm introduced pay-per-use models in which IT services can be created and scaled on demand. However, service providers remain constrained by their physical infrastructures. To maintain the required QoS and uphold the SLA, virtualized resources must be consolidated efficiently so that system throughput is maximized while energy consumption is kept to a minimum. Using an artificial neural network (ANN), we propose a predictive SLA-aware approach for consolidating virtualized resources in a cloud environment. To maintain QoS and establish an optimal trade-off between performance and energy efficiency, the server's utilization threshold adapts dynamically to the physical machine's resource consumption. Furthermore, resource-intensive VMs are prevented from being underprovisioned by assigning them to hosts that are both capable and reputable. To verify the performance of our proposed approach, we compare it with non-optimized conventional approaches as well as with other previously proposed techniques in a heterogeneous cloud environment setup.
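The abstract's dynamically adapting utilization threshold can be sketched roughly as follows. The adaptation rule, the base threshold, and the volatility margin below are illustrative assumptions, not the authors' ANN-based formulation:

```python
# Sketch: lower a host's overload threshold when its recent CPU
# utilization is volatile, so bursty hosts keep more headroom during
# VM consolidation. Parameters `base` and `margin` are invented.

def adaptive_threshold(utilization_history, base=0.8, margin=1.5):
    """Return an overload threshold in [0, 1] adapted to load volatility."""
    if len(utilization_history) < 2:
        return base
    n = len(utilization_history)
    mean = sum(utilization_history) / n
    std = (sum((u - mean) ** 2 for u in utilization_history) / n) ** 0.5
    # Unstable hosts get a lower threshold (triggering migration earlier).
    return max(0.0, min(1.0, base - margin * std))

stable = adaptive_threshold([0.70, 0.71, 0.69, 0.70])
bursty = adaptive_threshold([0.30, 0.95, 0.20, 0.90])
assert bursty < stable
```

A consolidation manager would compare each host's current utilization against this threshold to decide whether VMs should be migrated away.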
NUI/NUX framework based on intuitive hand motion
Lee, Gwanghyung ; Shin, Dongkyoo ; Shin, Dongil ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 11~19
DOI : 10.7472/jksii.2014.15.3.11
The natural user interface/experience (NUI/NUX) provides a natural motion interface that requires no devices or tools such as mice, keyboards, pens, or markers. Until now, typical motion recognition methods have used markers, receiving the coordinates of each marker as relative input values and storing them in a database. However, recognizing motion accurately requires more markers, and attaching them and processing the resulting data takes considerable time. Moreover, existing NUI/NUX frameworks have been developed without the most important quality, intuitiveness, so usability problems arise and users are forced to learn the usage of many different frameworks. To address these problems, we implemented a marker-free interface that anyone can operate. We also designed a multi-modal NUI/NUX framework that controls voice, body motion, and facial expression simultaneously, and we propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor. The implementation lets users operate this "hand mouse" easily and intuitively.
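The core of a "hand mouse" is mapping a hand position detected in the camera frame onto monitor coordinates. A minimal sketch of such a mapping, assuming a simple normalized linear projection (the paper's actual gesture-recognition pipeline is more involved):

```python
def hand_to_cursor(hand_x, hand_y, frame_w, frame_h,
                   screen_w=1920, screen_h=1080):
    """Map a hand position in a camera frame to monitor pixel coordinates.

    Normalizes the frame position to [0, 1] and scales it to the screen;
    positions outside the frame are clamped to its edges.
    """
    nx = min(max(hand_x / frame_w, 0.0), 1.0)
    ny = min(max(hand_y / frame_h, 0.0), 1.0)
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))

# A hand at the center of a 640x480 camera frame lands mid-screen.
print(hand_to_cursor(320, 240, 640, 480))
```

A real implementation would also smooth the trajectory over successive frames to avoid cursor jitter.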
Implementation of Service Model to Exchange of Biosignal Information based on HL7 Fast Health Interoperability Resources for the hypertensive management
Cho, Hune ; Won, Ju Ok ; Hong, Hae Sook ; Kim, Hwa Sun ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 21~30
DOI : 10.7472/jksii.2014.15.3.21
Hypertension is one of the major causes of death in the world, as it is related to cardiovascular and cerebrovascular disease, so continuous blood pressure management is needed. This study selected Health Level 7 Fast Health Interoperability Resources (HL7 FHIR) as a bio-signal data exchange service model that can provide constant blood pressure management in the rapidly growing mobile healthcare environment. The developed HL7 FHIR framework communicates via the IEEE 11073-10407 Personal Health Device (PHD) protocol over the Bluetooth Health Device Profile (HDP) between the manager (a smartphone) and the agent (a blood pressure monitor) and acquires blood pressure readings. In our tests, the framework successfully performed its tasks, including monitoring hypertensive patients' blood pressure, managing measured records, generating documents, and transmitting measured information. Because measured information can be transmitted over the TCP/IP protocol in a real clinical environment, continued research is needed to bring the approach into wider use in mobile healthcare.
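A blood-pressure reading exchanged over FHIR is typically encoded as an Observation resource. A simplified sketch, using the standard LOINC codes for the blood pressure panel (85354-9) and its systolic/diastolic components (8480-6, 8462-4); the exact resource profile the authors used is not stated in the abstract:

```python
import json

def bp_observation(patient_id, systolic, diastolic):
    """Build a minimal HL7 FHIR Observation for one blood-pressure reading."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "85354-9",
                             "display": "Blood pressure panel"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "component": [
            {"code": {"coding": [{"system": "http://loinc.org",
                                  "code": "8480-6"}]},   # systolic
             "valueQuantity": {"value": systolic, "unit": "mmHg"}},
            {"code": {"coding": [{"system": "http://loinc.org",
                                  "code": "8462-4"}]},   # diastolic
             "valueQuantity": {"value": diastolic, "unit": "mmHg"}},
        ],
    }

obs = bp_observation("example", 135, 88)
print(json.dumps(obs, indent=2))
```

In the described architecture, the smartphone manager would build such a resource from the IEEE 11073 measurement and POST it to a FHIR server over TCP/IP.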
Malicious Trojan Horse Application Discrimination Mechanism using Realtime Event Similarity on Android Mobile Devices
Ham, You Joung ; Lee, Hyung-Woo ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 31~43
DOI : 10.7472/jksii.2014.15.3.31
A large number of Android applications have recently been developed and deployed through the official Android market and third-party markets as the number of Android-based smart-work device users grows, and security vulnerabilities have been discovered in malicious applications distributed through these markets. Most such malicious applications contain Trojan-horse-style code that leaks the user's personal and financial information from the mobile device to an external server without the user's knowledge. Therefore, to minimize the damage caused by the constantly increasing number of malicious applications, a proactive detection mechanism is required. In this paper, we analyze the pros and cons of existing techniques for detecting malicious applications, and we propose and evaluate a discrimination mechanism based on Jaccard similarity over events collected in real time during execution on Android mobile devices.
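The Jaccard similarity at the heart of the proposed mechanism compares the set of events observed at runtime against a known malicious profile. A minimal sketch; the event names and the decision threshold are invented for illustration:

```python
def jaccard(a, b):
    """Jaccard similarity of two event sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

known_trojan_events = {"READ_CONTACTS", "SEND_SMS", "OPEN_SOCKET", "READ_SMS"}
runtime_events      = {"READ_CONTACTS", "SEND_SMS", "OPEN_SOCKET", "DRAW_UI"}

score = jaccard(known_trojan_events, runtime_events)
print(f"similarity = {score:.2f}")  # 3 shared events out of 5 distinct
if score >= 0.6:  # hypothetical decision threshold
    print("flag application as suspicious")
```

In practice the runtime event set would be collected from the device (e.g., via system call or API hooking) and compared against many known-malware profiles.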
An Analysis of Big Video Data with Cloud Computing in Ubiquitous City
Lee, Hak Geon ; Yun, Chang Ho ; Park, Jong Won ; Lee, Yong Woo ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 45~52
DOI : 10.7472/jksii.2014.15.3.45
The Ubiquitous City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT) and includes many networked video cameras, which, together with sensors, supply the main input data for many U-City services. These cameras generate an enormous amount of video information, truly big data, all the time, and the U-City usually has to process it in real time, which is far from easy. The accumulated video data must also frequently be analyzed to detect an event or find a person, which demands considerable computational power and usually takes a long time. Research is under way to reduce the processing time of such big video data, and cloud computing is a good candidate solution. Among the many applicable cloud computing methodologies, MapReduce is interesting and attractive: it has many advantages and is gaining popularity in many areas. As video cameras evolve and their resolution improves sharply, the data produced by networked cameras grows exponentially, so handling video from high-quality cameras means coping with genuinely big data. Cloud computing has made large-scale video surveillance practical, and such systems are now spreading in U-Cities. Video data are unstructured, however, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable.
The system consists of the video manager, the video monitors, the storage for video images, the storage client, and the streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives video data from the networked cameras, delivers it to the storage client, and manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component, stores it, and helps other components access the storage. The video monitor streams the video data smoothly and manages the protocols: its video translator sub-component lets users manage the resolution, codec, and frame rate of the video, and its protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We also suggest our own methodology for analyzing video images with MapReduce, presenting the video-analysis workflow and explaining it in detail. In the performance evaluation, conducted on our cluster with compressed video data, the H.264 codec, and HDFS as the video storage, the proposed system worked well. We measured the processing time as a function of the number of frames per mapper, and by tracing the optimal input split size and the processing time as a function of the number of nodes, we found that the system's performance scales linearly.
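The frame-per-mapper workflow described above can be illustrated with a toy MapReduce job: the map phase emits a key-value pair for each frame a detector flags, and the reduce phase sums the counts per label. On Hadoop the two phases would run as distributed tasks over HDFS splits; the brightness "detector" here is just a stand-in:

```python
from collections import defaultdict

def map_phase(frames, detector):
    """Map: emit (label, 1) for every frame the detector flags."""
    for frame_id, pixels in frames:
        label = detector(pixels)
        if label is not None:
            yield label, 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each label."""
    counts = defaultdict(int)
    for label, n in pairs:
        counts[label] += n
    return dict(counts)

# Stand-in detector: flag "bright" frames by mean pixel value.
detector = lambda px: "event" if sum(px) / len(px) > 128 else None
frames = [(0, [10, 20]), (1, [200, 220]), (2, [150, 180])]

counts = reduce_phase(map_phase(frames, detector))
print(counts)
```

In the real system, a mapper would decode a batch of H.264 frames from its HDFS split and run the actual analysis (event or figure detection) in place of the toy detector.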
Hybrid Spray and Wait Routing Protocol in DTN
Hyun, Sung-Su ; Jeong, Hyeon-Jin ; Choi, Seoung-Sik ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 53~62
DOI : 10.7472/jksii.2014.15.3.53
DTN (Delay-Tolerant Networking) is a next-generation network used where an end-to-end connection is not guaranteed, such as communication between planets and satellites, in the presence of frequent disconnections, or where adequate network infrastructure is lacking. In this paper, we propose a hybrid Spray-and-Wait algorithm that predicts node contact times by monitoring periodic contact information between nodes. Based on this prediction, the algorithm selects a node and copies the message to it, following the spray-and-wait scheme. To verify the hybrid Spray-and-Wait algorithm, we used the ONE (Opportunistic Network Environment) simulator from the University of Helsinki. Comparing the delivery probability of the proposed algorithm with that of Binary Spray and Wait, we show that it has 10% less overhead than Binary Spray-and-Wait routing and that it reduces unnecessary message copying.
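The copy-splitting decision of Binary Spray and Wait, with the contact-time bias the hybrid scheme adds, can be sketched as follows. The exact prediction mechanism is simplified here to a boolean flag, which is an assumption, not the paper's model:

```python
def forward_copies(copies, contact_predicted_sooner):
    """Decide how many message copies to hand to an encountered node.

    Binary Spray and Wait: give away half the remaining copies; with a
    single copy left, enter the wait phase (deliver only to the
    destination). The hybrid element: only spray to nodes whose predicted
    contact time makes them promising relays.
    """
    if copies <= 1:
        return 0, copies              # wait phase: keep the last copy
    if not contact_predicted_sooner:
        return 0, copies              # skip unpromising relays
    given = copies // 2
    return given, copies - given      # binary split

print(forward_copies(8, True))    # spray half
print(forward_copies(8, False))   # hold all copies
print(forward_copies(1, True))    # wait phase
```

Filtering relays by predicted contact time is what reduces the unnecessary copying the abstract mentions.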
Channel assignment for 802.11p-based multi-radio multi-channel networks considering beacon message dissemination using Nash bargaining solution
Kwon, Yong-Ho ; Rhee, Byung-Ho ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 63~69
DOI : 10.7472/jksii.2014.15.3.63
For safety messages in the IEEE 802.11p vehicular network environment (WAVE), strict periodic beacon broadcasting is required to advertise vehicle status and assist the driver. The WAVE standards employ multiple radios and multiple channels to provide public road-safety services and to improve driving comfort and efficiency. Although the WAVE standards provide for multiple channels and radios, they consider neither the WAVE multi-radio environment nor its effect on beacon broadcasting: under the standard, most beacon broadcasting is delivered over only one physical device and one control channel. Moreover, conflict-free assignment of the fewest channels to a given set of radio nodes without collisions is NP-hard, even when the network topology is known and all nodes have the same transmission radius. Based on the latest IEEE 802.11p and IEEE 1609.4 standards, this paper proposes an interference-aware channel assignment algorithm with a Nash bargaining solution that minimizes interference and increases throughput in a wireless mesh network, designed for the multi-radio multi-channel structure of WAVE. The proposed algorithm is validated against numerical simulation results, which show that it improves on the Tabu and random channel allocation algorithms in a setting of 8 channels with 3 radios.
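The Nash bargaining solution picks the outcome maximizing the product of the players' utility gains over their disagreement point. A toy two-node channel-assignment instance, with an invented interference model and invented utility numbers, just to show the selection rule:

```python
from itertools import product

channels = [1, 2, 3]

def utility(ch_a, ch_b):
    """Toy model: co-channel interference halves each node's throughput."""
    base = {1: 10.0, 2: 8.0, 3: 6.0}
    penalty = 0.5 if ch_a == ch_b else 1.0
    return base[ch_a] * penalty, base[ch_b] * penalty

d_a, d_b = 2.0, 2.0  # disagreement (no-cooperation) utilities

# Nash bargaining: among assignments beating the disagreement point,
# maximize the product of the utility gains.
best = max(
    (pair for pair in product(channels, repeat=2)
     if utility(*pair)[0] > d_a and utility(*pair)[1] > d_b),
    key=lambda pair: (utility(*pair)[0] - d_a) * (utility(*pair)[1] - d_b),
)
print(best)
```

Note how the product criterion steers both nodes onto different channels: any co-channel pair is dominated by an interference-free one.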
Complexity Reduction of Blind Algorithms based on Cross-Information Potential and Delta Functions
Kim, Namyong ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 71~77
DOI : 10.7472/jksii.2014.15.3.71
The equalization algorithm based on the cross-information potential concept and Dirac delta functions (CIPD) has outstanding ISI-elimination performance even in impulsive-noise environments. Its main drawback is the heavy computational burden of the block-processing method used in its weight-update process. In this paper, to reduce the computational complexity, we propose a new gradient calculation that replaces the double summation in the CIPD weight update with a single summation. In simulations, the proposed method produces the same gradient learning curves as the original CIPD algorithm. Even under strong impulsive noise, it yields the same results while its computational complexity, unlike that of the conventional algorithm, no longer grows with the amount of block data.
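The flavor of this reduction can be shown with a separable double sum: when the summand factorizes, sum_i sum_j x_i * y_j equals (sum_i x_i) * (sum_j y_j), turning O(N^2) operations into O(N). The actual CIPD gradient involves kernel terms over block data, so this is only a generic illustration of the technique, not the paper's derivation:

```python
# Replace a double summation with a single one for a separable summand.
x = [0.5, -1.2, 2.0, 0.3]
y = [1.0, 0.4, -0.7, 2.2]

double = sum(xi * yj for xi in x for yj in y)   # O(N^2) operations
single = sum(x) * sum(y)                        # O(N) operations

assert abs(double - single) < 1e-12
print(double)
```

For the CIPD weight update, the analogous step is precomputing the inner summation once per block instead of re-evaluating it for every outer index.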
Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives
Ryu, Hong Ryeol ; Hong, Moses ; Kwon, Taekyoung ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 79~90
DOI : 10.7472/jksii.2014.15.3.79
User authentication based on an ID and password (PW) is widely used. As the Internet has become a growing part of people's lives, the number of times IDs and PWs must be entered has increased across a variety of services. People have performed the authentication procedure so often that they now enter their ID/PW almost unconsciously. This is the adaptive unconscious: a set of mental processes that take in information and produce judgements and behaviors without conscious awareness, within a second. Most people sign up for many websites with only a few IDs/PWs because they rely on memory to manage them. Human memory decays over time, and items in memory tend to interfere with one another, so people may enter an invalid ID/PW. These characteristics of ID/PW authentication lead to human vulnerabilities: people reuse a few PWs across many websites, manage IDs/PWs from memory, and enter them unconsciously. Exploiting such human-factor vulnerabilities, information-leakage attacks such as phishing and pharming have increased exponentially. Past information-leakage attacks exploited vulnerabilities in hardware, operating systems, and software, but most current attacks exploit the human factor; these are called social-engineering attacks, and malicious techniques such as phishing and pharming are now among the biggest security problems. Phishing attempts to obtain valuable information such as IDs/PWs, while pharming steals personal data by redirecting a website's traffic to a fraudulent copy of a legitimate website.
The screens of the fraudulent copies used in phishing and pharming attacks are almost identical to those of legitimate websites, and a pharming site can even present a deceptive URL. Without the support of prevention and detection techniques such as anti-virus software and reputation systems, it is therefore difficult for users to determine intuitively whether a site is a phishing or pharming site or a legitimate one. Previous research on phishing and pharming has focused mainly on technical solutions. In this paper, we focus on how users behave when confronted with phishing and pharming attacks without knowing it. We conducted an attack experiment to find out how many IDs/PWs are leaked by pharming and phishing attacks. We first configured experimental settings matching the conditions of phishing and pharming attacks and built a phishing site for the experiment. We then recruited 64 voluntary participants, asked them to log in to our experimental site, and afterwards conducted a questionnaire survey with each participant. Through the attack experiment and the survey, we observed whether each participant's passwords were leaked when logging in to the experimental phishing site, and how many of their distinct passwords were leaked. We found that most participants logged in to the site unconsciously, and that managing IDs/PWs by memory caused multiple passwords to be leaked. Users should actively use reputation systems, and online service providers should support prevention techniques that let users determine intuitively whether a site is a phishing site.
A Framework for Making Decision on Optimal Security Investment to the Proactive and Reactive Security Solutions management
Choi, Yoon-Ho ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 91~100
DOI : 10.7472/jksii.2014.15.3.91
While organizations' IT security investment has increased, the monetary loss they suffer from IT security breaches has not decreased as much as expected. Surveys have also found that poor use of the security budget thwarts improvement of an organization's security level. In this paper, to resolve this poor use of security budgets, we propose a comprehensive economic model for determining the optimal amount of investment in security solutions, covering both proactive security solutions (PSSs) and reactive security solutions (RSSs). Using the proposed analytical model under different security-solution parameters, we derive the optimal condition for maximizing the expected net benefit of an organization's IT security investment. We also verify the common belief that the optimal level of investment in security solutions is an increasing function of vulnerability. Through simulations, we find the optimal level of IT security investment given parameters characterizing different security solutions.
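The kind of optimization the abstract describes can be sketched as maximizing the expected net benefit, the reduction in expected breach loss minus the investment itself. The breach-probability function s(z) = v / (a*z + 1) and all parameter values below are a standard Gordon-Loeb-style form chosen for illustration, not the paper's model:

```python
def expected_net_benefit(z, v=0.6, loss=1_000_000, a=0.0001):
    """Expected net benefit of investing z in security.

    v:    baseline breach probability (vulnerability) with no investment
    loss: monetary loss if a breach occurs
    a:    productivity of the security investment (assumed form)
    """
    s = v / (a * z + 1)          # breach probability after investing z
    return (v - s) * loss - z    # expected loss avoided, minus the cost

# Grid search for the optimal investment level z*.
best_z = max(range(0, 200_001, 1000), key=expected_net_benefit)
print(best_z, round(expected_net_benefit(best_z)))
```

Note that the optimum is interior: past z*, each additional unit of investment buys less risk reduction than it costs, which is exactly why "more security budget" alone does not guarantee a better outcome.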
Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window
Pyun, Gwangbum ; Yun, Unil ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 101~107
DOI : 10.7472/jksii.2014.15.3.101
With the development of online services, databases have shifted from static structures to dynamic stream structures. Data mining techniques have long served as decision-making tools, for example in establishing marketing strategies and in DNA analysis, but emerging areas such as sensor networks, robotics, and artificial intelligence require the ability to analyze real-time data more quickly. Landmark-window-based frequent pattern mining, one of the stream mining approaches, performs mining over parts of the database or over individual transactions instead of over all the data. In this paper, we analyze and evaluate two well-known landmark-window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, performs a mining operation whenever a new transaction occurs; because it extracts frequent patterns as soon as a transaction is entered, its results always reflect the latest real-time information, which is why such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for the performance analysis, we first consider each algorithm's total runtime and its average processing time per transaction. To compare the efficiency of their storage structures, we also evaluate their maximum memory usage. Lastly, we show how stably the two algorithms mine databases whose number of items gradually increases.
In the evaluation of mining time and transaction processing, hMiner is faster than Lossy counting. Because hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in maximum memory usage: hMiner must keep the complete information for each candidate frequent pattern in its hash buckets, while Lossy counting's lattice lets patterns share items that appear in multiple patterns, reducing the stored information and making its memory usage more efficient. In the scalability evaluation, however, hMiner is the more efficient, for the following reasons: as the number of items increases, fewer items are shared, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect deteriorates. From the experimental results, we conclude that landmark-window-based frequent pattern mining algorithms are suitable for real-time systems even though they require a significant amount of memory; their data structures should be made more efficient so that they can also be used in resource-constrained environments such as wireless sensor networks (WSNs).
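To make the comparison concrete, here is a minimal Lossy counting sketch over a stream of single items (the paper's algorithms mine itemsets; single items keep the example short). The error bound epsilon sets the bucket width and drives the pruning that keeps memory bounded:

```python
def lossy_counting(stream, epsilon=0.2):
    """Approximate item counts over a stream within error epsilon * N."""
    bucket_width = int(1 / epsilon)
    counts, deltas = {}, {}          # delta = max undercount for the item
    for n, item in enumerate(stream, start=1):
        bucket = (n - 1) // bucket_width + 1
        if item in counts:
            counts[item] += 1
        else:
            counts[item], deltas[item] = 1, bucket - 1
        if n % bucket_width == 0:    # end of bucket: prune rare items
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it], deltas[it]
    return counts

result = lossy_counting(list("abababacdc"))
print(result)
```

Pruning is what keeps Lossy counting's memory footprint small: the rare item 'd' is dropped at a bucket boundary, while the frequent items survive. hMiner's hash-bucket storage avoids the lattice traversal at the cost of duplicating pattern information, which is the time/memory trade-off the evaluation above observed.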
Classification of Parkinson's Disease Using Defuzzification-Based Instance Selection
Lee, Sang-Hong ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 109~116
DOI : 10.7472/jksii.2014.15.3.109
This study proposes a new instance selection method using a neural network with weighted fuzzy membership functions (NEWFM) based on the Takagi-Sugeno (T-S) fuzzy model to improve classification performance. The proposed instance selection adopts the weighted-average defuzzification of the T-S fuzzy model together with an interval selection analogous to the confidence interval of a normal distribution used in statistics. To evaluate the proposed method, we compared classification performance in a case study with and without instance selection: the two settings achieved 77.33% and 78.19%, respectively. To show that this difference is meaningful, we applied a statistical method, the McNemar test. The test results showed that instance selection was superior to no instance selection at a significance level below 0.05.
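The McNemar test compares two classifiers on the same instances using only the discordant pairs (cases where exactly one classifier is correct). A minimal sketch with the usual continuity-corrected chi-squared statistic; the discordant counts b and c below are invented for illustration, since the abstract reports only the accuracies:

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-squared statistic.

    b: instances classifier A got right and classifier B got wrong
    c: instances classifier B got right and classifier A got wrong
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar_statistic(b=25, c=10)
# 3.841 is the chi-squared critical value (1 df) at significance 0.05.
print(round(stat, 3), "significant" if stat > 3.841 else "not significant")
```

A statistic above 3.841 corresponds to p < 0.05, i.e., the two classifiers' error patterns differ significantly, which is the criterion the study used.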
A Study on Introduction of Online Education to Provide Opportunities for Spreading University-level Program
Han, Oakyoung ; Chung, Mihyun ; Kim, Jaehyoun ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 117~124
DOI : 10.7472/jksii.2014.15.3.117
This paper studies the introduction of online education to spread opportunities for university-level programs, by analyzing the perceptions of high school students and teachers. A university-level program can be defined as the fulfillment of learning needs and the offer of excellent education for outstanding high school students who want to develop their potential. For the study, a survey was conducted among high school students and teachers. For the students, the efficiency of education was the most important factor in a university-level program, followed by its help with university entrance, the method of education, satisfaction, and the recommendation of others. The teachers, like the students, ranked the efficiency of education first, followed by satisfaction, help with university entrance, and the method of education. Analysis of the survey results suggests how university-level programs can be activated and spread. We conclude that introducing online education to university-level programs can guarantee the right to study and reduce the education gap, and this paper proposes such an online education model for that purpose.
A Study on Ontology Based Knowledge Representation Method with the Alzheimer Disease Related Articles
Lee, Jaeho ; Kim, Younhee ; Shin, Hyunkyung ; Song, Kibong ;
Journal of Internet Computing and Services, volume 15, issue 3, 2014, Pages 125~135
DOI : 10.7472/jksii.2014.15.3.125
In the medical field, building knowledge bases for the diagnosis and treatment of diseases has received a great deal of attention, and the most important part of building a knowledge base is representing the knowledge accurately. In this paper, we suggest a knowledge representation method using ontology techniques on datasets obtained from domestic papers on Alzheimer's disease, which has recently received much attention in the medical field. The suggested ontology for Alzheimer's disease defines classes for bibliographic information from the journals, such as 'author' and 'publisher', as well as for research subjects extracted from the 'title', 'abstract', 'keywords', and 'results'. It also includes various semantic relationships between classes through ontology properties. Because our ontology adopts a hierarchical tree structure for its classes and transitive properties, it can support inference. Therefore, semantic queries are possible in addition to simple keyword queries, enabling inference-based knowledge queries using the ontology query language SPARQL.
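The kind of semantic query the abstract describes can be illustrated with a toy in-memory triple store; the class and property names are invented, not taken from the authors' ontology. Over the real ontology, the equivalent SPARQL (e.g., run with rdflib) would be roughly `SELECT ?paper WHERE { ?paper :hasResearchSubject :Amyloid_beta . }`:

```python
# Toy triple store: (subject, predicate, object) statements about papers.
triples = [
    ("paper1", "hasAuthor", "Kim"),
    ("paper1", "hasResearchSubject", "Amyloid_beta"),
    ("paper2", "hasAuthor", "Lee"),
    ("paper2", "hasResearchSubject", "Tau_protein"),
]

def query(predicate, obj):
    """Return subjects matching a (?, predicate, obj) pattern."""
    return [s for s, p, o in triples if p == predicate and o == obj]

print(query("hasResearchSubject", "Amyloid_beta"))
```

Unlike a keyword search over raw text, this pattern match exploits the typed relationship between a paper and its research subject, and an ontology reasoner could additionally return papers whose subjects are subclasses of the queried one.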