REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Journal of Intelligence and Information Systems
Journal Basic Information
Journal DOI :
Korea Intelligent Information System Society
Editor in Chief :
Volume & Issues
Volume 19, Issue 4 - Dec 2013
Volume 19, Issue 3 - Sep 2013
Volume 19, Issue 2 - Jun 2013
Volume 19, Issue 1 - Mar 2013
Recommender System using Implicit Trust-enhanced Collaborative Filtering
Kim, Kyoung-Jae ; Kim, Youngtae ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 1~10
DOI : 10.13088/jiis.2013.19.4.001
Personalization aims to provide customized content to each user based on the user's personal preferences. In this sense, the core of personalization lies in recommendation technologies, which recommend appropriate content or products to each user according to his or her preferences. Recognizing the importance of recommender systems, prior studies have proposed various novel recommendation technologies. Among them, collaborative filtering (CF) has been most actively studied and applied in real-world applications. CF, however, often suffers from sparsity and scalability problems. Prior research has recognized these two problems and proposed many solutions, but many of those solutions require additional time and cost because they draw on information from sources beyond the existing user-item matrix. This study proposes a novel implicit rating approach for collaborative filtering that mitigates the sparsity problem and enhances the performance of recommender systems. Specifically, we propose methods for reducing sparsity by supplementing the user-item matrix through an implicit rating approach that measures the trust level among users from the existing user-item matrix alone. This study provides preliminary experimental results testing the usefulness of the proposed model.
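The abstract's core idea, deriving trust between users from the existing user-item matrix alone and using it to densify that matrix, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the trust measure (the share of co-rated items on which two users agree within a tolerance) and the fill rule are assumptions.

```python
import numpy as np

def implicit_trust(R, tol=1.0):
    """Estimate pairwise trust from a user-item rating matrix R (0 = unrated).
    Trust(u, v) = fraction of co-rated items on which v's rating falls
    within `tol` of u's rating. Illustrative definition only."""
    n_users = R.shape[0]
    T = np.zeros((n_users, n_users))
    for u in range(n_users):
        for v in range(n_users):
            if u == v:
                continue
            co = (R[u] > 0) & (R[v] > 0)          # items both users rated
            if co.sum() > 0:
                T[u, v] = np.mean(np.abs(R[u, co] - R[v, co]) <= tol)
    return T

def fill_matrix(R, T):
    """Densify R by predicting each missing rating as the trust-weighted
    mean of the other users' ratings for that item."""
    F = R.astype(float).copy()
    for u in range(R.shape[0]):
        for i in range(R.shape[1]):
            if R[u, i] == 0:
                raters = R[:, i] > 0              # users who rated item i
                w = T[u, raters]
                if w.sum() > 0:
                    F[u, i] = np.dot(w, R[raters, i]) / w.sum()
    return F
```

Supplementing the matrix this way lets standard CF run on a denser matrix without any data beyond the original ratings, which is the point the abstract makes.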
Correlation between Car Accident and Car Color for Intelligent Service
Shin, Seong-Yoon ; Lee, Sangwon ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 11~20
DOI : 10.13088/jiis.2013.19.4.011
In designing intelligent traffic systems, telecommunications, appearance, environment, auxiliary functions, safety, and so on must all be considered; consumers weigh the same properties when choosing a car. This paper shows that car color carries significant meaning for car safety, both in administering intelligent traffic services and in making car-purchasing decisions. We first studied the probability of a car accident occurring as a function of car color, which bears on car safety, and then examined the concepts of advancing and receding colors. An advancing color causes fewer accidents because it makes the car look closer than it actually is, whereas a receding color causes more accidents because it makes the car look farther away than it actually is. We then classified car colors into eight classes and ranked each class by the number of car accidents. We verified our findings through a telephone questionnaire administered to residents of Kunsan, Republic of Korea.
A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis
Park, Jun Hyung ; Kwahk, Kee-Young ; Han, Heejun ; Kim, Yunjeong ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 21~37
DOI : 10.13088/jiis.2013.19.4.021
Recently, with the advent of the knowledge-based society, more people have become interested in intellectual property. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has now become possible. Using patent information, analysis can be carried out at various levels, from the individual patent to the enterprise, industry, and country levels. Through patent information, we can identify technology status and analyze its impact on performance, and network analysis lets us trace the flow of knowledge, so we can not only identify changes in technology but also predict the direction of future research. Two important analyses in this field utilize patent citation information: citation indicator analysis, which uses citation frequency, and network analysis based on citation relationships. This study also analyzes whether company size affects patent citation relationships. Seventy-four S&P 500 companies that provide IT and communication services were selected for the study. To determine the patent citation relationships between the companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the citation relationships between companies were created. In addition, each company's total assets were collected as an index of company size. The distance between two companies is defined as the absolute value of the difference between their total assets, and the simple (signed) difference is taken to describe the hierarchy between them.
QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy between companies together with the sociomatrices of patent citations in 2009 and 2010. The QAP correlation analysis shows that the 2009 and 2010 company patent citation networks have the highest correlation. In addition, a positive correlation was found between the patent citation relationships and the distance between companies, because citation relationships increase when there is a size difference between companies. A negative correlation was found between the patent citation relationships and the hierarchy between companies, indicating that the patents of higher-tier companies are highly valued and exert influence on lower-tier companies. The MR-QAP analysis was carried out as follows. The sociomatrix generated from the 2010 patent citation relationships was used as the dependent variable, and the 2009 citation network and the distance and hierarchy networks between the companies were used as the independent variables, in order to identify the main factors influencing the 2010 patent citation relationships. The results show that all independent variables positively influence the 2010 citation relationships. In particular, the 2009 citation relationships have the most significant impact on the 2010 relationships, which indicates continuity in patent citation relationships. Together, the QAP correlation and MR-QAP analyses show that patent citation relationships between companies are affected by company size.
The most significant factor, however, is the patent citation relationships formed in the past. Maintaining patent citation relationships between companies matters because examining the relationships through which companies share intellectual property can be strategically important, and such relationships can also serve as an important aid in identifying partner companies for cooperation.
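The QAP correlation procedure described above can be sketched as a permutation test: correlate the off-diagonal cells of two sociomatrices, then compare the observed value against a null distribution built by permuting the rows and columns of one matrix simultaneously (so the network structure is preserved while node labels are shuffled). This is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def qap_correlation(A, B, n_perm=1000, seed=0):
    """QAP correlation between two n x n relational matrices: observed
    cell-wise Pearson correlation (diagonal excluded) versus a null built
    by permuting rows and columns of B together. Returns (r, p-value)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    off = ~np.eye(n, dtype=bool)                  # off-diagonal mask

    def corr(X, Y):
        return np.corrcoef(X[off], Y[off])[0, 1]

    obs = corr(A, B)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                    # shuffle node labels
        if corr(A, B[np.ix_(p, p)]) >= obs:
            count += 1
    return obs, count / n_perm
```

MR-QAP extends the same permutation logic to a multiple regression with several relational predictors (here: the 2009 citation network and the distance and hierarchy matrices).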
Context Sharing Framework Based on Time Dependent Metadata for Social News Service
Ga, Myung-Hyun ; Oh, Kyeong-Jin ; Hong, Myung-Duk ; Jo, Geun-Sik ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 39~53
DOI : 10.13088/jiis.2013.19.4.039
The emergence of internet technology and SNS has increased information flow and changed the way people communicate, from one-way to two-way communication. Users not only consume and share information but also create it and share it with friends across social network services. This change has made social media one of the most important communication tools, including Social TV, in which people watch a TV program while sharing information about it or its content with friends through social media. Social News is also gaining popularity as a form of participatory social media: it shapes user interest in societal issues through the internet and builds news credibility based on users' reputations. However, conventional news service platforms focus only on news recommendation. Recent developments in SNS have changed this landscape, allowing users to share and disseminate news, but conventional platforms provide no dedicated way to share news. Current social news services only allow users to access an entire news item; users cannot access the specific portion of the content related to their interests. For example, when a user is interested in only part of a news item and wants to share that part, it is hard to do so, and in the worst case recipients may understand the news in a different context. To solve this, a social news service must provide a way to supply additional information. Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search for and watch a portion of a video according to the metadata and share content with friends on social media. Yovisto segments or synchronizes a video whenever the presentation slides change to another page.
However, this method cannot be applied to news video, since news video is not accompanied by presentation slides; a segmentation method is therefore required to separate the news video and create time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and provide time-dependent metadata so that users can use context information to communicate with their friends. The transcript of the news is divided using the proposed story segmentation method. We provide a tag representing the entire content of the news and sub tags indicating each segmented news item, including its starting time. The time-dependent metadata helps users track the news information and allows them to leave a comment on each segment of the news. Users may also share the news, based on the time metadata, either as individual segments or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluate both the story segmentation accuracy and the tag generation. We measured the accuracy of story segmentation through semantic similarity and compared it to a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in story segmentation accuracy. Sub tag accuracy is especially important, since sub tags are the part of the framework that shares a specific news context with others. To extract more accurate sub tags, we created a stop word list of terms unrelated to the news content, such as the names of anchors or reporters, and applied it to the framework. We analyzed the accuracy of the tags and sub tags that represent the context of the news, and the analysis suggests that the proposed framework helps users share their opinions with context information on social media and social news services.
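The story segmentation step described above can be sketched as cutting a transcript wherever the semantic similarity between adjacent sentences drops. The version below is a deliberately simple stand-in: it uses bag-of-words cosine similarity rather than the paper's semantic-similarity measure, and the threshold is an assumption.

```python
from collections import Counter
import math

def segment_transcript(sentences, threshold=0.2):
    """Split a news transcript into stories by cutting wherever the lexical
    cosine similarity between adjacent sentences drops below `threshold`."""
    def cosine(a, b):
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        num = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
        den = (math.sqrt(sum(v * v for v in ca.values()))
               * math.sqrt(sum(v * v for v in cb.values())))
        return num / den if den else 0.0

    segments, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(prev, cur) < threshold:   # topic shift -> start new story
            segments.append(current)
            current = [cur]
        else:
            current.append(cur)
    segments.append(current)
    return segments
```

Each resulting segment, paired with its starting timestamp, is what a sub tag would point to in the framework described above.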
Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach
Yun, YoungSu ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 55~79
DOI : 10.13088/jiis.2013.19.4.055
In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the GA's initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid component, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC consists of collection centers, remanufacturing centers, redistribution centers, and secondary markets, of which only one collection center, one remanufacturing center, one redistribution center, and one secondary market may be opened in the network. Several assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, the fixed cost is 10.5.
The handling cost is the cost of treating the products returned from customers at each open center and secondary market at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA is compared with a conventional competing approach, the GA approach of Yun (2013), using various measures of performance. Unlike the HGA, this GA approach has no local search technique such as the IHCM. CPU time, optimal solution, and optimal setting are used as measures of performance. Two types of RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets, are presented for comparing the two approaches. The MIP models for the two RLNCC types were programmed in Visual Basic 6.0, and the computing environment was an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: 10,000 total generations, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. Twenty iterations were run to eliminate the randomness of the HGA and GA searches. Based on performance comparisons, network representations from the opening/closing decisions, and convergence processes on the two RLNCC types, the experimental results show that the HGA achieves significantly better optimal solutions than the GA, though the GA is slightly faster in CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on both RLNCC types, since the former combines a GA search process with a local search process as an additional search scheme, while the latter relies on the GA search alone.
As future work, much larger RLNCC instances will be tested to assess the robustness of our approach.
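The HGA structure described above, a bit-string GA whose best individual is refined by local search each generation, can be sketched as follows. The operators here are simplified stand-ins (greedy one-bit-flip hill climbing in place of the IHCM, truncation-style parent selection in place of the elitist strategy in enlarged sampling space), and the toy cost function used in testing opens exactly one of three collection centers with the fixed costs quoted above.

```python
import random

def hybrid_ga(cost, n_bits, pop_size=20, generations=200,
              cx_rate=0.5, mut_rate=0.1, seed=0):
    """Minimal hybrid GA sketch: binary-string GA with elitism, two-point
    crossover, random mutation, and a one-bit-flip hill climb applied to
    the best individual each generation. Minimizes `cost`."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def hill_climb(ind):
        best, improved = ind[:], True
        while improved:
            improved = False
            for i in range(n_bits):          # try each one-bit flip
                cand = best[:]
                cand[i] ^= 1
                if cost(cand) < cost(best):
                    best, improved = cand, True
        return best

    for _ in range(generations):
        pop.sort(key=cost)
        nxt = [hill_climb(pop[0])]           # elitism + local search
        while len(nxt) < pop_size:
            a, b = rnd.sample(pop[:pop_size // 2], 2)
            if rnd.random() < cx_rate:       # two-point crossover
                i, j = sorted(rnd.sample(range(n_bits), 2))
                child = a[:i] + b[i:j] + a[j:]
            else:
                child = a[:]
            for i in range(n_bits):          # random mutation
                if rnd.random() < mut_rate:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

The local search is what distinguishes the HGA from the plain GA baseline: it polishes the region the GA has converged to, which is why the abstract reports better solutions at a modest CPU-time cost.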
Story-based Information Retrieval
You, Eun-Soon ; Park, Seung-Bo ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 81~96
DOI : 10.13088/jiis.2013.19.4.081
Video information retrieval has become a very important issue because of the explosive increase in video data driven by Web content development. Content-based video analysis using visual features has been the main approach for video information retrieval and browsing. Content-based analysis techniques can represent video content by extracting various features from audio-visual data, such as frames, shots, colors, texture, or shape, and can measure similarity between videos. However, a movie, one of the typical types of video data, is organized by story as well as by audio-visual data. When content-based analysis of low-level audio-visual data alone is applied to movie information retrieval, a semantic gap arises between the information people recognize as significant and the information produced by content-based analysis. The reason for this gap is that a movie's story line is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to a movie's story line cannot be carried out by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents and track changes in meaning, in order to retrieve story information accurately. Recently, story-based video analysis techniques using the social network concept have emerged for story information retrieval. These approaches represent a story through the relationships between characters in a movie, but they have several problems. First, they do not express the dynamic changes in relationships between characters as the story develops. Second, they miss deeper information, such as the emotions that indicate the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution.
Third, they do not take account of the events and background that contribute to the story. Accordingly, this paper reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks. We also propose the necessary elements, namely character, background, and events, based on the narrative structures introduced in the literature. We extract characters' emotional words from the script of the movie Pretty Woman using the hierarchical attributes of WordNet, an extensive English thesaurus that encodes relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize a character's emotional pattern over time. Second, a character's inner nature must be determined in order to model a character arc that depicts the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links). Additionally, we propose a method that traces a character's inner nature by following indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and events take place is an important element of the story. We use the movie script to extract the significant spatial backgrounds and propose a scene map describing spatial arrangements and distances in the movie; important places where main characters first meet or stay for long periods can be extracted from this scene map. Considering these three elements (character, event, and background), we extract a variety of story-related information and evaluate the performance of the proposed method.
We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movements, and the conflicts and resolutions in the story.
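The character-network indices mentioned above (degree, in-degree, out-degree) can be computed from dialogue exchanges as follows. This is a minimal sketch with illustrative character names; the paper's network is built from the full script, not a hand-made list.

```python
from collections import defaultdict

def character_degrees(dialogues):
    """Build a directed character network from (speaker, listener) dialogue
    pairs and report in-degree, out-degree, and total degree per character,
    where each line of dialogue contributes one unit."""
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    for speaker, listener in dialogues:
        out_deg[speaker] += 1      # speaking = outgoing link
        in_deg[listener] += 1      # being addressed = incoming link
    chars = set(out_deg) | set(in_deg)
    return {c: {"in": in_deg[c], "out": out_deg[c],
                "degree": in_deg[c] + out_deg[c]} for c in chars}
```

Tracing these indices scene by scene, rather than once over the whole movie, is what lets the method above follow a character's inner nature as the story progresses.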
Participation Level in Online Knowledge Sharing: Behavioral Approach on Wikipedia
Park, Hyun Jung ; Lee, Hong Joo ; Kim, Jong Woo ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 97~121
DOI : 10.13088/jiis.2013.19.4.097
With the growing importance of knowledge for sustainable competitive advantage and innovation in a volatile environment, much research on knowledge sharing has been conducted. However, previous research has mostly relied on questionnaire surveys, which carry the inherent perceptual errors of respondents. The current research derives the relationships between primary participant behaviors and the level of participation in knowledge sharing directly from online user behaviors on Wikipedia, a representative community for online knowledge collaboration. Without users' participation in knowledge sharing, knowledge collaboration for creating knowledge cannot succeed. Moreover, the editing patterns of Wikipedia users are diverse, resulting in different revisiting periods for the same number of edits and thus varying amounts of shared knowledge. We therefore examine the participation level of knowledge sharing from two different angles: the number of edits and the revisiting period. The behavioral dimensions affecting the level of participation in knowledge sharing include article talk for public discussion, user talk for private messaging, and community registration, all of which are observable on the Wiki platform. Public discussion takes place on article talk pages arranged for exchanging ideas about each article's topic. An article talk page is often divided into several sections, each mainly addressing a specific type of issue raised during the article development process. Over the course of discussion, everything from diverse opinions about relatively trivial matters, such as what text, links, or images should be added or removed and how they should be restructured, to profound professional insights is shared, negotiated, and improved. Wikipedia also provides personal user talk pages as a private messaging tool.
On these pages, diverse personal messages are exchanged, such as casual greetings, stories about activities on Wikipedia, and the ordinary affairs of life. Anyone who wants to communicate with another person visits that person's user talk page and leaves a message. Wikipedia articles are assessed according to seven quality grades, of which the featured article level is the highest. The dataset includes participants' behavioral data for 2,978 articles that reached the featured article level, with the articles' editing histories, their article talk histories, and user talk histories extracted from the user talk pages associated with each article. The period of analysis runs from the initiation of each article until its promotion to the featured article level. The number of edits represents the total number of contributions to the editing of an article, and the revisiting period is the time difference between a user's first and last edits. First, the participation levels of the user categories classified by behavioral dimension were analyzed and compared. Robust regressions were then conducted on the relationships between independent variables reflecting the degree of each behavioral characteristic and the dependent variable representing the participation level. In particular, by adopting a motivational theory suited to the online environment in setting up the research hypotheses, this work proposes a theoretical framework for the participation level of online knowledge sharing. The work yields the following practical behavioral results, in addition to some theoretical implications. First, both public discussion and private messaging positively affect the participation level in knowledge sharing. Second, public discussion exerts greater influence than private messaging on the participation level.
Third, a synergy effect of public discussion and private messaging on the number of edits was found, whereas a fairly weak negative interaction effect of the two on the revisiting period was observed. Fourth, community registration has a significant impact on the revisiting period but not on the number of edits. Fifth, regarding the relations generated through private messaging, the frequency or depth of a relation proves more critical than its scope for the participation level.
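The two participation measures defined above, number of edits and revisiting period (time between a user's first and last edits), can be computed per user from an article's edit history as follows. The record format is an assumption for illustration; the study extracts these from Wikipedia revision histories.

```python
from datetime import datetime

def participation_metrics(edit_history):
    """Per-user participation level from an edit history of (user, timestamp)
    pairs: number of edits and revisiting period in days between the user's
    first and last edits. Timestamps are ISO-format date strings."""
    per_user = {}
    for user, ts in edit_history:
        t = datetime.fromisoformat(ts)
        if user not in per_user:
            per_user[user] = {"edits": 0, "first": t, "last": t}
        rec = per_user[user]
        rec["edits"] += 1
        rec["first"] = min(rec["first"], t)
        rec["last"] = max(rec["last"], t)
    return {u: {"edits": r["edits"],
                "revisit_days": (r["last"] - r["first"]).days}
            for u, r in per_user.items()}
```

Measuring both dimensions is what lets the study separate users who make many edits in a burst from those who keep returning over a long period.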
Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone
Ha, Eu Tteum ; Kim, Jeongmin ; Ryu, Kwang Ryel ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 123~132
DOI : 10.13088/jiis.2013.19.4.123
As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there has been much research on using these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare uses such as support for the elderly, measurement of calorie consumption, and analysis of lifestyles and exercise patterns. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, because it is hard to distinguish subtly different activities from limited information; the difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier distinguishing ten different activities can be built using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree: at the root, the set of all classes is split into two subsets of classes by a binary classifier; at each child node, a subset of classes is again split into two smaller subsets by another binary classifier; continuing in this way yields a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that makes multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than others; however, the best tree can hardly be identified without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can handle a multi-class problem with high accuracy. The ten activity classes we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data was collected at 0.1-second intervals for 2 minutes per activity from 5 volunteers. Among the 5,900 data points collected for each activity (the data for the first 2 seconds are discarded because they lack time window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. By comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
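The feature extraction described above, vector magnitude per sample plus maximum, minimum, and standard deviation of magnitude over the trailing 2-second window, can be sketched as follows. With sampling at 0.1-second intervals, a 2-second window is 20 samples; the exact feature set here mirrors the description in the abstract, not the authors' code.

```python
import math

def window_features(samples, window=20):
    """Sliding-window features over accelerometer samples given as (x, y, z)
    tuples at 10 Hz (window=20 covers 2 seconds): current vector magnitude
    plus max, min, and standard deviation of magnitude within the window.
    The first `window` samples produce no feature vector."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    feats = []
    for i in range(window, len(mags)):
        w = mags[i - window:i]
        mean = sum(w) / window
        std = math.sqrt(sum((m - mean) ** 2 for m in w) / window)
        feats.append((mags[i], max(w), min(w), std))
    return feats
```

Discarding the first window of each recording is why 100 of the 6,000 raw samples per activity drop out, consistent with the 5,900 figure above.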
A Study on the Differences of Information Diffusion Based on the Type of Media and Information
Lee, Sang-Gun ; Kim, Jin-Hwa ; Baek, Heon ; Lee, Eui-Bang ;
Journal of Intelligence and Information Systems , volume 19, issue 4, 2013, Pages 133~146
DOI : 10.13088/jiis.2013.19.4.133
While internet use is now routine, users receive and share information through a variety of media. The internet has diversified information delivery from traditional one-way media, such as newspapers, TV, and radio, into two-way media. In contrast to traditional media, blogs enable individuals to upload and share news directly, so the speed of information diffusion can be expected to differ from that of news media, which convey information unilaterally. This study therefore focused on the difference between online news and social media blogs. Moreover, the speed of information diffusion varies because information closely related to a person boosts communication between individuals. We believe that users' standards of evaluation change with the type of information, and that the speed of information diffusion changes with the level of proximity. The purpose of this study is therefore to examine differences in information diffusion across media types, and then to segment information and examine how diffusion differs across information types. This study used the Bass diffusion model, which has been widely applied because it explains market diffusion through an innovation effect and an imitation effect and has higher explanatory power than alternative models; it has also been applied extensively in other studies of information diffusion. The innovation effect measures early-stage impact, while the imitation effect measures the impact of word of mouth at a later stage. According to Mahajan et al. (2000), the innovation effect is driven by usefulness and ease of use, while the imitation effect is driven by subjective norms and word of mouth. According to Lee et al. (2011), the innovation effect is driven by mass communication, and according to Moore and Benbasat (1996), by relative advantage: the imitation effect arises from within-group influence, whereas the innovation effect arises from the innovativeness of the product or service. Our study therefore compared online news and social media blogs to examine the differences between media. We also chose different types of information, including entertainment-related information (Psy's "Gentleman"), current affairs news (the earthquake in Sichuan, China), and product-related information (the Galaxy S4), in order to examine variations in information diffusion. We considered that users' information proximity varies with the type of information, and chose these three types, which differ in proximity from the users' standpoint, to examine the flow of information diffusion. The first conclusion of this study is that different media have similar effects on information diffusion, even when the types of information provider differ; diffusion was distinguished only by differences in information proximity. Second, information diffusion differs by type of information. From the users' standpoint, product- and entertainment-related information shows a high imitation effect because of word of mouth; for current affairs news, on the other hand, the innovation effect dominates the imitation effect. From these results, the changing flow of information diffusion can be examined and put to practical use. This study has limitations, and those limitations suggest opportunities for future research: the small sample size makes it difficult to generalize the observed differences in information diffusion by media type and proximity.
Therefore, if further studies increase the sample size and media diversity, the differences in information diffusion by media type and information proximity could be understood in more detail.
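For reference, the innovation and imitation effects described above are the p and q coefficients of the closed-form Bass model. The sketch below computes the cumulative adoption curve; the parameter values are illustrative defaults, not estimates from the paper.

```python
import math

def bass_cumulative(t, p, q, m):
    """Cumulative adopters N(t) under the Bass diffusion model.

    p: innovation coefficient (early, media-driven adoption)
    q: imitation coefficient (later, word-of-mouth adoption)
    m: market potential (total eventual adopters)
    """
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Illustrative parameters only (commonly cited ballpark values, not the paper's):
curve = [bass_cumulative(t, p=0.03, q=0.38, m=1.0) for t in range(21)]
```

A high q relative to p produces the S-shaped curve associated with word-of-mouth-driven diffusion, while a high p front-loads adoption, which matches the contrast the study draws between information types.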
Image Watermarking for Copyright Protection of Images on Shopping Mall
Bae, Kyoung-Yul ;
Journal of Intelligence and Information Systems, volume 19, issue 4, 2013, Pages 147~157
DOI : 10.13088/jiis.2013.19.4.147
With the advent of a digital environment that can be accessed anytime and anywhere thanks to high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment is giving rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works is controversial. In a 2001 decision, the Supreme Court held that advertising pictures of ham products that merely reproduce the appearance of the object to convey product information are not creative expression; nevertheless, the photographer's losses were recognized, and damages were estimated from the typical cost of the advertising photo shoot. According to a 2003 Seoul District Court precedent, if the photographer's personality and creativity appear in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must not simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. The product images of online shopping malls have a very distinctive configuration, unlike general pictures such as portraits and landscape photos, so general image watermarking techniques cannot satisfy their watermarking requirements.
Because the background of product images commonly used in shopping malls is white, black, or a gray-scale gradient, there is little space in which to embed a watermark, and these areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technique suited to them is proposed. The proposed technique divides a product image into smaller blocks, transforms each block by the DCT (Discrete Cosine Transform), and then inserts the watermark information by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients near block boundaries finely and the coefficients in the center of the block coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the security of the algorithm, the blocks in which the watermark is embedded are selected randomly, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of a shopping mall image watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked image is of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. For a 40% change in size and a 40-degree rotation, the BER is also 0. In general, shopping malls use compressed images with a QF higher than 90; because a pirated image is replicated from the original, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds.
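The PSNR figures quoted above follow the standard definition for 8-bit images; a minimal sketch of that metric (not taken from the paper's code) is:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape.

    peak is the maximum possible pixel value (255 for 8-bit images).
    Higher PSNR means the watermarked image is closer to the original.
    """
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of one gray level on every pixel (MSE = 1) gives 20*log10(255) ~ 48.13 dB, the upper end of the range the abstract reports.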
However, future studies should be carried out to enhance the robustness of the proposed algorithm, because some robustness is lost after the mask process.
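The core embedding step, quantizing a DCT coefficient of a block to carry a watermark bit, can be sketched as follows. This is a deliberate simplification of the scheme described above: it quantizes a single mid-frequency coefficient with a fixed step and omits the paper's weighted mask, random block selection, and turbo coding. The step size and coefficient position are illustrative choices.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: D @ block @ D.T is the 2-D DCT."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2 / n)

D = dct_matrix()

def embed_bit(block, bit, step=12.0, pos=(3, 2)):
    """Embed one bit into an 8x8 pixel block via coefficient quantization.

    The chosen DCT coefficient is snapped to a multiple of `step` whose
    quantizer-index parity encodes the bit (even -> 0, odd -> 1).
    """
    coeffs = D @ block @ D.T                      # forward 2-D DCT
    q = int(round(coeffs[pos] / step))
    if q % 2 != bit:                              # adjust parity toward the value
        q += 1 if coeffs[pos] >= q * step else -1
    coeffs[pos] = q * step
    return D.T @ coeffs @ D                       # inverse DCT, back to pixels

def extract_bit(block, step=12.0, pos=(3, 2)):
    """Recover the embedded bit from a (possibly attacked) block."""
    coeffs = D @ block @ D.T
    return int(round(coeffs[pos] / step)) % 2
```

A larger `step` makes the bit survive stronger JPEG quantization at the cost of visible distortion; the weighted mask in the paper addresses exactly this trade-off by varying the effective step across each block.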