REFERENCE LINKING PLATFORM OF KOREA S&T JOURNALS
Journal of Intelligence and Information Systems
Journal Basic Information
Korea Intelligent Information Systems Society
Volume & Issues
Volume 18, Issue 4 - Dec 2012
Volume 18, Issue 3 - Sep 2012
Volume 18, Issue 2 - Jun 2012
Volume 18, Issue 1 - Mar 2012
Mobile Device and Virtual Storage-Based Approach to Automatically and Pervasively Acquire Knowledge in Dialogues
Yoo, Kee-Dong
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 1~17
The smartphone, one of the most widely used mobile devices today, can be applied very effectively to capture knowledge on the spot when combined with the pervasive functionality of cloud computing. The knowledge-capturing process can also be automated if the topic of the knowledge is identified automatically. This paper therefore suggests an interdisciplinary approach that automatically acquires knowledge on the spot by combining text mining-based topic identification with cloud computing-based smartphones. The smartphone serves not only as a recorder that captures the knowledge possessor's dialogue, which plays the role of the knowledge source, but also as a sensor that collects the possessor's context data characterizing the specific situation surrounding him or her. The support vector machine, a well-known and high-performing text mining algorithm, is applied to extract the topic of the knowledge. By relating the topic to the context data, a business rule can be formulated, and by aggregating the rule, the topic, the context data, and the dictated dialogue, a set of knowledge is acquired automatically.
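The topic-identification step described in the abstract can be sketched as a small SVM text classifier. The topic labels, training sentences, and pipeline below are illustrative assumptions, not the paper's actual data or model.

```python
# Sketch of SVM-based topic identification for a dictated dialogue.
# Topics, training texts, and pipeline settings are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = [
    "the quarterly budget must be approved by finance",
    "allocate funds for the marketing budget next quarter",
    "the server deployment failed during the nightly build",
    "restart the build pipeline and redeploy the service",
]
train_topics = ["finance", "finance", "operations", "operations"]

classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_texts, train_topics)

# A dialogue dictated on the smartphone is then assigned a topic label:
dialogue = "we should increase the budget for advertising"
topic = classifier.predict([dialogue])[0]
print(topic)
```

Once the topic is known, it can be stored together with the context data and the dictated text, as the abstract describes.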
The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition
Jung, Hyun-Chul; Kim, Nam-Jin; Choi, Lee-Kwon
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 19~28
After the Internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibition venues such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between exhibits and audience by analyzing usage patterns. To provide multimodal interaction services to an exhibition audience, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used today. GPS can obtain the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS tracks location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, indoor location tracking is being studied using very short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN services. These technologies have shortcomings, however: the audience must carry an additional sensor device, and deployment becomes difficult and expensive as the density of the target area increases. In addition, a typical exhibition environment contains many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods based on these older technologies cannot provide a natural service to users. Moreover, because such systems use sensor-based recognition, every user must be equipped with a device, which limits the number of users who can use the system simultaneously.
To make up for these shortcomings, this study suggests a technology that obtains the exact location information of users through location mapping using Wi-Fi and the 3D cameras of smartphones. We applied the signal amplitude of wireless LAN access points to develop a low-cost indoor location tracking system. An AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself can be used directly as part of the tracking system. We used the Microsoft Kinect sensor as the 3D camera. Kinect is equipped with functions that discriminate depth and human information inside the shooting area, so it is appropriate for extracting the user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we eliminate the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell, obtains exact user information, and captures the status and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination within an area through discrimination based on body information, and establishes the foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
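The Wi-Fi cell-ID step above can be sketched very simply: the visitor's coarse cell is the one served by the access point with the strongest received signal. The AP names, cell assignments, and RSSI values below are invented for illustration.

```python
# Minimal sketch of cell-ID location from Wi-Fi signal strength.
# AP-to-cell mapping and RSSI readings are hypothetical examples.
AP_TO_CELL = {"ap-lobby": "cell-1", "ap-gallery": "cell-2", "ap-hall": "cell-3"}

def locate(rssi_readings):
    """Return the cell of the strongest AP; RSSI in dBm (closer to 0 = stronger)."""
    strongest_ap = max(rssi_readings, key=rssi_readings.get)
    return AP_TO_CELL[strongest_ap]

# Example scan reported by a visitor's smartphone:
scan = {"ap-lobby": -71, "ap-gallery": -48, "ap-hall": -83}
print(locate(scan))  # the visitor is in cell-2
```

The 3D camera assigned to the returned cell would then refine this coarse estimate with body and depth information, as the abstract describes.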
Ensemble Learning with Support Vector Machines for Bond Rating
Kim, Myoung-Jong
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 29~45
Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the support vector machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for solving binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can reduce computation time for multi-class problems, but they can deteriorate classification performance. Third, a key difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, and in this way it can reinforce the training of misclassified observations in the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multi-class prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; cross-validated folds were thus tested independently for each algorithm. Through these steps, we obtained results for each classifier on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) is higher than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) is also higher than AdaBoost (24.65%) and SVM (15.42%) in geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of the classifiers over the 30 folds differed significantly; the results indicate that MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
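The geometric mean-based accuracy that distinguishes MGM-Boost from plain accuracy can be sketched as the geometric mean of per-class recalls, which stays low unless every class, including minority rating classes, is predicted well. The toy labels below are illustrative, not the paper's bond data.

```python
# Sketch of geometric mean-based accuracy: the geometric mean of
# per-class recalls. Toy labels are made-up examples.
from collections import Counter

def geometric_mean_accuracy(y_true, y_pred):
    hits = Counter()            # correct predictions per class
    totals = Counter(y_true)    # instances per class
    for t, p in zip(y_true, y_pred):
        if t == p:
            hits[t] += 1
    recalls = [hits[c] / totals[c] for c in totals]
    gm = 1.0
    for r in recalls:
        gm *= r
    return gm ** (1.0 / len(recalls))

# A classifier that nails the majority class but misses half the minority:
y_true = ["A", "A", "A", "A", "B", "B"]
y_pred = ["A", "A", "A", "A", "B", "A"]
print(round(geometric_mean_accuracy(y_true, y_pred), 4))  # sqrt(1.0 * 0.5) = 0.7071
```

Plain accuracy on this example is 5/6 ≈ 0.83, while the geometric mean drops to 0.71, which is why a boosting scheme optimizing it pays more attention to minority classes.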
Video Scene Detection using Shot Clustering based on Visual Features
Shin, Dong-Wook; Kim, Tae-Hwan; Choi, Joong-Min
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 47~60
Video data is unstructured and complex in form. As the importance of efficiently managing and retrieving video data increases, studies on video parsing based on the visual features of the video content are being conducted to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined by physical boundaries does not consider the semantic associations within video data. Recently, studies that use clustering methods to organize semantically associated video shots into video scenes, defined by semantic boundaries, have actively progressed. Previous studies on video scene detection try to detect scenes using clustering algorithms based on similarity measures between shots that depend mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades, and wipes is difficult, because the color features of video data contain noise and change abruptly when an unexpected object intervenes. To solve these problems, this paper proposes the Scene Detector using Color histogram, corner Edge, and Object color histogram (SDCEO), which clusters similar shots organizing the same event based on visual features including the color histogram, the corner edge, and the object color histogram. SDCEO is noteworthy in that it uses the edge feature together with the color feature, and as a result it effectively detects gradual as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector; the Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step.
In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into a shot by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shots by comparing the corner edge features of the last frame of the previous shot and the first frame of the next shot. In the Key-frame Extraction step, SDCEO compares each frame with all other frames in the same shot, measures similarity using the histogram Euclidean distance, and selects the frame most similar to all the others as the key frame. The Video Scene Detector then clusters associated shots organizing the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram. SDCEO organizes the final video scenes by repeated clustering until the similarity distance between shots falls below a threshold h. We constructed a prototype of SDCEO and carried out experiments with manually constructed baseline data; the results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
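The color-histogram comparison in the Shot Bound Identifier can be sketched as follows: consecutive frames are kept in the same shot while the Euclidean distance between their quantized color histograms stays under a threshold, and a jump starts a new shot. The 4-bin histograms and the threshold value are illustrative assumptions, not the paper's parameters.

```python
# Sketch of shot segmentation from color-histogram distances.
# Histogram bins and the threshold are made-up examples.
import math

def histogram_distance(h1, h2):
    """Euclidean distance between two normalized color histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def segment_shots(frame_histograms, threshold=0.3):
    """Group consecutive frame indices; start a new shot on a distance jump."""
    shots, current = [], [0]
    for i in range(1, len(frame_histograms)):
        if histogram_distance(frame_histograms[i - 1], frame_histograms[i]) > threshold:
            shots.append(current)
            current = []
        current.append(i)
    shots.append(current)
    return shots

frames = [
    [0.7, 0.1, 0.1, 0.1],    # shot 1
    [0.68, 0.12, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.7],    # abrupt color change -> shot 2
    [0.1, 0.12, 0.1, 0.68],
]
print(segment_shots(frames))  # [[0, 1], [2, 3]]
```

SDCEO's corner-edge check would then confirm or merge these candidate boundaries, which is what lets it catch gradual transitions that a pure color threshold misses.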
Ontology-Based Process-Oriented Knowledge Map Enabling Referential Navigation between Knowledge
Yoo, Kee-Dong
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 61~83
A knowledge map describes the network of related knowledge in the form of a diagram, and therefore underpins the structure of knowledge categorizing and archiving by defining the relationships of referential navigation between pieces of knowledge. Referential navigation means the cross-referencing exhibited when a piece of knowledge is utilized by a user: to understand its contents, a user usually requires additional information or knowledge related to it in a cause-and-effect relation. This relation expands as effective connections between knowledge increase, finally forming a network of knowledge. A network display of knowledge, using nodes and links to arrange and represent the relationships between concepts, can express a more complex knowledge structure than a hierarchical display, and it helps a user infer through the links shown on the network. For this reason, building a knowledge map based on ontology technology has been emphasized as a way to describe knowledge and its relationships formally and objectively, and quite a few studies have been proposed to fulfill this need. However, most studies applying ontologies to knowledge maps have focused only on formally expressing knowledge and its relationships in order to promote knowledge reuse. Although many types of ontology-based knowledge maps have been proposed, no study has tried to design and implement a knowledge map that enables referential navigation. This paper addresses a methodology for building an ontology-based knowledge map enabling referential navigation between knowledge.
The ontology-based knowledge map resulting from the proposed methodology can not only express referential navigation between knowledge but also infer additional relationships based on the referential ones. The most notable benefits of applying ontology technology to the knowledge map include: formal expression of knowledge and its relationships with other knowledge, automatic identification of the knowledge network through self-inference on the referential relationships, and automatic expansion of the knowledge base designed to categorize and store knowledge according to the network between knowledge. To enable referential navigation between the knowledge in the map, and thus form the map as a network, the ontology must describe knowledge in relation to processes and tasks. A process is composed of component tasks, and a task is activated after its required knowledge is input. Since the cause-and-effect relation between knowledge is inherently determined by the sequence of tasks, the referential relationship between knowledge can be implemented indirectly if each piece of knowledge is modeled as an input or output of a task. To describe knowledge with respect to its related process and task, Protege-OWL, an editor that enables users to build ontologies for the Semantic Web, is used. An OWL ontology-based knowledge map includes descriptions of classes (process, task, and knowledge), properties (relationships between process and task, and between task and knowledge), and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e., facts not literally present in the ontology but entailed by the semantics. A knowledge network can therefore be formulated automatically from the defined relationships, enabling referential navigation between knowledge.
To verify the validity of the proposed concepts, two real business process-oriented knowledge maps are exemplified: the knowledge maps of the 'Business Trip Application' and 'Purchase Management' processes. The performance of the implemented ontology-based knowledge map was examined by applying DL-Query, provided as a plug-in module of Protege-OWL. Two kinds of queries were tested: one checking whether the knowledge is networked with respect to the referential relations, and one checking whether the ontology-based knowledge network can infer further facts not literally described. The test results show that referential navigation between knowledge was correctly realized and that the additional inference was performed accurately.
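The task-based modeling described above can be sketched in plain Python: each task lists its input and output knowledge, a referential link is derived whenever a task turns one piece of knowledge into another, and transitive traversal yields facts "not literally present", analogous to what the OWL semantics entails. The task names and knowledge items are hypothetical; this graph code only stands in for the Protege-OWL model.

```python
# Sketch of inferring a knowledge network from task inputs/outputs.
# Tasks and knowledge items are hypothetical examples, not the paper's model.
tasks = [  # (task, inputs, outputs) in process order
    ("fill application", ["trip policy"], ["trip application"]),
    ("approve application", ["trip application"], ["approved trip"]),
    ("book travel", ["approved trip"], ["itinerary"]),
]

def referential_links(tasks):
    """Link k_in -> k_out when a task consumes k_in and produces k_out."""
    links = set()
    for _, inputs, outputs in tasks:
        for k_in in inputs:
            for k_out in outputs:
                links.add((k_in, k_out))
    return links

def reachable(links, start):
    """All knowledge referentially reachable from `start` (transitive closure)."""
    seen, stack = set(), [start]
    while stack:
        k = stack.pop()
        for a, b in links:
            if a == k and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

links = referential_links(tasks)
print(sorted(reachable(links, "trip policy")))
```

The chain from 'trip policy' through 'trip application' and 'approved trip' to 'itinerary' is exactly the kind of derived fact a DL-Query over the OWL ontology would return.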
Does Online Social Network Contribute to WOM Effect on Product Sales?
Lee, Ju-Yoon; Son, In-Soo; Lee, Dong-Won
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 85~105
In recent years, IT advancement has brought about new Internet communication environments such as online social network services, in which people are connected in a global network without temporal or spatial limitation. The popular use of online social networks helps people share their experiences with, and preferences for, specific products and services, and thus holds large potential to significantly affect firms' business performance through word of mouth (WOM). This study examines the role of online social networks in raising the WOM effect in the movie industry by comparing it with the similar role of Internet portals, another major online communication channel. Analyzing 109 movies and data from both Twitter and Naver Movie, we found that a significant WOM effect exists simultaneously in both. However, we also found that the online viral effect differs depending on the popularity of the movie. In the hit movie group, before a movie's release the WOM effect occurs only on Twitter, while after the release it arises on both Twitter and Naver Movie at the same time. In the less popular (or niche) movie group, the WOM effect occurs on both Twitter and Naver Movie only before the release. Our findings not only deepen theoretical insight into the different roles of the two online communication channels in provoking the WOM effect for entertainment products, but also give practitioners an incentive to utilize SNS as a strategic marketing platform to enhance their brand reputations.
The Viral Effect of Online Social Network on New Products Promotion: Investigating Information Diffusion on Twitter
Kim, Hyung-Jin; Son, In-Soo; Lee, Dong-Won
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 107~130
On Twitter, a user can post a message of up to 140 characters on his or her account, and can also repost messages from users he or she follows. A message posted by a user can in turn be seen and reposted by that user's followers, which is called a retweet (RT). While some messages spread widely, others receive few or no RTs. What factors cause this variance in the number of RTs an original message receives? How can a message become influential in online social networks? To answer these questions, we focused on information vividness, message characteristics, and originator characteristics. As a managerial implication, we expect the findings of this paper to give corporations helpful insight into the word-of-mouth (WOM) effect for efficient and effective advertising and communication when they send messages about new products or services through social network services. As an academic implication, we identify the effect of a message's content on WOM, which few social network studies have addressed.
The Effect of the Personalized Settings for CF-Based Recommender Systems
Im, Il; Kim, Byung-Ho
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 131~141
In this paper, we propose a new method for collaborative filtering (CF)-based recommender systems. Traditional CF-based recommendation algorithms apply constant settings, such as the reference group (neighborhood) size and the significance level, to all users. We develop a new method that identifies optimal personalized settings for each user and applies them when generating recommendations for that user. The personalized parameters are identified through iterative simulations with 'training' and 'verification' data sets. The method is compared with traditional constant-settings methods using Netflix data. The results show that the new method outperforms ordinary CF. Implications and future research directions are also discussed.
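The personalization idea above can be sketched as follows: instead of one global neighborhood size k, each user's k is chosen by simulating predictions on a held-out 'verification' rating of that user. The tiny rating matrix, the similarity measure, and the candidate k values are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of per-user neighborhood-size selection for user-based CF.
# Ratings, similarity, and candidate k values are made-up examples.
import math

ratings = {  # user -> {item: rating}
    "u1": {"i1": 5, "i2": 4, "i3": 1},
    "u2": {"i1": 5, "i2": 5, "i3": 1},
    "u3": {"i1": 1, "i2": 1, "i3": 5},
    "u4": {"i1": 5, "i2": 4, "i3": 2},
}

def similarity(a, b):
    """Negative Euclidean distance over co-rated items (higher = more similar)."""
    common = set(ratings[a]) & set(ratings[b])
    return -math.sqrt(sum((ratings[a][i] - ratings[b][i]) ** 2 for i in common))

def predict(user, item, k):
    """Mean rating of the k most similar users who rated the item."""
    neighbors = sorted((u for u in ratings if u != user and item in ratings[u]),
                       key=lambda u: similarity(user, u), reverse=True)[:k]
    return sum(ratings[u][item] for u in neighbors) / len(neighbors)

def best_k(user, held_out_item, candidates=(1, 2, 3)):
    """Pick the k that best reproduces the user's held-out 'verification' rating."""
    actual = ratings[user][held_out_item]
    return min(candidates, key=lambda k: abs(predict(user, held_out_item, k) - actual))

print(best_k("u1", "i1"))
```

A production version would of course hold out several ratings per user and average the simulation error, but the loop structure, tuning k per user rather than globally, is the point.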
Stock-Index Invest Model Using News Big Data Opinion Mining
Kim, Yoo-Sin; Kim, Nam-Gyu; Jeong, Seung-Ryul
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 143~156
People readily believe that news and the stock index are closely related. They think that obtaining news before anyone else can help them forecast stock prices and enjoy great profit, or perhaps capture an investment opportunity. However, it is no easy feat to determine to what extent the two are related, to make investment decisions based on news, or to find out whether such investment information is valid. If the significance of news and its impact on the stock market are analyzed, it becomes possible to extract information that can assist investment decisions. The reality, however, is that the world is inundated with a massive wave of news in real time, and news is not patterned text. This study suggests a stock-index investment model based on opinion mining of 'news big data' that systematically collects, categorizes, and analyzes news and creates investment information. To verify the validity of the model, the relationship between the results of news opinion mining and the stock index was empirically analyzed using statistics. The steps of the mining process that converts news into information for investment decision making are as follows. First, news supplied in real time by a news provider is indexed: not only the contents but also information such as the medium, time, and news type are collected, classified, and reworked into variables from which investment decisions can be inferred. Next, the text of each article is separated into morphemes to derive words whose polarity can be judged, and each word is tagged with positive or negative polarity by comparison with a sentiment dictionary. Third, the positive or negative polarity of each article is judged using the indexed classification information and a scoring rule, and the final investment decision information is derived according to daily scoring criteria.
For this study, the KOSPI index and its fluctuation range were collected for the 63 days the stock market was open during the three months from July to September 2011 on the Korea Exchange, and news data were collected by parsing 766 articles from economic news medium M carried in the stock information > news > main news section of the portal site Naver.com. During those three months the price index rose on 33 days and fell on 30 days; the news data included 197 articles published before the market opened, 385 during the session, and 184 after the market closed. Mining the collected news and comparing it with stock prices showed that the positive/negative opinion of news content had a significant relation with the stock price, and that changes in the index were better explained when news opinion was derived as a positive/negative ratio rather than as a simplified binary positive or negative judgment. To check whether news affected, or at least preceded, stock price fluctuations, changes in the stock price were also compared only with news published before the market opened; this relation was likewise verified to be statistically significant. In addition, because the news covered various types of information, such as social, economic, and overseas news, corporate earnings, industry conditions, market outlook, and market conditions, we expected that the influence on the stock market, or the significance of the relation, would differ by news type. Each type of news was therefore compared with stock price fluctuations, and the results showed that market conditions, outlook, and overseas news were the most useful for explaining stock price fluctuations.
By contrast, news about individual companies was not statistically significant, though its opinion mining value tended to move opposite to the stock price; the reason is thought to be promotional and planned news released to keep stock prices from falling. Finally, multiple regression analysis and logistic regression analysis were carried out to derive an investment decision function from the relation between news polarity and stock price. The regression equation using the variables of market conditions, outlook, and overseas news published before the market opened was statistically significant, and the classification accuracy of the logistic regression was 70.0% for rises, 78.8% for falls, and 74.6% on average. This study first analyzed the relation between news and stock prices by quantifying the sentiment of unstructured news content using opinion mining, one of the big data analysis techniques, and then proposed and verified a smart investment decision-making model that systematically carries out opinion mining and derives and supports investment information. This shows that news can be used as a variable to predict the stock price index for investment, and the model is expected to serve as a real investment support system if it is implemented and verified in the future.
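The polarity-scoring steps described in this abstract can be sketched as follows: words in each article are matched against a sentiment dictionary and the daily score is the positive ratio of matched words, mirroring the paper's preference for a ratio over a binary judgment. The tiny English dictionary and sample headlines are illustrative; real Korean text would first pass through a morpheme analyzer, which this sketch omits.

```python
# Sketch of dictionary-based news polarity scoring as a positive ratio.
# Sentiment dictionary and headlines are made-up examples.
POSITIVE = {"rise", "gain", "strong", "upbeat", "record"}
NEGATIVE = {"fall", "loss", "weak", "fear", "slump"}

def positive_ratio(articles):
    """Share of positive words among all sentiment-bearing words; 0.5 = neutral."""
    pos = neg = 0
    for text in articles:
        for word in text.lower().split():
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    return pos / (pos + neg) if pos + neg else 0.5

# Articles published before the market opens on one day:
morning_news = [
    "exports post record gain as outlook stays strong",
    "chip makers fear a slump in demand",
]
print(round(positive_ratio(morning_news), 2))  # 3 positive vs 2 negative words -> 0.6
```

A daily series of such scores is then what gets regressed against the KOSPI's rise/fall, as in the study's logistic regression step.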
A Study on the Performance Evaluation of G2B Procurement Process Innovation by Using MAS: Korea G2B KONEPS Case
Seo, Won-Jun; Lee, Dae-Cheor; Lim, Gyoo-Gun
Journal of Intelligence and Information Systems, volume 18, issue 2, 2012, Pages 157~175
It is difficult to evaluate the performance of process innovation in e-procurement, which involves large-scale and complex processes. Existing methods for measuring the effects of process innovation have mainly been quantitative statistical analyses of operational data, or qualitative surveys and interviews. These methods have limitations, however, because the performance evaluation of e-procurement process innovation should consider the interactions among participants who are involved, directly or indirectly, in the processes. This study regards the e-procurement process as a complex system and develops a simulation model based on a multi-agent system (MAS) to evaluate the effects of e-procurement process innovation. Multi-agent-based simulation allows observing the interaction patterns of objects in a virtual world through the relationships among objects and their behavioral mechanisms, and agent-based simulation is especially suitable for complex business problems. In this study we used NetLogo version 4.1.3, a MAS simulation tool developed at Northwestern University. We developed an interaction model of agents in the MAS environment, defining process agents and task agents and assigning their behavioral characteristics. The developed simulation model was applied to the G2B system of the Public Procurement Service (PPS) in Korea (KONEPS: Korea ON-line E-Procurement System) and used to evaluate the innovation effects of the G2B system. KONEPS, launched in 2002, is a successfully established, representative e-procurement system that integrates the characteristics of e-commerce into government procurement activities.
KONEPS deserves international recognition, considering its annual transaction volume of 56 billion dollars, its daily exchange of electronic documents, its user base of 121,000 suppliers and 37,000 public organizations, and its 4.5 billion dollars of cost savings. For the simulation, we decomposed the e-procurement process of KONEPS into eight sub-processes: 'process 1: product search and proposal acquisition', 'process 2: review of contract methods and item features', 'process 3: bid notice', 'process 4: registration and confirmation of qualification', 'process 5: bidding', 'process 6: screening', 'process 7: contracting', and 'process 8: invoicing and payment'. For the parameter settings of the agents' behavior, we collected data from the transactional database of PPS and additional information through a survey. The data used for the simulation were the participants (government organizations, local government organizations, and public institutions), the number of biddings per year, the number of total contracts, the number of shopping mall transactions, the ratio of contracts between bidding and shopping mall, the successful bidding ratio, and the estimated time for each process. The comparison examined the difference in time consumption between 'before the innovation (as-was)' and 'after the innovation (as-is)'. The results showed productivity improvements in all eight sub-processes: compared with the conventional method, using the G2B system decreased the 'average number of task processings' by 92.7% and the 'average time of task processing' by 95.4% across the entire process. This study also found that the innovation effect would be enhanced if the task processes related to 'contracting' were improved, and it shows the usability and possibility of MAS in evaluating and modeling process innovation.
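The as-was/as-is comparison above can be sketched in a few lines: task agents flow sequentially through the eight sub-processes, each step takes a variable amount of time, and the two scenarios differ only in the per-step base durations. The durations below are invented for illustration and are not the PPS figures or the paper's NetLogo model.

```python
# Sketch of a before/after process-time simulation over eight sub-processes.
# Step durations and variability are made-up examples.
import random

STEPS = 8  # the eight sub-processes of KONEPS

def simulate(step_minutes, n_tasks=100, seed=42):
    """Total minutes for n_tasks to pass sequentially through all steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_tasks):
        for base in step_minutes:
            total += rng.uniform(0.8, 1.2) * base  # per-agent variability
    return total

as_was = simulate([120] * STEPS)  # manual, paper-based processing
as_is = simulate([10] * STEPS)    # electronic processing via KONEPS
print(f"time saved: {100 * (1 - as_is / as_was):.1f}%")
```

A real MAS model adds interaction between agents (queues, approvals, rework) rather than independent draws, which is exactly where NetLogo-style agent behavior matters.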
A Match-Making System Considering Symmetrical Preferences of Matching Partners
Park, Yoon-Joo ;
Journal of Intelligence and Information Systems , volume 18, issue 2, 2012, Pages 177~192
This is a study of match-making systems that considers the mutual satisfaction of matching partners. Recently, recommendation systems have been applied to recommending people, such as new friends, employees, or dating partners. One prominent domain is match-making systems that recommend suitable dating partners to customers. A match-making system, however, differs from a product recommender system. First, a match-making system needs to satisfy the recommended partners as well as the customer, whereas a product recommender system only needs to satisfy the customer. Second, a match-making system needs to include as many participants from the matching pool as possible in its recommendation results, even unpopular customers. In other words, recommendations should not be concentrated on a limited number of popular people; unpopular people should also appear in someone else's matching results. In product recommender systems, it is acceptable to recommend the same popular items to many customers, since such items can easily be supplied in additional quantities. In match-making systems, however, there are only a few popular people, and they may become overburdened with too many recommendations. Also, a successful match causes a customer to drop out of the matching pool. Thus, match-making systems should provide recommendation services equally to all customers without favoring popular ones. The suggested match-making system, called Mutually Beneficial Matching (MBM), considers the reciprocal satisfaction of both the customer and the matched partner, as well as the number of customers who are excluded from the matching. A brief outline of the MBM method is as follows: First, it collects a customer's profile information, the profile of his/her preferred dating partner, and the weights he/she considers important when selecting dating partners.
Then, it calculates the customer's preference score for each potential dating partner on the basis of the difference between the partner's profile and the customer's preferences. The partner's preference score for the customer is calculated in the same way. After that, a mutual preference score is produced from these two preference values using the formula proposed in this study, which reflects the symmetry of the two preferences as well as their magnitudes. Finally, the MBM method recommends to each customer the top N partners with the highest mutual preference scores. A prototype of the suggested MBM system was implemented in Java and applied to an artificial dataset based on real survey results from major match-making companies in Korea. The results of the MBM method are compared with those of two conventional methods: Preference-Based Matching (PBM), which considers only the customer's preferences, and Arithmetic Mean-Based Matching (AMM), which considers the preferences of both the customer and the partner but does not reflect their symmetry in the matching results. We perform the comparisons in terms of the average preference of the matching partners, the average symmetry, and the number of people excluded from the matching results, varying the number of recommendations over 5, 10, 15, 20, and 25. The results show that in many cases the suggested MBM method produces average preferences and symmetries significantly higher than those of the PBM and AMM methods. Moreover, in every case, MBM excludes fewer people from the matching results than the PBM method does.
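The scoring pipeline above can be sketched in a few lines. The abstract does not reproduce the paper's exact mutual-preference formula, so the harmonic mean below is one illustrative symmetry-sensitive aggregation: unlike the arithmetic mean used by AMM, it is high only when both directed preferences are high and similar. All profiles, attributes, and weights are hypothetical.

```python
# Hypothetical sketch of MBM-style scoring; not the paper's exact formula.

def preference(profile, ideal, weights):
    """Weighted similarity (0..1) of a partner's profile to a customer's
    ideal partner profile; attributes are assumed normalised to [0, 1]."""
    total = sum(weights.values())
    score = sum(w * (1 - abs(profile[k] - ideal[k])) for k, w in weights.items())
    return score / total

def mutual_score(p_ab, p_ba):
    """Harmonic mean of the two directed preferences: rewards pairs whose
    preferences are both high AND symmetric, unlike the arithmetic mean."""
    if p_ab + p_ba == 0:
        return 0.0
    return 2 * p_ab * p_ba / (p_ab + p_ba)

# Hypothetical customers with normalised attributes.
alice = {"age": 0.4, "education": 0.8}
bob = {"age": 0.5, "education": 0.6}
alice_ideal, alice_w = {"age": 0.5, "education": 0.9}, {"age": 2, "education": 1}
bob_ideal, bob_w = {"age": 0.4, "education": 0.8}, {"age": 1, "education": 1}

p_ab = preference(bob, alice_ideal, alice_w)   # Alice's preference for Bob: 0.9
p_ba = preference(alice, bob_ideal, bob_w)     # Bob's preference for Alice: 1.0
print(round(mutual_score(p_ab, p_ba), 3))      # 0.947, vs. arithmetic mean 0.95
```

Recommending the top N partners by `mutual_score` then mirrors the final MBM step; an asymmetric pair such as (0.9, 0.1) scores only 0.18 here, although its arithmetic mean is the same as that of a balanced (0.5, 0.5) pair.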