- Volume 10 Issue 2
The proposed model is designed to minimize the loss of information in incomplete data, including data with missing values. The first step transforms the training data to compensate for the lost information using a data extension technique: in this conversion, each attribute value is expressed in one-hot form with either binary or probability values. The converted data are then fed into the deep learning model, in which the number of input entries is not fixed but depends on the cardinality of each attribute. The entry values of each attribute are assigned to their respective input nodes, and learning proceeds. This differs from existing learning models in its unusual structure, in which a single attribute value is distributed across multiple nodes of the input layer. To evaluate the learning performance of the proposed model, various experiments are performed on missing data, and the results show that the model is superior in terms of performance. The proposed model should be useful as an algorithm for minimizing information loss in ubiquitous environments.
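The extension step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes that a known categorical value becomes a binary one-hot vector, while a missing value (written `?`) is filled with the attribute's observed relative frequencies, so every record contributes a full probability vector per attribute instead of being discarded. The function name `extend_attribute` is hypothetical.

```python
def extend_attribute(values, categories):
    """One-hot encode one attribute column, replacing '?' with
    the probabilities estimated from the observed values."""
    observed = [v for v in values if v != "?"]
    probs = [observed.count(c) / len(observed) for c in categories]
    rows = []
    for v in values:
        if v == "?":
            rows.append(list(probs))  # probability expression for a missing value
        else:
            # binary expression: 1.0 at the matching category, 0.0 elsewhere
            rows.append([1.0 if c == v else 0.0 for c in categories])
    return rows

# Example: attribute 'color' with one missing entry
encoded = extend_attribute(["red", "blue", "?", "red"], ["red", "blue"])
# encoded[2] carries P(red)=2/3, P(blue)=1/3 rather than a discarded record
```

Each resulting row has as many entries as the attribute has categories, which is why the number of input nodes in the model varies with attribute cardinality.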
Deep learning model; Extended data expression; Incomplete data; Attribute value; EBP