  • Journal title : Journal of KIISE
  • Volume 42, Issue 11, 2015, pp. 1459-1466
  • Publisher : Korean Institute of Information Scientists and Engineers
  • DOI : 10.5626/JOK.2015.42.11.1459
 Title & Authors
Quantified Lockscreen: Integration of Personalized Facial Expression Detection and Mobile Lockscreen application for Emotion Mining and Quantified Self
Kim, Sung Sil; Park, Junsoo; Woo, Woontack
 
 Abstract
The lockscreen is one of the interfaces smartphone users encounter most frequently. Although users perform unlocking actions every day, lockscreens provide no benefit beyond security and authentication. In this paper, we replace the traditional lockscreen with an application that analyzes facial expressions in order to collect facial expression data and provide real-time feedback to users. To evaluate this concept, we implemented the Quantified Lockscreen application, which supports the following contributions of this paper: 1) an unobtrusive interface for collecting facial expression data and evaluating emotional patterns, 2) improved accuracy of facial expression detection through a personalized machine learning process, and 3) enhanced validity of emotion data through a bidirectional, multi-channel, multi-input methodology.
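The personalized detection described in contribution 2 can be sketched in a few lines. The paper does not publish its implementation, so the code below is only an illustration under stated assumptions: it pairs LBP texture features with an SVM classifier (in line with references 18 and 19 in the list below) and retrains on samples the user labels or confirms at unlock time; every class, function, and parameter name here is hypothetical.

# Illustrative sketch only (not from the paper): LBP features + an SVM,
# matching the LIBSVM and LBP references cited below; all names are hypothetical.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC  # scikit-learn's SVC is backed by LIBSVM

def lbp_histogram(gray_face, points=8, radius=1):
    # Uniform LBP histogram of a cropped, grayscale face image.
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns take values 0 .. points+1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

class PersonalizedExpressionClassifier:
    # Accumulates samples the user labels (e.g., confirmed at unlock time)
    # and is retrained so the model adapts to that particular user's face.
    def __init__(self):
        self.model = SVC(kernel="rbf", probability=True)
        self.features, self.labels = [], []

    def add_labeled_sample(self, gray_face, label):
        self.features.append(lbp_histogram(gray_face))
        self.labels.append(label)

    def retrain(self):
        self.model.fit(np.vstack(self.features), np.array(self.labels))

    def predict(self, gray_face):
        return self.model.predict([lbp_histogram(gray_face)])[0]

In such a scheme, a generic pre-trained model could serve as the starting point, with the per-user retraining step standing in for what the abstract calls the personalized machine learning process.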
 Keywords
personal informatics; quantified self; life logging; user interface
 Language
Korean
 References
1.
M. Mary. (2013, May 29). 2013 Internet Trends [Online]. Available: http://www.kpcb.com/blog/2013-internet-trends (downloaded 2015, May. 18)

2.
S. Berthoz and E. L. Hill, "The validity of using self-reports to assess emotion regulation abilities in adults with autism spectrum disorder," European Psychiatry, Vol. 20, No. 3, pp. 291-298, May 2005.

3.
M. Swan, "The quantified self: fundamental disruption in big data science and biological discovery," Big Data, Vol. 1, No. 2, pp. 85-99, Jun. 2013.

4.
Q. Zheng and Q. Jiwei, "Evaluating the emotion based on ontology," Proc. of the 3rd Symposium on Web Society (SWS), IEEE, pp. 32-36, 2011.

5.
T. Sharma, K. Bhanu, "Emotion estimation of physiological signals by using low power embedded system," Proc. of the Conference on Advances in Communication and Control Systems, pp. 42-45, 2013.

6.
A. Barliya, L. Omlor, M. A. Giese, A. Berthoz and T. Flash, "Expression of emotion in the kinematics of locomotion," Experimental Brain Research, Vol. 22, No. 2, pp. 159-176, 2013.

7.
M. E. Ayadi, M. S. Kamel, and F. Karray, "Survey on speech emotion recognition: Features, classification schemes, and databases," Pattern Recognition, Vol. 44, No. 3, pp. 572-587, 2011.

8.
M. Thelwall, D. Wilkinson, and S. Uppal, "Data mining emotion in social network communication: Gender differences in MySpace," Journal of the American Society for Information Science and Technology, Vol. 61, No. 1, pp. 190-199, 2010.

9.
G. R. Duncan. (2012, Jan 9), A Smart Phone That Knows You're Angry [Online]. Available: http://www.technologyreview.com/news/426560/a-smart-phone-that-knows-youre-angry/ (downloaded 2015, Feb. 11)

10.
P. Ekman and W. V. Friesen, "Facial action coding system," 1977.

11.
T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," Proc. of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 484-491, 2000.

12.
R. Gross, I. Matthews, and S. Baker, "Generic vs. person specific active appearance models," Image and Vision Computing, Vol. 23, No. 12, pp. 1080-1093, 2005.

13.
J. Hamm, C. G. Kohler, R. C. Gur and R. Verma, "Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders," Journal of Neuroscience Methods, Vol. 200, No. 2, pp. 237-256, 2011.

14.
G. Castellano, L. Kessous, G. Caridakis, "Emotion recognition through multiple modalities: face, body gesture, speech," Affect and emotion in human-computer interaction, pp. 92-103, 2008.

15.
S. D'Mello, and J. Kory, "Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies," Proc. of the 14th ACM international conference on Multimodal interaction, pp. 31-38, 2012.

16.
N. Cummins, J. Joshi, A. Dhall, V. Sethu, R. Goecke, J. Epps, "Diagnosis of depression by behavioural signals: a multimodal approach," Proc. of the 3rd ACM international workshop on Audio/visual emotion challenge, pp. 11-20, 2013.

17.
S. M. Sergio, O. C. Santos, J. G. Boticario, "Affective state detection in educational systems through mining multimodal data sources," 6th International Conference on Educational Data Mining, pp. 348-349, 2013.

18.
C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 3, Article 27, 2011.

19.
T. Ojala, M. Pietikainen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognition, Vol. 29, No. 1, pp. 51-59, 1996.