Automatic Detection of Usability Issues on Mobile Applications
Ma, Kyeong Wook; Park, Sooyong; Park, Soojin;
Given that mobile apps shorten the time users take to make purchase decisions while also allowing purchases to be cancelled easily, usability can be regarded as one of the most highly prioritized of the many quality attributes a mobile app must provide. Against that backdrop, mobile app developers have made great efforts to minimize the usability-hampering elements that degrade an app's merchantability. Most elements that hamper the convenience of using mobile apps stem from potential errors introduced when GUIs are designed. In our previous study, we proposed a technique for analyzing the usability of mobile apps using user behavior logs. In this paper, we propose a technique that detects usability-hampering elements lying dormant in a mobile app's GUI model by expressing user behavior logs as finite state models, combining the behavior models extracted from multiple users, and comparing the combined user behavior model with the expected behavior model that reflects the designer's intention. In addition, to reduce the burden of the repeated test runs developers currently perform to detect usability errors, this paper also presents a tool that automates the application of the proposed technique. The utility of the proposed technique and tool is discussed through a comparison between the GUI issue reports filed by actual open source app developers and the symptoms detected by the proposed technique.
Keywords: Mobile Application; GUI Usability; User Behavior Model; Automatic Detection of GUI Bad Symptom
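The detection technique described in the abstract can be illustrated with a minimal sketch: each user's log becomes a finite-state transition set, the per-user models are combined with transition counts, and transitions that appear in observed behavior but not in the designer's expected model are flagged as candidate usability symptoms. All function names, screen identifiers, and logs below are illustrative assumptions, not artifacts from the paper.

```python
# Illustrative sketch of the paper's approach, assuming each user log
# is a sequence of GUI screen identifiers. Names are hypothetical.
from collections import defaultdict


def behavior_model(log):
    """Express one user's behavior log as a set of screen transitions
    (the edges of a finite state model)."""
    return {(a, b) for a, b in zip(log, log[1:])}


def combined_model(logs):
    """Combine behavior models from multiple users, counting how many
    users exhibited each screen-to-screen transition."""
    counts = defaultdict(int)
    for log in logs:
        for edge in behavior_model(log):
            counts[edge] += 1
    return counts


def usability_symptoms(logs, expected_transitions):
    """Flag transitions observed in user behavior but absent from the
    designer's expected behavior model -- candidate usability issues."""
    observed = combined_model(logs)
    return {edge: n for edge, n in observed.items()
            if edge not in expected_transitions}


# Hypothetical logs: both users bounce from Cart back to Search,
# a transition the designer did not intend.
logs = [
    ["Home", "Search", "Cart", "Search", "Cart", "Checkout"],
    ["Home", "Search", "Cart", "Search"],
]
expected = {("Home", "Search"), ("Search", "Cart"), ("Cart", "Checkout")}
print(usability_symptoms(logs, expected))  # -> {('Cart', 'Search'): 2}
```

Counting how many distinct users exhibit an unexpected transition mirrors the paper's idea of merging multiple users' models before the comparison: a transition taken by many users is a stronger symptom than a one-off misstep.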
References
Y. Kim, J. Byun, S. Choi, S. Park, and S. Park, "A Usability Analysis Method of Mobile Application using User Behavior Logs," Journal of KIISE: Software and Applications, Vol.39, No.2, pp.91-98, 2012.

E. Dolstra, R. Vliegendhart, and J. Pouwelse, "Crowdsourcing GUI Tests," in Proceedings of the IEEE Sixth International Conference on Software Testing, Verification and Validation, pp.332-341, 2013.

R. Gomez, D. Caballero, and J. Sevillano, "Heuristic Evaluation on Mobile Interfaces: A New Checklist," The Scientific World Journal, pp.178-188, 2014.

M. Zen. "Metric-based evaluation of graphical user interfaces: model, method, and software support," in Proceedings of the 5th ACM SIGCHI symposium on Engineering interactive computing systems, pp.183-186, 2013.

S. Abrahao and E. Insfran, "Early Usability Evaluation in Model Driven Architecture Environments," in Proceedings of the Sixth International Conference on Quality Software, pp.287-294, 2006.

W. A. Munson and M. Gardner, "Standardizing Auditory Tests," Acoustical Society of America, Vol.22, No.5, p.675, 2005.

K. Ma and S. Park, "A Methodology of Usability Issue Detection for GUI Self-Adaptation," in Proceedings of the Korea Conference on Software Engineering, pp.411-412, 2015.

L. Mariani, F. Pastore, and M. Pezze, "Dynamic Analysis for Diagnosing Integration Faults," IEEE Transactions on Software Engineering, Vol.37, No.4, pp.486-508, 2011.

J. Nielsen, "Why You Only Need to Test with 5 Users," [Internet],

R. G. O'Brien, "A General ANOVA Method for Robust Tests of Additive Models for Variances," Journal of the American Statistical Association, Vol.74, No.13, pp.877-880, 1979.

A. M. Memon, "A comprehensive framework for testing graphical user interfaces," Ph.D. dissertation, University of Pittsburgh, 2001.