Automatic Capture, Representation, and Analysis of User Behavior
Sharon J. Laskowski, J. A. Landay, M. Lister
With the advent of the Web and the refinement of instrumentation and monitoring tools, software user interactions are being captured on a much larger scale than ever before. Automated support for the capture, representation, and empirical analysis of user behavior is leading to new ways to evaluate usability and validate theories of human-computer interaction. It enables remote testing, allows testing with larger numbers of subjects, and motivates the development of tools for in-depth analysis. The data capture can take place in a formal experimental setting or on a deployed system.

The main questions are: can we leverage these capabilities to validate or change our models, to improve the user experience, and to change the user interfaces in products in measurably better ways? How will human-computer interaction (HCI) and usability engineering (UE) as bodies of knowledge and practice change? How has HCI/UE research and practice changed as new analysis, design, and evaluation methods have emerged and been adopted?

Specifically, a number of different approaches based on these methods have appeared in the research literature [3,7] and in commercial tools [1,5,6,9]. However, these have led to a number of unresolved issues under discussion in both the HCI and UE communities, such as how and when to apply these methods, when remote, automated testing is useful, and what server logs can provide.
Laskowski, S., Landay, J., and Lister, M., Automatic Capture, Representation, and Analysis of User Behavior, Human Factors in Computing Systems, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=151541 (Accessed May 29, 2023)