Computational Cognitive Modeling of Touch and Gesture on Mobile Multitouch Devices: Applications and Challenges for Existing Theory
Kristen Greene, Ross J. Micheals, Franklin Tamborello
As technology continues to evolve, so too must our modeling and simulation techniques. While formal engineering models of cognitive and perceptual-motor processes are well-developed and extensively validated in the traditional desktop computing environment, their application in the new mobile computing environment is far less mature. ACT-Touch, an extension of the ACT-R 6 (Adaptive Control of Thought-Rational) cognitive architecture, seeks to enable new methods for modeling touch and gesture in today's mobile computing environment. The current objective, the addition of new ACT-R interaction command vocabulary, is a critical first step to support modeling users' multitouch gestural inputs with greater fidelity and precision. Immediate practical application and validation challenges are discussed, along with a proposed path forward for the larger modeling community to better measure, understand, and predict human performance in today's increasingly complex interaction landscape.
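Movement-time predictions in ACT-R's motor module, like those in many engineering models of perceptual-motor performance, are grounded in Fitts' Law (listed in the keywords below). A minimal sketch of the Shannon formulation follows; the coefficient values are illustrative placeholders, not calibrated ACT-R or ACT-Touch parameters, which must be estimated from empirical data.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict movement time (seconds) via Fitts' Law, Shannon formulation.

    distance -- amplitude of the movement to the target
    width    -- width of the target along the axis of motion
    a, b     -- empirically fitted intercept and slope (placeholder values)
    """
    index_of_difficulty = math.log2(distance / width + 1.0)  # bits
    return a + b * index_of_difficulty

# Example: a 10 cm reach to a 1 cm wide target.
mt = fitts_movement_time(10.0, 1.0)
```

Extending such point-to-point predictions to multitouch gestures such as swipes and pinches, where Fitts' Law's assumptions may not hold, is one of the validation challenges the paper raises.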
Proceedings of the 15th International Conference on Human-Computer Interaction
July 21-26, 2013
Las Vegas, NV
ACT-R, ACT-Touch, cognitive architectures, touch and gesture, computational cognitive modeling, modeling and simulation, movement vocabulary, gestural input, mobile handheld devices, multitouch tablets, model validation, Fitts' Law
Greene, K., Micheals, R. and Tamborello, F. (2013), Computational Cognitive Modeling of Touch and Gesture on Mobile Multitouch Devices: Applications and Challenges for Existing Theory, Proceedings of the 15th International Conference on Human-Computer Interaction, Las Vegas, NV.