Interactive Intelligent Systems
The Interactive Intelligent Systems project performs research with emerging technologies that push the envelope of both interface and interaction. Its goal is to develop methods and metrics for assessing the utility of novel technologies coming out of government-sponsored R&D programs.
When the main source of computing power was located in central computing facilities, those systems were evaluated by their throughput, accuracy, reliability, and similar measures. Today's computing power resides in the hands of end users, and the methods for assessing human-machine systems need to evolve along with the technology. Novel interfaces (e.g., virtual environments and wearable devices) and novel interactions (e.g., haptic devices, touch screens, speech) require novel evaluation methods and new metrics for measuring the effectiveness of these human-machine systems. The problem is hard because new technologies are appearing faster than ever, and appropriate assessment methods need to keep pace.
One part of the evaluation problem has been addressed through attention to the principles of usability; the standard usability metrics are efficiency, effectiveness, and user satisfaction. The larger part of the problem, however, involves assessing whether novel technologies actually provide utility to the end user. NIST has successfully applied interactive-system utility measurements to hard problems in both the intelligence community (IC) and the military (DARPA). Industry has accepted usability testing, but government lags behind; government users, especially analysts and war-fighters, need utility analysis.
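The three usability metrics named above can be made concrete with a small calculation. The sketch below is purely illustrative: the record fields, the 0-100 satisfaction scale, and the choice of "completed tasks per minute" as the efficiency measure are assumptions for the example, not part of any NIST evaluation protocol.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    completed: bool       # did the participant finish the task?
    time_seconds: float   # time on task
    satisfaction: float   # post-task rating on an assumed 0-100 scale

def usability_metrics(records: list[TaskRecord]) -> dict[str, float]:
    """Compute the three usability metrics from per-task records."""
    n_completed = sum(1 for r in records if r.completed)
    # Effectiveness: fraction of tasks completed successfully
    effectiveness = n_completed / len(records)
    # Efficiency (one common formulation): completed tasks per minute of total time
    total_minutes = sum(r.time_seconds for r in records) / 60
    efficiency = n_completed / total_minutes
    # Satisfaction: mean of the post-task ratings
    satisfaction = sum(r.satisfaction for r in records) / len(records)
    return {"effectiveness": effectiveness,
            "efficiency": efficiency,
            "satisfaction": satisfaction}

records = [
    TaskRecord(True, 120, 80),
    TaskRecord(True, 90, 70),
    TaskRecord(False, 300, 40),
]
print(usability_metrics(records))
```

For these sample records the effectiveness is 2/3, efficiency is 2 completed tasks over 8.5 minutes, and mean satisfaction is about 63. Measuring utility, as the text notes, requires going beyond such per-task numbers to whether the system actually helps the analyst's mission.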
By working with the prime R&D organizations in the government, NIST is in an enviable position, with access to the best new technologies in both interface and interaction. Our focus on measuring utility is the key to our success, and we can succeed now because the DOD and IC are highly motivated to get this right. The intelligence failures of 9/11 and the current war effort are the prime motivators. In addition, IARPA and DARPA, as stewards of the taxpayers' money, want to see R&D dollars deliver measurably usable and useful systems. Current efforts include Collaboration and Analyst/System Effectiveness (CASE), Analyst Space for Exploitation (A-SpaceX), Reynard (socio-cultural behavior in Virtual Worlds), and the Advanced Soldier Sensor Information System & Technology (ASSIST).
- ASSIST – Conducted field evaluations at Aberdeen Test Center (Jun 07, Nov-Dec 07, Apr-May 08, Aug 08), including analysis of the collected data and a subsequent report for each. Proposed new work is being negotiated with the DARPA PM.
- A-SpaceX – Developed and tested metrics for visual analytics and applied them in several formative evaluations with working analysts. Provided feedback in reports to improve insertion readiness. Began research into a metrics framework that accommodates metrics for human interaction with visualizations and virtual environments.
- CASE – Conducted a program-closeout series of evaluations centered on User Modeling systems. Produced the sections of the closeout reports that discuss evaluation methods and metrics.
- Reynard – Developed materials on metrics and evaluation methods applicable to Virtual Worlds; this material will be presented to the IARPA director in January 2009.
- Journal Articles: 5; Conference Papers: 5; Conference Workshops: 3; Total Evaluations: 12