The ability to comprehensively evaluate the quantitative and qualitative performance of an intelligent system is critical to accurately predicting how it will perform in various situations. The design of such evaluations is often as much of a research challenge as the design of the intelligent systems themselves. Over the past decade, the Intelligent Systems Division (ISD), a part of the National Institute of Standards and Technology (NIST), has been at the forefront of assessing the performance of various intelligent systems. This paper gives a broad overview of some of the evaluation efforts pursued by ISD over the past few years, including performance evaluation of emergency response robots, sensor systems on unmanned ground vehicles, and speech-to-speech translation systems, as well as the development of performance metrics for mixed palletizing through the use of a simulation environment.
Citation: International Test and Evaluation Association (ITEA) Journal
Pub Type: Journals
Keywords: US&R robot performance, TRANSTAC, mixed palletizing, simulation, unmanned vehicle, tracking moving objects, performance evaluation, metrics