Preface to the Edited Book "Performance Evaluation and Benchmarking of Intelligent Systems"
Rajmohan Madhavan, Elena R. Messina, Edward Tunstel
To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent system technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements, nor is there consensus on what objective evaluation procedures should be followed to understand their performance. The lack of reproducible and repeatable test methods has prevented researchers working toward a common goal from exchanging and communicating results, comparing system performance, and building on previous work in ways that could avoid duplication and expedite technology transfer. This lack of cohesion in the community currently hinders progress in many domains, such as manufacturing, service, healthcare, and security. Providing the research community with access to standardized tools, reference data sets, and open-source libraries of solutions will enable researchers and consumers to evaluate the costs and benefits associated with intelligent systems and related technologies. In this vein, this edited volume addresses performance evaluation and metrics for intelligent systems in general, while emphasizing the need for, and solutions toward, standardized methods.
Madhavan, R., Messina, E., and Tunstel, E., Preface to the Edited Book "Performance Evaluation and Benchmarking of Intelligent Systems", in Performance Evaluation and Benchmarking of Intelligent Systems, Springer, Norwell, MA, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=902748 (Accessed December 3, 2023)