Advanced and intelligent systems in the manufacturing, military, homeland-security, and automotive fields are constantly emerging and progressing. Testing these technologies is crucial to (1) inform technology developers of targeted areas for improvement, (2) capture end-user feedback, and (3) substantiate the degree of a technology's capabilities. Evaluation designers have put considerable effort into developing methods that speed test-plan generation. The Multi-Relationship Evaluation Design (MRED) methodology is being created to gather multiple inputs from several source categories and automatically output evaluation blueprints that identify the pertinent test-plan characteristics. MRED captures input from three categories: the evaluation stakeholders, the technology state, and the available resources. These inputs, together with the relationships among them, feed an algorithm that yields specific test-plan characteristics. This paper reviews the MRED methodology as it enters its final stages of development, including new discussion of the relationships among the various inputs and of the chosen Evaluative Voting method for capturing stakeholder preferences. An example focusing on the design of test plans to evaluate a robotic arm is also presented to bring further clarity to the latest MRED developments.
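As a rough illustration of the Evaluative Voting idea named above, the sketch below lets each stakeholder score every candidate test-plan option and selects the option with the highest summed score. The 0-2 scoring scale, the option names, and the ballots are illustrative assumptions, not details taken from MRED itself.

```python
# Hedged sketch of Evaluative Voting: each stakeholder assigns a score
# (assumed range {0, 1, 2}) to every candidate option; scores are summed
# and the highest-scoring option is selected.

def evaluative_vote(ballots):
    """ballots: list of dicts, one per stakeholder, mapping option -> score."""
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
    # Winner is the option with the highest total score.
    return max(totals, key=totals.get), totals

# Hypothetical stakeholder ballots over three candidate test-plan options.
ballots = [
    {"lab_bench_test": 2, "field_trial": 1, "simulation_only": 0},
    {"lab_bench_test": 1, "field_trial": 2, "simulation_only": 1},
    {"lab_bench_test": 2, "field_trial": 0, "simulation_only": 1},
]

winner, totals = evaluative_vote(ballots)
print(winner)  # lab_bench_test (total score 5, vs. 3 and 2)
```

Unlike simple plurality voting, this lets a stakeholder express graded support for several options at once, which fits a setting where test-plan characteristics must balance multiple stakeholder preferences.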
Proceedings Title: 2012 Performance Metrics for Intelligent Systems Workshop
Conference Dates: March 20-22, 2012
Conference Location: College Park, MD
Pub Type: Conferences
Keywords: MRED, Performance Metrics, Evaluation Framework, Uncertainty