Advanced and intelligent systems within the manufacturing, military, homeland security, and automotive fields are constantly under development or improvement. Testing the performance of these technologies is critical to (1) inform system designers of specific areas for improvement, (2) solicit end-user feedback, and (3) validate the extent of a technology's capabilities. Evaluation designers have expended considerable effort devising methodologies to streamline the development of test plans. This is especially true for the comprehensive test plans often required to evaluate advanced and emerging technologies. The Multi-Relationship Evaluation Design (MRED) methodology is being developed to take multiple inputs from numerous input source categories and automatically output evaluation blueprints that specify the test characteristics. MRED is being created to offer several advantages over current test design methods, including (1) creating test plans that appraise both the quantitative and qualitative performance of technologies incorporating both human-controlled and autonomous capabilities, (2) speeding the test planning and implementation cycle to improve the effectiveness of a technology's development cycle, and (3) factoring in unknown and uncertain test plan input data. This paper takes an in-depth look at the stakeholder input category and its influence on the MRED output that determines test plan evaluation personnel. The stakeholder's input preferences, with respect to evaluation personnel, are formalized in this report. Future efforts will formalize the entire MRED model as the relationships between all of the inputs and the design plan variables are further explored.
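The abstract does not specify MRED's internal representation, so the following is purely an illustrative sketch of the idea it describes: stakeholder preference inputs, possibly incomplete, being aggregated into a selection of evaluation personnel for a test plan. The personnel categories, weight scale, neutral default for unknown preferences, and threshold rule are all assumptions invented for illustration, not part of the published methodology.

```python
from dataclasses import dataclass, field

# Hypothetical personnel categories an evaluation blueprint might specify.
PERSONNEL = ["expert_evaluator", "end_user", "system_developer"]

@dataclass
class StakeholderInput:
    """One stakeholder's preference weights over evaluation personnel.

    Missing (unknown or uncertain) preferences fall back to a neutral
    weight, loosely mirroring MRED's stated goal of tolerating
    incomplete input data.
    """
    name: str
    preferences: dict = field(default_factory=dict)  # category -> weight in [0, 1]

    def weight(self, category: str) -> float:
        # Neutral default (0.5) stands in for an unknown preference.
        return self.preferences.get(category, 0.5)

def select_personnel(stakeholders, threshold=0.5):
    """Return the personnel categories whose mean preference weight,
    averaged across all stakeholders, meets the threshold."""
    selected = []
    for category in PERSONNEL:
        mean = sum(s.weight(category) for s in stakeholders) / len(stakeholders)
        if mean >= threshold:
            selected.append(category)
    return selected

# Example: one stakeholder states two preferences, the other only one;
# the unstated preferences default to neutral.
sponsor = StakeholderInput("sponsor", {"end_user": 0.9, "system_developer": 0.2})
funder = StakeholderInput("funder", {"expert_evaluator": 0.8})
chosen = select_personnel([sponsor, funder])
```

In this toy aggregation, `expert_evaluator` (mean 0.65) and `end_user` (mean 0.7) clear the 0.5 threshold while `system_developer` (mean 0.35) does not; the actual MRED formalization of stakeholder preferences is developed in the paper itself.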
Proceedings Title: ASME 2011 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2011)
Conference Dates: August 29-31, 2011
Conference Location: Washington, DC
Pub Type: Conferences
Keywords: MRED, Performance Metrics, Evaluation Framework, Uncertainty