Transactions on Human-Robot Interaction Special Issue

The impact of technology in collaborative human-robot teams is both driven and limited by its performance and ease of use. As robots become increasingly common in society, exposure to and expectations for robots are ever-increasing. However, the means by which performance can be measured have not kept pace with the rapid evolution of human-robot interaction (HRI) technologies. The result is a situation in which people demand more from robots, but have relatively few mechanisms by which to assess the market when making purchasing decisions or when integrating systems already acquired. As such, robots specifically intended to interact with people are frequently met with enthusiasm, but ultimately fall short of expectations.

HRI research is focused on developing new and better theories, algorithms, and hardware specifically intended to push innovation. Yet determining whether these advances are actually driving technology forward is a particular challenge. Few repeatability studies are ever performed, and the test methods and metrics used to demonstrate effectiveness and efficiency are often based on qualitative measures that may not account for all external factors; worse, they may be based on measures chosen specifically to highlight the strengths of new approaches without also exposing their limitations. As such, despite the rapid progression of HRI technology in the research realm, advances in applied robotics lag behind. Without verification and validation, the gap between the cutting edge and the state of practice will continue to widen.

The need for validated test methods and metrics for HRI is driven by the desire for repeatable, consistent, and informative evaluations of HRI methodologies that demonstrably prove functionality. Such evaluations are critical for advancing the underlying models of HRI, and for providing guidance to developers and consumers of HRI technologies to temper expectations while promoting adoption.

This special issue of Transactions on Human-Robot Interaction, "Test methods for human-robot teaming performance evaluations," is specifically intended to highlight the test methods, metrics, artifacts, and measurement systems designed to assess and assure HRI performance in human-robot teams. The topic of HRI teaming encompasses a broad spectrum of application domains, including medical, field, service, personal care, and manufacturing applications; special attention will be paid to test methods that are broadly applicable across multiple domains. This special issue will focus on highlighting the metrics used to address HRI metrology, and on identifying the underlying issues of traceability, objective repeatability and reproducibility, benchmarking, and transparency in HRI.

List of Topics

For this special issue, topics of interest include but are not limited to: 

  • Test methods and metrics for evaluating human-robot teams
  • Case studies in industry, medicine, service, and personal care robot applications, with particular attention to use cases that have verifiable analogues across multiple application domains
  • Documented HRI data set generation, formatting, and dissemination for human-robot performance benchmarking and repeatability studies
  • Design and evaluation of human-centric robot interfaces, including wearable technologies
  • Repeatability and replication studies of previously published HRI research, specifically including metrics for evaluation that take into account demographics and cultural impacts
  • Studies exploring the cultural impact of HRI performance measures
  • Validated, quantitative analogues of qualitative HRI metrics
  • Quantitative and statistical models of human performance for offline HRI evaluation   
  • Best practices and real-world case studies in human-robot teaming
  • Verification and validation of HRI studies involving human-robot teams
  • Evaluation of novel human-robot team designs and methods

Important Dates

  • Submission period begins: 30 November 2019
  • Submission deadline: 15 August 2020 (initially 30 April 2020)
  • Notification of initial reviews: 15 December 2020 (initially 31 July 2020)
  • Paper revision/resubmission deadline: 30 January 2021 (initially 30 September 2020)
  • Notification of final decisions: 30 March 2021 (initially 30 November 2020)
  • Tentative publication date: June 2021

Submission Website

https://mc.manuscriptcentral.com/thri

Contact Information

Direct inquiries regarding this special issue to:

jeremy.marvel [at] nist.gov (Jeremy Marvel)
