Ontology-Based State Representations for Intention Recognition in Human-Robot Collaborative Environments
Published
Author(s)
Craig I. Schlenoff, Anthony Pietromartire, Zeid Kootbally, Stephen B. Balakirsky, Sebti Foufou
Abstract
In this paper, we describe a novel approach to representing state information for intention recognition in cooperative human-robot environments. States are represented by a combination of spatial relationships in a Cartesian frame and cardinal direction information. The approach is applied to a manufacturing kitting operation in which humans and robots work together to build kits. Based on a set of predefined high-level state relationships that must hold before future actions can occur, a robot can use the detailed state information described in this paper to infer the probability of subsequent actions. This would allow the robot to better help the human with the task or, at a minimum, stay out of his or her way.
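The abstract does not give the concrete form of these representations, but the idea can be sketched as follows: a state is a set of relations over objects, each candidate action lists the high-level relations that must hold before it can occur, and the robot scores candidate actions by how many of those relations are already satisfied. The sketch below illustrates this in Python; the relation names (e.g., InContactWith, NorthOf), the scoring rule, and all identifiers are assumptions made for illustration, not the paper's actual ontology or inference method.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Tuple

# Hypothetical relation vocabulary: spatial (Cartesian) relations plus
# cardinal-direction relations. Names are illustrative, not the ontology's terms.
Relation = Tuple[str, str, str]  # (relation_name, subject, object)


@dataclass(frozen=True)
class State:
    """A world state as a set of relations between objects."""
    relations: FrozenSet[Relation]

    def holds(self, relation: Relation) -> bool:
        return relation in self.relations


@dataclass(frozen=True)
class Action:
    """A candidate next action with the high-level state relations
    that must hold before it can occur (its preconditions)."""
    name: str
    preconditions: FrozenSet[Relation]


def action_likelihoods(state: State, candidates: List[Action]) -> Dict[str, float]:
    """Score each candidate action by the fraction of its precondition
    relations already satisfied in the observed state, then normalize
    the scores into a rough probability distribution over next actions."""
    scores = {
        a.name: sum(state.holds(r) for r in a.preconditions) / max(len(a.preconditions), 1)
        for a in candidates
    }
    total = sum(scores.values())
    return {name: (s / total if total else 0.0) for name, s in scores.items()}


# Example kitting scenario with illustrative objects and relations.
observed = State(frozenset({
    ("InContactWith", "part_A", "kit_tray"),
    ("NorthOf", "gripper", "part_B"),
}))
candidates = [
    Action("place_part_A_in_kit",
           frozenset({("InContactWith", "part_A", "kit_tray")})),
    Action("pick_part_B",
           frozenset({("NorthOf", "gripper", "part_B"),
                      ("InContactWith", "gripper", "part_B")})),
]
print(action_likelihoods(observed, candidates))
```

In this toy example, all preconditions of placing part A are already satisfied while only half of those for picking part B are, so the distribution favors the placement action; the paper's actual inference over ontology-based states is more detailed than this precondition-counting heuristic.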
Schlenoff, C., Pietromartire, A., Kootbally, Z., Balakirsky, S. and Foufou, S. (2013), Ontology-Based State Representations for Intention Recognition in Human-Robot Collaborative Environments, Robotics and Autonomous Systems Journal, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=913796 (Accessed October 8, 2025)