
Workshop on Standards for 3D Perception Systems for Robotic Assembly Applications

December 2-3, 2019

at the National Institute of Standards and Technology

Gaithersburg, MD

The ASTM International Committee on 3D Imaging Systems (E57) held a workshop on standards for 3D perception systems used in robotic applications on December 2-3, 2019. The workshop was co-sponsored by the Intelligent Systems Division (ISD) of the National Institute of Standards and Technology (NIST) and was held at NIST’s campus in Gaithersburg, MD.


BACKGROUND

3D perception systems consist of sensors that acquire 3D data (as well as other information such as color or grayscale intensity) from physical objects through non-contact measurement techniques (e.g., laser line scanning), and software that interprets the 3D information in order to perceive (e.g., to recognize, identify, and localize parts in a bin).
 
3D perception systems are useful in many robotic applications such as autonomous navigation, assembly, and inspection. However, there are currently very few performance standards for these kinds of systems. Understanding the performance of 3D perception systems using industry-accepted metrics and tests is crucial for users and developers of these systems.

NIST and ASTM E57 have been involved in developing standards for 3D imaging systems for over 20 years. The current effort focuses on short- to medium-range systems that can be used in robotic assembly applications.

 


WORKSHOP OBJECTIVES

The goal of the workshop was to bring together stakeholders in 3D perception systems (vendors/manufacturers, users, researchers, etc.) in order to:

  1. Learn about the challenges, barriers, and solutions to implementing 3D perception systems for robotic applications;
  2. Develop a roadmap of consensus standards needed for 3D perception systems; and
  3. Identify high-priority standards for the manufacturing industry and organize ASTM task groups to develop these standards.

 


SPEAKERS

  • Remus Boca, Ph.D., Senior Principal Scientist, Mechatronics and Sensors, ABB US Corporate Research
  • Jared Glover, CEO and co-founder of CapSen Robotics
  • Michele Pratusevich, Director of Software Development, Root AI, Inc.
  • Miguel Saez, Ph.D., Researcher – Robotics and Automation, Manufacturing Systems Research Laboratory, General Motors Company
  • Kamel Saidi, Ph.D., Group Leader, National Institute of Standards and Technology
  • Joseph Schornak, Engineer, Southwest Research Institute (or alternate from SwRI)
  • John Sweetser, Ph.D., CTO Office for RealSense Group, Intel Corp.
  • Song Zhang, Ph.D., Professor of Mechanical Engineering, Purdue University

WHO ATTENDED

 

  • Manufacturers (vendors) of 3D sensors and software for robotic applications.
  • System Integrators and End Users of 3D sensors and software for robotic applications.
  • Researchers in 3D sensing, 3D data processing, object localization, automation, etc.
  • Anyone interested in developing standards for 3D sensing and perception systems.

Program

Location: Building 101, Lecture Room B 

Day 1 (Dec. 2, 2019)

07:30 - 08:00   Arrival at NIST and Visitor Center Registration
08:00 - 08:15   Welcome
08:15 - 08:30   Introductions
08:30 - 08:50   Presentation 1 – Kamel Saidi, NIST
08:50 - 09:20   Presentation 2 – Remus Boca, ABB
09:20 - 09:50   Presentation 3 – Miguel Saez, General Motors
09:50 - 10:05   Break
10:05 - 12:00   Work Session 1
12:00 - 13:00   Lunch (self-funded at the NIST cafeteria)
13:00 - 14:30   Lab Tours
14:30 - 15:00   Presentation 4 – Michele Pratusevich, Root AI
15:00 - 15:30   Presentation 5 – John Sweetser, Intel Corp.
15:30 - 15:45   Break
15:45 - 17:45   Work Session 2
17:45 - 18:00   Summary of Work Session 2
18:30 - 20:00   Group Dinner (self-funded; location TBD)

 

Day 2 (Dec. 3, 2019)

08:00 - 08:15   Summary of Day 1
08:15 - 08:45   Presentation 6 – Song Zhang, Purdue University
08:45 - 09:15   Presentation 7 – Joseph Schornak, Southwest Research Institute
09:15 - 09:30   Break
09:30 - 11:15   Work Session 3
11:15 - 11:30   Break
11:30 - 12:00   Presentation 8 – Jared Glover, CapSen Robotics
12:00 - 13:00   Lunch (self-funded at the NIST cafeteria)
13:00 - 15:00   Lab Tours
15:00 - 15:30   Summary of Work Session 3
15:30 - 15:45   Break
15:45 - 17:30   Panel Discussion

 

Keynote abstracts and speaker bios

Depth Quality Assessment at Close Range Using 3D Printed Fixtures

Michele Pratusevich, Root AI Inc.

Abstract

Mobile robots that manipulate their environments require high-accuracy scene understanding at close range. Typically this understanding is achieved with RGBD cameras, but the evaluation process for selecting an appropriate RGBD camera for the application is minimally quantitative. Limited manufacturer-published metrics do not translate to observed quality in real-world cluttered environments, since quality is application-specific. To bridge the gap, we developed a method for quantitatively measuring depth quality using a set of extendable 3D printed fixtures that approximate real-world conditions. By framing depth quality as point cloud density and root mean square error (RMSE) from a known geometry, we present a method that is extendable by other system integrators for custom environments. We show a comparison of 3 cameras and present a case study for camera selection, provide reference meshes and analysis code, and discuss further extensions.
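
As a rough illustration of the metrics described above, the sketch below computes point-cloud density and RMSE against a known (here, planar) geometry using NumPy. The flat 10 cm target, noise level, and point count are assumptions invented for the example; this is not Root AI's published analysis code.

    import numpy as np

    def plane_rmse(points):
        """RMSE of 3D points about their best-fit plane, in meters."""
        centroid = points.mean(axis=0)
        # The right singular vector for the smallest singular value of
        # the centered cloud is the best-fit plane normal.
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]
        distances = (points - centroid) @ normal
        return np.sqrt(np.mean(distances ** 2))

    def density(points, target_area_m2):
        """Point-cloud density: returned points per square meter."""
        return len(points) / target_area_m2

    # Synthetic stand-in for a scan of a 10 cm x 10 cm flat fixture
    # face with 1 mm Gaussian depth noise.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 0.1, size=(5000, 2))
    cloud = np.column_stack([xy, rng.normal(0.0, 0.001, 5000)])
    print(f"RMSE: {plane_rmse(cloud) * 1e3:.2f} mm")         # ~1.00 mm
    print(f"Density: {density(cloud, 0.01):.0f} points/m^2")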

Bio

Michele Pratusevich leads software and algorithm development as the Director of Software at Root AI, an agricultural robotics startup. Previously, Michele worked on computer vision, machine learning, and neural network applications targeted towards resource-starved systems at Amazon. At ICRA 2019 Michele presented her work on close-range perception, showcasing a set of metrics for depth camera quality measurement and camera selection. She holds a BS and MEng in computer science and electrical engineering from MIT.

--------------------------------------------------------------------------------

Depth Camera Image Quality Definition and Measurement

John Sweetser, RealSense Group, Intel Corp.

Abstract

We will discuss the basic methods used at RealSense to evaluate the performance of depth cameras. These include the definition of specific image quality metrics; the methods, tools, and test procedures for their measurement; typical performance standards; and examples of test results. Some discussion of qualitative image quality assessment, as well as factors that can affect test results and overall performance, will be included.
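
For a concrete flavor of such metrics, here is a minimal sketch of two measures commonly used when evaluating depth cameras: fill rate (the fraction of pixels with valid returns) and temporal Z-noise against a static target. The exact definitions, valid-range limits, and synthetic data are assumptions for illustration, not Intel's actual test procedures.

    import numpy as np

    def fill_rate(depth_m, min_d=0.2, max_d=10.0):
        """Fraction of pixels with a valid depth reading (meters)."""
        valid = np.isfinite(depth_m) & (depth_m > min_d) & (depth_m < max_d)
        return valid.mean()

    def temporal_noise(frames_m):
        """Median per-pixel standard deviation over a frame burst."""
        stack = np.asarray(frames_m)            # shape (n_frames, H, W)
        return float(np.median(stack.std(axis=0)))

    # Synthetic burst: a flat wall at 1 m with 2 mm temporal noise;
    # one frame gets 5% dropout to exercise the fill-rate metric.
    rng = np.random.default_rng(1)
    frames = 1.0 + rng.normal(0.0, 0.002, size=(30, 480, 640))
    one = frames[0].copy()
    one[rng.random(one.shape) < 0.05] = 0.0     # invalid returns
    print(f"Fill rate: {fill_rate(one):.1%}")                    # ~95%
    print(f"Temporal noise: {temporal_noise(frames) * 1e3:.2f} mm")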

Bio

John Sweetser is currently a Computer Vision Engineer in Intel’s RealSense CTO Group (previously known as Perceptual Computing). He has worked in R&D, technology, and product development at start-ups (Templex Technology, ThinkOptics) and research labs (Sandia National Labs, University of Rochester), as well as at Intel, in a variety of areas involving optical engineering and photonics. He holds a BS in Applied Physics and an MEng in EE from Cornell University and a PhD from the University of Rochester’s Institute of Optics.

--------------------------------------------------------------------------------

Using 3D vision to control robots in dirty, industrial environments

Jared Glover, CapSen Robotics

Abstract

CapSen Robotics writes 3D vision and motion planning software to give robots more spatial intelligence for manipulation tasks.  The company’s core product, CapSen PiC (“Pick in Clutter”), turns any industrial robot arm into a bin picking and machine tending cell.  CapSen PiC handles parts of a wide range of sizes and shapes, and can even disentangle picked objects.  Our accompanying CapSen Scanner product captures 3D models in minutes, enabling the robot to quickly adapt to new jobs and parts.  In this talk, I will discuss the practical challenges that robotics companies face in deploying 3D vision-guided robots in dirty, industrial settings.  I will focus on two recent installations we've done.  The first is in a wire & spring factory where our robot was tasked with picking metal hooks out of a bin, disentangling them (a first-of-its-kind capability in the robotics industry) and feeding them into a press.  The second is for an application where novel parts must be scanned and then washed off.  Both applications are in dirty environments and require the use of cutting-edge 3D vision algorithms.  Yet they differ greatly in their requirements and methods.  It is my hope that grounding our standards discussions with these practical case studies will help ensure that our metrics align with what end-users care most about--reliability!

Bio

Jared Glover is the CEO and co-founder of CapSen Robotics--a company that makes software to give robots more spatial intelligence.  Jared received his Ph.D. in Computer Science from MIT in 2014, where he developed and applied new theoretical tools for processing 3D orientation information to applications in computer vision and robot manipulation.  Prior to that, he completed his B.S. in Computer Science from Carnegie Mellon University, where he led a team developing robotic walkers for the Nursebot project.  He has over 15 years of research experience in robotics and computer vision and over 400 paper citations.  He is also a board member of Catalyst Connection, a private non-profit that provides consulting and training services to small manufacturers in southwestern Pennsylvania, and on advisory committees for the Advanced Robotics for Manufacturing (ARM) Institute, the Pittsburgh Robotics Network, and the National Institute of Standards and Technology (NIST).

--------------------------------------------------------------------------------

Robotic Assembly: Challenges and Opportunities in the Automotive Industry

Miguel Saez, Ph.D., GM

Abstract

The automotive industry is constantly challenged by increasing product variety, shorter life cycles, and demand uncertainty. To adapt in a highly competitive environment, vehicle and component assembly plants need the flexibility to rapidly reconfigure and adapt to different products and production volumes. The concept of robotic assembly, where robots are used to place parts in the proper position, was introduced as a solution to improve manufacturing flexibility while reducing cost and footprint. However, the use of robots for assembly presents unique challenges, particularly in perception and path planning, that can affect dimensional quality and throughput.

Perception refers to the use of sensors such as cameras or laser radars to see and understand the part, the process, and work environment conditions. The use of perception systems such as vision for robot guidance in precise positioning applications is often a challenge in a manufacturing environment due to inadequate lighting, poor part contrast, or limited field of view. Moreover, the vision system is expected to have high accuracy and reliability in order to maintain high levels of productivity. Some of the first vision-based robotic assembly systems faced capability challenges, mostly due to high cycle times and positioning errors. In the automotive industry, the development of robotic assembly methods and control algorithms has focused largely on automotive body parts, where 2D vision systems have been used to locate part features and define the path of robot arms. Other perception alternatives, such as 3D vision and combinations of 2D vision and laser readings, have been introduced in various applications to improve accuracy and reduce cycle time. In addition, 2D vision may require extra robot movements that 3D vision can eliminate, potentially reducing cycle time further. Recent developments in industrial robotics and artificial vision could help enable the next generation of robotic assembly systems.

This presentation reviews the challenges and opportunities of robotic assembly in the automotive industry and introduces examples of 2D and 3D vision for robotic assembly. The focus is on the state of the art of vision-based robot guidance for assembly and on key perception technology areas where research and development are required to enable robotic assembly of automotive body, powertrain, and battery components.
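
To make the part-localization step concrete: given corresponding 3D points between a part model and sensor measurements, the rigid transform (rotation R and translation t) that best aligns them has a closed-form least-squares solution, the Kabsch/SVD algorithm. The sketch below is a generic illustration of that idea; the abstract does not specify which method GM actually uses.

    import numpy as np

    def rigid_transform(model, measured):
        """Least-squares R, t such that measured ~ R @ model + t."""
        mc, sc = model.mean(axis=0), measured.mean(axis=0)
        H = (model - mc).T @ (measured - sc)    # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = sc - R @ mc
        return R, t

    # Recover a known pose: 30-degree rotation about Z plus a shift.
    theta = np.radians(30.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    model = np.random.default_rng(2).normal(size=(100, 3))
    measured = model @ R_true.T + t_true
    R, t = rigid_transform(model, measured)
    print(np.allclose(R, R_true), np.allclose(t, t_true))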

Bio

Dr. Miguel Saez is currently a researcher at the General Motors Research and Development Manufacturing Systems Research Lab in Warren, Michigan, where he develops novel industrial robotics and automation solutions to advance the technology used for manufacturing electric vehicles. He holds a bachelor’s degree in mechanical engineering from La Universidad del Zulia, Venezuela, as well as a master’s degree in automotive and manufacturing and a Ph.D. in mechanical engineering from the University of Michigan. After obtaining his bachelor’s degree, Miguel led multiple projects developing manufacturing and assembly systems for alternative-fuel vehicle programs. During his graduate studies at the University of Michigan, he developed new methods for modeling and control of manufacturing systems for multi-objective optimization of plant-floor operations. He joined General Motors Research and Development as a researcher in June 2018, where he has capitalized on his strong technical and leadership skills to develop new technology in the field of robotics. His work aims to enable coordinated movement of multi-arm systems using artificial vision and force-sensing data fusion.

--------------------------------------------------------------------------------

Perception challenges for industrial applications

Remus Boca, Ph.D., ABB

Abstract

As the world moves toward autonomy, sensing and perception are becoming increasingly important, if not necessary. Industrial applications bring their own challenges: they may operate in harsh environments, require continuous and robust operation, must accommodate unstructured environments, and must determine a wide range of states and handle unexpected events. This talk presents perception needs and challenges across many industries, such as ports, mining, industrial equipment inspection, logistics, and robotics.

Bio

Remus Boca joined the ABB Corporate Research Center in 2010. He is a Senior Principal Scientist focusing on computer vision, sensing, perception, robotics, and autonomy for industrial equipment and machines. He designs and implements strategies for machine perception and visual cognition targeting a wide range of ABB applications across industrial segments such as robotics, shipyards, metallurgy, mining, electrical equipment, food & beverage, logistics, and warehousing.

Prior to joining ABB, Remus worked at Braintech Inc. as a Senior Robotic Vision Scientist, integrating perception solutions with industrial robots. He holds Ph.D., M.S., and bachelor’s degrees in Industrial Robotics and Automation from the University Politehnica of Bucharest, Romania.

--------------------------------------------------------------------------------

3D Calibration and Perception for Robotic Scan-and-Plan Applications

Joseph Schornak, SwRI

Abstract

Southwest Research Institute (SwRI) is a non-profit independent research and development institute located in San Antonio, TX. SwRI’s Manufacturing and Robotics Technologies Department specializes in custom robotic solutions for advanced manufacturing applications. These systems rely on a wide variety of 3D sensors, including LIDAR, stereo cameras, time-of-flight cameras, and structured light scanners. Many of our ongoing challenges center on the calibration of these sensors, both intrinsic and relative to the other sensors and robots that comprise each system. While we possess NIST-standard calibration artifacts, many of our calibration techniques and our methods of assessing the quality of data produced by each sensor began as ad hoc solutions to implementation challenges encountered on specific systems, such as spatial error in 3D data and noise introduced by reflective surfaces. This talk will explore several case studies of perception-based robotic systems, as well as our current toolset for calibration and performance benchmarking.
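
One concrete instance of the extrinsic calibration problem described above is hand-eye calibration: solving for the fixed camera-to-gripper transform from paired robot and camera poses. OpenCV provides a solver, cv2.calibrateHandEye; the sketch below exercises it on synthetic, noise-free poses. It is a minimal, assumption-laden illustration, not SwRI's calibration toolset.

    import cv2
    import numpy as np

    rng = np.random.default_rng(3)

    def random_pose():
        """Random rigid transform as a (3x3 R, 3-vector t) pair."""
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        return q * np.sign(np.linalg.det(q)), rng.normal(size=3)

    def invert(R, t):
        return R.T, -R.T @ t

    X_R, X_t = random_pose()   # true camera->gripper (the unknown)
    T_R, T_t = random_pose()   # fixed calibration target->robot base

    R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
    for _ in range(10):        # ten robot poses viewing the target
        G_R, G_t = random_pose()               # gripper->base
        iX_R, iX_t = invert(X_R, X_t)
        iG_R, iG_t = invert(G_R, G_t)
        # target->camera = inv(X) * inv(G) * T
        R = iX_R @ iG_R @ T_R
        t = iX_R @ (iG_R @ T_t + iG_t) + iX_t
        R_g2b.append(G_R); t_g2b.append(G_t)
        R_t2c.append(R);   t_t2c.append(t)

    R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
    print(np.allclose(R_est, X_R, atol=1e-4),
          np.allclose(t_est.ravel(), X_t, atol=1e-4))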

Bio

Joseph Schornak is a Research Engineer at Southwest Research Institute’s Manufacturing and Robotics Technologies Department in San Antonio, TX and a contributor to the open-source ROS-Industrial metaproject. He has a MS in Robotics Engineering from Worcester Polytechnic Institute. His areas of interest include 3D perception, surface reconstruction, and robotic motion planning.

--------------------------------------------------------------------------------

High-resolution, high-speed 3D perception and sensing data streaming

Song Zhang, Ph.D., Purdue University

Abstract

Advances in optical imaging and machine/computer vision have provided integrated smart sensing for intelligent systems, and advanced 3D perception techniques could have a profound impact on the field of robotics. Our research addresses challenges in high-speed, high-resolution 3D perception and optical information processing. For example, we developed a system that simultaneously captures, processes, and displays 3D geometries at 30 Hz with over 300,000 measurement points per frame, which was unprecedented at the time (a decade ago). Our current research also explores novel means to stream and store enormously large 3D perception data by innovating geometry/video compression methods. Converting 3D data to regular 2D counterparts offers the opportunity to leverage mature 2D data compression platforms, achieving extremely high compression ratios without reinventing the whole data compression infrastructure. In this talk, I will present two platform technologies: 1) high-speed, high-resolution 3D perception; and 2) real-time 3D video compression and streaming. I will also cover some of the applications we have been exploring, including robotics and forensics, among others.
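
As a toy illustration of the 3D-to-2D conversion idea, the sketch below packs a 16-bit depth map into two 8-bit image channels so that a standard 2D image or video codec could carry it. Prof. Zhang's published techniques use more robust phase-based encodings; this naive byte split merely shows the principle and round-trips losslessly.

    import numpy as np

    def depth_to_2ch(depth_mm):
        """Pack a uint16 depth map (mm) into two uint8 image channels."""
        high = (depth_mm >> 8).astype(np.uint8)    # most-significant byte
        low = (depth_mm & 0xFF).astype(np.uint8)   # least-significant byte
        return np.stack([high, low], axis=-1)

    def depth_from_2ch(img):
        """Recover the uint16 depth map from the packed channels."""
        return (img[..., 0].astype(np.uint16) << 8) | img[..., 1]

    # Round trip on a synthetic 480x640 depth map (0.5 m to 3 m):
    depth = np.random.default_rng(4).integers(
        500, 3000, size=(480, 640), dtype=np.uint16)
    packed = depth_to_2ch(depth)               # a regular 2-channel image
    assert np.array_equal(depth_from_2ch(packed), depth)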

Bio

Dr. Song Zhang joined Purdue in January 2015 as an associate professor and was promoted to full professor in 2019. He received his Ph.D. in mechanical engineering from Stony Brook University in 2005, spent three years at Harvard as a postdoctoral fellow, and then worked at Iowa State University for six years before joining Purdue. He currently serves as the Assistant Head for Experiential Learning at Purdue’s School of Mechanical Engineering. Dr. Zhang has over 200 publications, 15 of which were selected as journal cover-page highlights; his publications have been cited over 8,900 times, with an h-index of 45. Beyond academia, the technologies developed by his team have been used by the rock band Radiohead to create the music video “House of Cards” and by law enforcement personnel to document crime scenes. His awards include the AIAA Best Paper Award, the IEEE ROBIO Best Conference Paper Award, the Best of SIGGRAPH Disney Emerging Technologies Award, the NSF CAREER Award, Stony Brook University’s “Forty Under 40” Alumni Award, and the CoE Early Career Faculty Research Excellence Award. He is currently an associate editor for Optics and Lasers in Engineering and a technical editor for IEEE/ASME Transactions on Mechatronics. He is a fellow of SPIE and OSA.

RESULTS

  • A total of 39 needed standards were identified and prioritized
  • Titles, scopes, and key participants were identified for 6 proposed new standards
  • A standards roadmap based on the above results is being drafted
  • Download the report about the workshop here

MORE INFORMATION

About E57: https://www.astm.org/COMMITTEE/E57.htm
About ISD: https://www.nist.gov/el/intelligent-systems-division-73500
About 3D perception: https://www.nist.gov/programs-projects/perception-performance-robotic-systems

Please address any questions about this workshop to:
kamel.saidi [at] nist.gov, +1-301-975-6069

 
