Workshop at the Robotics: Science and Systems (RSS) Conference
July 12, 2020
14:00 - 16:00 UTC
10:00 - 12:00 EDT
NOTE: The RSS 2020 conference will be held virtually during the originally scheduled dates. By registering for the conference, you will get access to the workshop Zoom link.
- Submission deadline: June 12, 2020
- Notification of acceptance: June 26, 2020
- Workshop date: July 12, 2020
Machine learning approaches are fostering impressive new capabilities for robots. The number of research projects and publications is growing rapidly, and ML-based product spending is increasing at a compound annual growth rate of 25%. It is an exciting time, but this rapid expansion is outpacing the definition of consensus, science-based methods for assessing approaches and best practices for applying these technologies. Supporting tools, such as datasets for training and benchmarking, are becoming widely available to assist in the development of ML-based systems, but such tools are severely lacking for manufacturing robotics applications.
This workshop will focus on addressing the needs of this important application domain, which is significantly under-represented in research publications and support infrastructure. The goals of this workshop are:
- Raise awareness of the need for ML metrics, evaluations, and benchmarks, especially for manufacturing-relevant parts, operations, and environments.
- Convene stakeholders to define a common language for discussing ML performance, characteristics, and applicability, along with the tools and measurement science necessary to advance the state of ML in manufacturing robotics and reduce the risk of adopting ML-based technologies and solutions.
- Produce an initial document articulating challenges and gaps, along with directions for defining metrics and other measurement science to bring more rigor to the field.
- Form an ongoing community to develop, review, try out, mature, and contribute to the concepts and tools that can help mature the field and foster well-informed, successful adoption and implementation of ML-based manufacturing robotics capabilities.
Workshop Structure and Invited Speakers
The workshop will combine invited talks presenting user and developer perspectives, a panel discussion to draw out major themes and areas of need, a poster session, and a structured discussion open to all participants. The structured discussion is intended to identify the priorities for forming a community to define protocols, guidelines, metrics, test methods, datasets, and tools that will be useful for maturing the application of ML to manufacturing robotics.
Speakers represent different perspectives within this ecosystem: end users of ML-based solutions; developers of tools and implementers of solutions; and researchers who seek ways to leverage existing resources and to present their results based on recognized benchmarks and metrics.
The schedule for the workshop will be as follows (all times are Eastern U.S. Time):
10:00am - 10:10am
• Introduction and Overview of Workshop
10:10am - 11:25am (Invited Talks, 15 minutes each)
- Dragos Margineantu, Boeing Research & Technology.
- Talk title: Research Questions and Ideas for Robust Machine Learning in Manufacturing
- Abstract: The machine learning techniques employed in robotics have proven valuable because they address the generalization capabilities required for virtually any robotics task. Meanwhile, we understand that once deployed, our learned models need to handle unmodeled phenomena (such as unexpected events or obstacles) in a safe manner, by minimizing the risk of the potential outcomes. That is hard to achieve. In this presentation, we will focus on the major question that confronts the machine learning scientist in the process of architecting and implementing robust solutions: how to design learning-based systems that achieve robustness when deployed. Research questions on machine learning robustness fall into two broad categories: [A] model correctness analysis, and [B] bounding the risk of predictions and actions based on observations that the learner is not qualified to handle. We will focus on the latter set of questions: how do we learn models that 'know when they don't know', what are the formalisms that we need, and what is practically doable, both for supervised learning and reinforcement learning. First, we will explore methods for learning self-competence models in addition to the predictions. Next, we will discuss self-competence estimation approaches for reinforcement learning and decision making. Finally, we will explore multi-faceted learning and related techniques that can be applied in decision systems with the goal of increased robustness.
- Adam Norton, University of Massachusetts Lowell, New England Robotics Validation and Experimentation (NERVE) Center
- Talk title: Inducing Variation into Test and Evaluation of Autonomous Robot Systems
- Abstract: The advantage of autonomous robot systems in manufacturing, as opposed to traditional automation, is the ability to perform in the presence of variation, detecting local changes to the environment (e.g., the presence of obstacles) or global changes in task execution parameters (e.g., reprioritizing tasks) and reacting accordingly. Perception and machine learning capabilities enable robot systems to be robust, flexible, and agile in adapting to these changes. New evaluation methods and metrics are needed so that these advanced capabilities can be appropriately elicited from robot systems and measured accordingly. This talk will present considerations for characterizing variation in manufacturing settings and methods for inducing variation into test and evaluation plans that can be representative of those settings.
- Berk Calli, Worcester Polytechnic Institute
- Talk title: Facilitating Benchmarks for Robotic Manipulation
- Abstract: This talk will summarize the progression of the YCB Benchmarking project. This collaborative effort has been facilitating benchmarking efforts for over 5 years. We have been organizing workshops, forming workgroups, and establishing publication venues. In this talk, we will discuss which practices worked better for benchmark dissemination and how we can move forward.
- Megan Zimmerman, National Institute of Standards and Technology (NIST)
- Talk title: Considerations for Manufacturing Focused Collaborative Multiagent Dataset Generation
- Abstract: Though collaborative robots have been in the field for a number of years, they have yet to be truly collaborative in action. Rather than being collaborative team members, they often operate within self-contained workcells with little adaptability. To establish robots capable of operating and collaborating with human coworkers in manufacturing settings, models capable of understanding human action and manipulation tasks can be generated with the implications of the robot's respective actions in mind. We will discuss the considerations and planning going into creating such a dataset at NIST, and the factors that contribute to a quality dataset for the robotics community.
- Nathan Ratliff, NVIDIA
- Talk title: Geometric fabrics: Transparent tools for behavior engineering
- Abstract: Industry tends to shy away from the promising new tools of deep learning in favor of the well-understood model-based planners and controllers engineers are comfortable employing on mission-critical problems. The importance of deep learning in robotics is real; we will never achieve proficient perception-driven behavior on human-interactive problems without data-driven approaches. But this hesitation has good reason: engineers can't engineer on excitement alone. Those data-driven approaches will remain inappropriate for many industrial applications until we understand them as well as we understand the planners and controllers they aim to replace. In this talk, I will present a new mathematical framework for behavioral engineering called geometric fabrics, designed to bridge this gap. Geometric fabrics are modular tools for engineering second-order differential equations that enable the flexible design of intelligent behaviors with well-understood stability properties, such as smooth obstacle avoidance subsystems. Our goal is to add transparency to the design of the types of behavioral systems we want to build into deep learning architectures. This family of tools offers a level of design agency that has enabled us to engineer numerous real-world system demos at NVIDIA. In this workshop talk, I will present some of our newest theoretical and experimental results.
11:25am - 11:45am Lightning Talks From Accepted Posters
11:45am - 12:00pm Wrap-Up and Next Steps
Topics speakers will be asked to address, from their relevant perspectives:
- Industry perspectives on requirements to assist in evaluation and matching of solutions to implementations
- Available resources and lessons-learned
- Existing tools to automate dataset curation
- How to assess the quality, applicability, and transferability of datasets or learned models
- Discussion on how to convene the community and create consensus metrics, common datasets, benchmarks, and tools
Elena Messina, National Institute of Standards and Technology, email@example.com
Holly Yanco, University of Massachusetts, Lowell, firstname.lastname@example.org
Megan Zimmerman, National Institute of Standards and Technology, email@example.com
Craig Schlenoff, National Institute of Standards and Technology, firstname.lastname@example.org
Dragos Margineantu, Boeing Research and Technology, email@example.com
For further information or to join a mailing list that will continue to explore the issues discussed at the workshop, please contact the organizing committee at firstname.lastname@example.org.