PerMIS 2007 Plenary Addresses
Prof. Maria Gini
Title: Methodology for experimental research in multi-robot systems with case studies
Abstract: Fully repeatable and controllable experiments are essential to enable a precise comparison of multi-robot systems. Using different case studies, we describe a general methodology for conducting experimental activities for multi-robot systems. This is a first step toward the goal of fostering the practice of replicating experiments in order to compare different methods and assess their strengths and weaknesses.
In the first case study, we examine the problem of building a geometrical map of an indoor environment using multiple robots. The map is built by integrating partial maps made of segments without using any odometry information. We show how to improve the repeatability and controllability of the experimental results and how to compare different mapping systems.
We then present a case study of auction-based methods for the allocation of tasks to a group of robots. The robots operate in a 2D environment for which they each have a map. Tasks are locations in the map that must be visited by one robot. Robots bid to obtain tasks, but unexpected obstacles and other delays may prevent a robot from completing its allocated tasks. We show how to compare our experimental results with other published auction-based methods.
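To illustrate the general auction mechanism the abstract describes (a sketch, not the authors' implementation), a minimal sequential single-item auction can be written in a few lines, with each robot bidding its estimated path cost to a task location; the cost function, robot names, and the rule that the winner's next bid starts from the won task are illustrative assumptions:

```python
# Minimal sketch of sequential single-item auction-based task allocation.
# Illustrative only: costs, names, and the greedy rule are assumptions,
# not the specific method presented in the talk.

def path_cost(position, task):
    # Placeholder bid: Manhattan distance on a 2D grid map.
    return abs(position[0] - task[0]) + abs(position[1] - task[1])

def allocate(robots, tasks):
    """Auction tasks one at a time; the lowest bid (estimated cost) wins."""
    allocation = {name: [] for name in robots}
    positions = dict(robots)  # current position of each robot
    for task in tasks:
        # Each robot bids its estimated cost to reach the task location.
        bids = {name: path_cost(pos, task) for name, pos in positions.items()}
        winner = min(bids, key=bids.get)
        allocation[winner].append(task)
        positions[winner] = task  # next bid starts from the won task site
    return allocation

robots = {"r1": (0, 0), "r2": (10, 0)}
tasks = [(1, 1), (9, 1), (5, 5)]
print(allocate(robots, tasks))
```

In a real multi-robot system the bids would come from path planners over each robot's map, and re-auctioning would handle the unexpected obstacles and delays mentioned above.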
Biography: Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. Before joining the University of Minnesota, she was a Research Associate at the Politecnico di Milano, Italy, and a Visiting Research Associate at Stanford University. Her work has included motion planning for robot arms, navigation of mobile robots around moving obstacles, unsupervised learning of complex behaviors, coordinated behaviors among multiple robots, and autonomous economic agents. She has coauthored over 200 technical papers. She is currently the chair of the ACM Special Interest Group on Artificial Intelligence (SIGART), a member of the Association for the Advancement of Artificial Intelligence (AAAI) Executive Council, and a member of the board of the International Foundation for Autonomous Agents and Multi-Agent Systems. She is on the editorial board of numerous journals, including Autonomous Robots, the Journal of Autonomous Agents & Multi-Agent Systems, Electronic Commerce Research and Applications, Integrated Computer-Aided Engineering, and Web Intelligence and Agent Systems.
Dr. Eric Krotkov
Title: Measuring Ground Robot Performance
Abstract: This talk first describes several approaches to measuring the performance of ground robots. It is easy enough to measure quantities such as speed and reliability. It is more challenging to define metrics for perception, planning, and autonomy. The talk then presents selected results of applying these approaches to systems developed by several Government programs.
Biography: Dr. Krotkov is the President of Griffin Technologies, a consulting and software firm specializing in robotics and machine perception. Before founding Griffin, he worked in industry as an executive in a medical imaging technology start-up, in government as a program manager at DARPA, and in academia as a faculty member of the Robotics Institute at Carnegie Mellon University. Dr. Krotkov earned his Ph.D. degree in Computer and Information Science in 1987 from the University of Pennsylvania, for pioneering work in active computer vision.
Prof. Illah R. Nourbakhsh
Title: Formalizing Educational Human-Robot Collaboration
Abstract: Designing human-robot collaboration systems is an inherently multidisciplinary endeavor aimed at providing humans with rich, effective and satisfying interactions. Over the past ten years, my laboratory has focused on educational collaboration, wherein the purpose of the interaction is to provide measurable learning for humans through exploration and discovery.
We propose that the creation of a successful human-robot collaboration system requires innovation in several areas: robot morphology; robot behavior; social perception; interaction design; human cognitive models; and evaluation of educational effectiveness.
Our iterative process for collaboration design combines evaluation techniques from the informal learning field with underlying technical advances in robotics. This talk describes our research methodology, technical contributions, and experimental outcomes for three fielded robot systems that advance a generalizable, formal approach to educational human-robot collaboration.
For the past several months, our group has been laying the groundwork for large-scale dissemination of our technology and curricular instruments. I will describe the robot “community” we wish to help spawn, and the ingredients that may help to catalyze a broad form of technologically empowered community, including the Telepresence Robot Kit and the Global Connection Project.
Biography: Illah R. Nourbakhsh is an Associate Professor of Robotics and head of the Robotics Masters Program in The Robotics Institute at Carnegie Mellon University. He was on leave for the 2004 calendar year, serving as Robotics Group lead at NASA/Ames Research Center. He received his Ph.D. in computer science from Stanford University in 1996. He is co-founder of the Toy Robots Initiative at The Robotics Institute, director of the Center for Innovative Robotics, and director of the Community Robotics, Education and Technology Empowerment (CREATE) lab. He is also co-PI of the Global Connection Project, home of the Gigapan project, and co-PI of the Robot 250 city-wide art+robotics fusion program in Pittsburgh. His current research projects include educational and social robotics and community robotics. His past research has included protein structure prediction under the GENOME project, software reuse, interleaving planning and execution, and planning and scheduling algorithms, as well as mobile robot navigation. At the Jet Propulsion Laboratory he was a member of the New Millennium Rapid Prototyping Team for the design of autonomous spacecraft. He is a founder and chief scientist of Blue Pumpkin Software, Inc., which was acquired by Witness Systems, Inc. Illah recently co-authored the MIT Press textbook, Introduction to Autonomous Mobile Robots.
Dr. Alex Zelinsky
Title: Building Autonomous Systems of High Performance, Reliability and Integrity
Abstract: Commercial applications for the everyday deployment of autonomous systems based on robotic and intelligent systems technologies require the highest levels of performance, reliability and integrity. The general public expects intelligent machines to be fully operational 100% of the time. People expect autonomous technologies to operate at higher levels of performance and safety than people themselves exhibit. For example, smart car technologies are expected to cause ZERO accidents, while human errors kill more than 150,000 people on our roads every year! This talk will describe the design principles, developed over the last 10 years through exhaustive trial-and-error testing, that underpin autonomous systems suitable for real-world deployment. Currently, it is not yet possible to realise an autonomous system that doesn't fail periodically. Even if the mean time between failures is days or weeks, a single failure could have catastrophic consequences. The approach we have adopted to address this situation has been to build in monitoring systems that continually check all key system parameters and variables. If the monitored parameters move outside tightly defined bounds, the system will safely shut down and alert the human supervisor. The failure conditions are logged, and further testing and debugging is then performed. The value and appropriateness of our approach will be shown through a number of real-world studies. We will show how it is possible to design computer vision systems for human-machine applications that operate with over 99% reliability, in all lighting conditions, for all types of users irrespective of age, race or visual appearance. These systems have been used in automotive and sports applications. We will also show how this approach has been used to design field robotic systems that have been deployed in automobile safety systems and 24/7 mining applications.
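The monitoring-and-safe-shutdown pattern the abstract describes can be sketched as follows; the parameter names, bounds, and logging scheme here are illustrative assumptions for the example, not details of the deployed systems:

```python
# Illustrative sketch of a parameter-monitoring watchdog that signals a
# safe shutdown when any key variable leaves its tightly defined bounds.
# Parameter names and bounds are assumptions, not from the talk.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("monitor")

class SafetyMonitor:
    def __init__(self, bounds):
        # bounds: parameter name -> (low, high) allowed interval
        self.bounds = bounds
        self.failures = []  # logged failure conditions for later debugging

    def check(self, readings):
        """Return True if all readings are in bounds; otherwise record and
        log the failure conditions and signal that shutdown is required."""
        violations = {
            name: value
            for name, value in readings.items()
            if not (self.bounds[name][0] <= value <= self.bounds[name][1])
        }
        if violations:
            self.failures.append(violations)
            for name, value in violations.items():
                log.error("%s=%s outside bounds %s", name, value, self.bounds[name])
            return False  # caller shuts down safely and alerts the supervisor
        return True

monitor = SafetyMonitor({"speed_mps": (0.0, 2.0), "cpu_temp_c": (0.0, 85.0)})
ok = monitor.check({"speed_mps": 1.2, "cpu_temp_c": 92.0})
print("shutdown required:", not ok)
```

The key design choice, as in the talk, is that the monitor does not try to recover: any out-of-bounds reading leads to a logged, safe shutdown rather than continued autonomous operation.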
Biography: Dr. Alex Zelinsky is a well-known scientist, specialising in robotics and computer vision, and is widely recognised as an innovator in human-machine interaction. Dr. Zelinsky is currently Group Executive, Information and Communication Sciences and Technology, and Director, CSIRO Information Communication Technology (ICT) Centre. Before joining CSIRO in July 2004, Dr. Zelinsky was CEO of Seeing Machines, a company dedicated to the commercialisation of computer vision systems. Dr. Zelinsky co-founded Seeing Machines in June 2000; the company is now publicly listed on the London Stock Exchange. The technology commercialised by Seeing Machines was developed at the Australian National University, where Dr. Zelinsky was Professor and Head of the Department of Systems Engineering (1996-2000). Prior to joining the Australian National University, Dr. Zelinsky worked as an academic at the University of Wollongong (1984-1991) and as a research scientist at the Electrotechnical Laboratory, Japan (1992-1995). Dr. Zelinsky is an active member of the robotics community and has served on the editorial boards of the International Journal of Robotics Research and IEEE Robotics and Automation Magazine; he also founded the Field & Service Robotics conference series. Dr. Zelinsky's contributions have been recognised by awards in Australia and internationally, including the Australian Engineering Excellence Awards, the US R&D magazine Top 100 Award, and Technology Pioneer at the World Economic Forum.
Dr. Vladimir Lumelsky
Title: Human-Robot Interaction in Physical Proximity: Issues and Prospects
Abstract: After spectacular successes in the 1970s-1980s in the use of robotics in highly structured environments - e.g. automotive assembly, welding, and painting lines - the penetration of "serious" robots (those large and powerful enough to be harmful) into new applications has slowed down markedly. User manuals of most robot arm manipulators warn that under no circumstance may people enter the workspace of an operating robot. The reason is simple - due to their intended use, these robots are strong enough to endanger a human, yet their sensing and intelligence are "too dumb" to be trusted with human safety. In the roboticists' parlance, today's robots are not designed to operate in unstructured environments, that is, settings not created specifically for the robot's operation. It is not the function the robot is built for that is the problem - it is the robot's interaction with its environment. The problem is smaller with robot rovers but quite pronounced with arm manipulators.
The way to break this barrier is to design robots fully capable of operating in an unstructured environment, in places where things are unpredictable and must be perceived and decided upon on the fly. This is a new terrain - the required hardware and intelligence are to be more complex and sophisticated than what we know today. In this talk we will review related technical and scientific issues.
Biography: Dr. Vladimir Lumelsky is the head of the Laboratory of Robotics for Unstructured Environments at the NASA Goddard Space Flight Center, and is Adjunct Professor of Computer Science at the University of Maryland-College Park. The long-term goal of the laboratory is to develop robots capable of operating in the uncertain and changing settings likely to arise in future NASA missions. This work builds upon Dr. Lumelsky's research on large sensitive robot skin systems, conducted prior to joining NASA in 2004 as a professor at Yale University and later at the University of Wisconsin-Madison (where he was The Consolidated Papers Professor of Engineering). Dr. Lumelsky is the author of three books and over 200 professional papers covering the areas of robotics, computational intelligence, human-machine interaction, human spatial reasoning, massive sensor arrays, bio-engineering, control theory, kinematics, pattern recognition, and industrial automation. He has held a variety of positions in both the public and private sectors: he was Program Director at the National Science Foundation, and has led large technical projects, including development of a universal industrial robot controller at General Electric (GE Research Center) and a joint robot skin development effort with Hitachi Corporation. Dr. Lumelsky has also held temporary positions at the Science University of Tokyo (Japan), the Weizmann Institute (Israel), and the US South Pole Station, Antarctica. He is the founding Editor-in-Chief of the IEEE Sensors Journal and has served on the editorial boards of other professional journals. He has been guest editor of special issues of professional journals; served on the Administrative Committees of the IEEE Robotics and Automation Society and the IEEE Sensors Council; chaired technical committees and working groups; and chaired and co-chaired major international conferences, workshops and special sessions. Dr. Lumelsky has served as a technical expert in legal cases, including multi-national litigation.
He frequently gives talks at US and foreign universities, government groups, think tanks, and in industry. He is a member of several professional societies, and is a Fellow of IEEE.