September 14–18, 2014 Chicago, Illinois

IEEE/RSJ International Conference on Intelligent Robots and Systems


Plenaries and Keynotes

IROS 2014 features three plenary speeches and 39 session keynotes by leaders in the field.


The Quest for Robotic Vision

Peter Corke, Queensland University of Technology, Australia

Monday, September 15, 8:20 AM, Grand and State Ballrooms

The technologies of robotics and computer vision are each over 50 years old. Once upon a time they were closely related and investigated, separately and together, in AI labs around the world. Vision has always been a hard problem, and early roboticists struggled to make vision work using the slow computers of the day — particularly for metric problems like understanding the geometry of the world. In the 1990s affordable laser rangefinders entered the scene and roboticists adopted them with enthusiasm, delighted with the metric information they could provide. Since that time laser-based perception has come to dominate robotics, while processing images from databases, not from robots, has come to dominate computer vision. What happened to that early partnership between robotics and vision? Is it forever broken, or is now the time to reconsider vision as an effective sensor for robotics? This talk will trace the history of robotics and vision, examine the state of the art and discuss what may happen in the future.
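As an illustrative aside (not part of the talk), the sketch below shows the classic pinhole-stereo relation by which a calibrated camera pair recovers exactly the kind of metric geometry the abstract mentions; the focal length, baseline, and disparity numbers are made-up assumptions.

```python
# Minimal sketch: metric depth from stereo disparity (illustrative only).
# Assumes a calibrated, rectified stereo pair; all numbers are made up.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Classic pinhole-stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: a 700 px focal length, 0.12 m baseline, and 35 px disparity
# place the point 2.4 m from the cameras.
print(depth_from_disparity(35.0, 700.0, 0.12))  # -> 2.4
```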

Peter Corke is Professor of Robotics and Control at Queensland University of Technology, and Director of the Australian Centre of Excellence for Robotic Vision. His research spans topics including visual servoing, high-speed hardware for machine vision, field robotics (particularly for mining and environmental monitoring), and sensor networks. He has written two books: “Robotics, Vision & Control” (2011) and “Visual Control of Robots” (1997); developed the Robotics and Machine Vision Toolboxes for MATLAB; was Editor-in-Chief of the IEEE Robotics and Automation Magazine (2010-13); was a founding editor of the Journal of Field Robotics; is a member of the editorial boards of the International Journal of Robotics Research and the Springer STAR series; and is a Fellow of the IEEE. He received his Bachelor of Engineering (Electrical), Master's, and PhD degrees, all from the University of Melbourne.

Development of Neural Interfaces for Robotic Prosthetic Limbs

Todd A. Kuiken, Rehabilitation Institute of Chicago and Northwestern University, USA

Tuesday, September 16, 8:00 AM, Grand and State Ballrooms

The ability to control complex robot prostheses is evolving quickly. I will describe research at the Center for Bionic Medicine/Rehabilitation Institute of Chicago and Northwestern University to develop a neural-machine interface that improves the function of artificial limbs. We have developed a surgical technique called Targeted Reinnervation that uses nerve transfers to improve robotic arm control and to provide sensation of the missing hand. By transferring the residual arm nerves of an upper-limb amputee to spare regions of muscle, it is possible to generate new electromyographic (EMG) signals for the control of robotic arms. These signals are directly related to the original function of the lost limb and allow simultaneous control of multiple joints in a natural way. This work has now been extended by the use of pattern recognition algorithms that decode the user's intent, enabling intuitive control of many more functions of the prostheses. Similarly, hand sensation nerves can be made to grow into spare skin on the residual limb so that when this skin is touched, the amputee feels as though their missing hand is being touched. This is a potential pathway to providing physiologically correct sensory feedback to amputees. Our team is now also developing a neural interface for powered leg prostheses that enables intuitive mobility based on a fusion of residual-limb EMG and sensors in the robotic leg.
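As an illustrative aside (a minimal sketch, not the Center's actual pipeline), the following Python example shows one common shape of such an intent decoder: simple time-domain features computed per EMG channel feeding a linear discriminant classifier. The channel count, window length, feature set, and classifier choice here are all assumptions, and the training data is synthetic.

```python
# Minimal sketch of EMG intent decoding (illustrative only; synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window: np.ndarray) -> np.ndarray:
    """Per-channel time-domain features: mean absolute value,
    waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    signs = np.signbit(window).astype(int)
    zc = np.sum(np.diff(signs, axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc.astype(float)])

# Fake training set: 200 windows of 200 samples x 8 EMG channels,
# each labeled with one of 4 intended movements.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 200, 8))
labels = rng.integers(0, 4, size=200)

X = np.stack([td_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)

# Decode intent from a new window of EMG.
new_window = rng.standard_normal((200, 8))
print("decoded intent class:", clf.predict(td_features(new_window)[None, :])[0])
```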

Todd A. Kuiken received a B.S. degree in biomedical engineering from Duke University (1983), a Ph.D. in biomedical engineering from Northwestern University in Evanston, Illinois (1989), and an M.D. from Northwestern University Medical School (1990). He is a board-certified physiatrist at the Rehabilitation Institute of Chicago. He is the Director of the Center for Bionic Medicine at the Rehabilitation Institute of Chicago and a Professor in the Departments of Physical Medicine and Rehabilitation, Biomedical Engineering, and Surgery at Northwestern University. Dr. Kuiken is an internationally respected leader in the care of people with limb loss, both as an active treating physician and as a research scientist. He developed a novel surgical technique called Targeted Reinnervation, which has now been successfully performed on over 100 people with amputations worldwide. His research is broadly published in journals including the New England Journal of Medicine, JAMA, Lancet and PNAS.

From Visual SLAM to Generic Real-time 3D Scene Perception

Andrew Davison, Imperial College London, UK

Wednesday, September 17, 8:00 AM, Grand and State Ballrooms

I will argue that a coherent stream of research in robotics and computer vision is leading us from the visual SLAM systems of the past 15+ years towards the generic real-time 3D scene understanding capabilities which will enable the next generation of smart robots and mobile devices.
SLAM is the problem of jointly estimating a robot's motion and the structure of the environment it moves through, and cameras of a variety of types are now the main outward-looking sensors used to achieve this. While early visual SLAM systems concentrated on real-time localisation as their main output, the latest ones are now capable of dense and detailed 3D reconstruction and, increasingly, semantic labelling and object awareness. A crucial factor in this progress has been the way continuing improvements in commodity processing performance have enabled algorithms previously considered "off-line" in computer vision research to become part of real-time systems. But we believe this is far from the whole story: when estimation of qualities such as object identity is undertaken in a real-time loop together with localisation, 3D reconstruction and possibly even interaction or manipulation, the predictions and context continuously available should make things much easier, leading to robustness and computational efficiency that feed back and are self-reinforcing. This, in our view, is what keeps progress towards generic real-time scene understanding firmly in the domain of the SLAM way of thinking, where incremental, real-time processing is used to make globally consistent scene estimates by repeatedly comparing live data against predictions and updating probabilistic models accordingly.
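As an illustrative aside (a toy sketch, not code from any system named in this talk), the one-dimensional EKF-style example below shows that predict-compare-update loop in its simplest form: the state jointly holds the robot and one landmark position, odometry drives the prediction, and each range measurement corrects both. All noise values and the motion model are assumptions.

```python
# Toy 1-D EKF-SLAM sketch (illustrative only): state = [robot_x, landmark_x].
import numpy as np

x = np.array([0.0, 5.0])       # initial guess: robot at 0 m, landmark at 5 m
P = np.diag([0.1, 4.0])        # landmark far less certain than robot
Q = np.diag([0.05, 0.0])       # motion noise (the landmark is static)
R = 0.1                        # range-measurement noise variance

F = np.eye(2)                  # motion-model Jacobian (landmark unchanged)
H = np.array([[-1.0, 1.0]])    # measurement model: z = landmark_x - robot_x

def predict(x, P, odometry):
    """Propagate the state with odometry; uncertainty grows."""
    x = x + np.array([odometry, 0.0])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Compare the live measurement against the prediction and correct."""
    z_pred = x[1] - x[0]              # predicted range to the landmark
    S = H @ P @ H.T + R               # innovation covariance
    K = (P @ H.T) / S                 # Kalman gain, shape (2, 1)
    x = x + (K * (z - z_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
for step in range(5):
    x, P = predict(x, P, odometry=1.0)           # move ~1 m per step
    true_range = 5.0 - (step + 1) * 1.0          # ground truth for fake sensor
    z = true_range + rng.normal(0.0, np.sqrt(R))
    x, P = update(x, P, z)

print("estimated robot and landmark positions:", x)
```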
I will describe and connect much of the research that I and others have conducted in visual SLAM over recent years, with examples from my own work from MonoSLAM through systems such as DTAM, KinectFusion and SLAM++.

Andrew Davison received the B.A. degree in physics and the D.Phil. degree in computer vision from the University of Oxford in 1994 and 1998, respectively. In his doctorate with Prof. David Murray at Oxford's Robotics Research Group he developed one of the first robot SLAM systems using vision. He spent two years as a post-doc at AIST, Tsukuba, Japan, where he continued to work on visual robot navigation. In 2000 he returned to the University of Oxford, and as an EPSRC Advanced Research Fellow from 2002 he developed the well-known MonoSLAM algorithm for real-time SLAM with a single camera. He joined Imperial College London as a Lecturer in 2005, held an ERC Starting Grant from 2008 to 2013, and was promoted to Professor in 2012. His Robot Vision Research Group continues to focus on advancing the basic technology of real-time localisation and mapping using vision, publishing advances in particular on real-time dense reconstruction and tracking, large-scale map optimisation, high-speed vision and tracking, object-level mapping and the use of parallel processing in vision. He maintains a deep interest in exploring the limits of computational efficiency in real-time vision problems. In 2014 he became the founding Director of the new Dyson Robotics Laboratory at Imperial College, a lab working on applications of computer vision to real-world domestic robots, where there is much potential to open up new product categories.


Session Keynote Speakers

Each speaking session will be kicked off by a 20-minute invited keynote talk, not associated with any particular paper.

  • Alin Albu-Schaeffer, DLR: Robots for Interaction with Humans and Unknown Environments
  • Nancy Amato, Texas A&M: Sampling-Based Planning: Foundations & Applications
  • Fumihito Arai, Nagoya University: Micro and Nano Robotics for Biomedical Innovations
  • Antonio Bicchi, University of Pisa: Natural Machine Motion and Embodied Intelligence
  • Oliver Brock, TU Berlin: Grasping and Manipulation by Humans and by Robots
  • Etienne Burdet, Imperial College London: Overview of Motor Interaction with Robots and Other Humans
  • Cenk Cavusoglu, Case Western Reserve University: Towards Intelligent Robotic Surgical Assistants
  • Howie Choset, Carnegie Mellon University: From Biology to Robot and Back
  • Gamini Dissanayake, University of Technology, Sydney: The SLAM Problem - A Fifteen Year Journey …
  • Greg Dudek, McGill University: Human-guided Video Data Collection in Marine Environments
  • Pierre Dupont, Boston Children's Hospital, Harvard Medical School: Medical Robotics — Melding Clinical Need with Engineering Research
  • Ryan Eustice, University of Michigan: Toward Persistent SLAM in Challenging Environments
  • Dario Floreano, EPFL: Bio-inspired Multi-modal Flying Robots
  • Clement Gosselin, Laval University: Innovative Mechanical Systems to Address Current Robotics Challenges
  • Greg Hager, Johns Hopkins University: Life in a World of Ubiquitous Sensing
  • Blake Hannaford, University of Washington: Surgical Robotics: Transition to Automation
  • Dennis Hong, UCLA: Humanoids and Bipeds
  • Ayanna Howard, Georgia Tech: Robots and Gaming – Therapy for Children with Disabilities
  • Lydia Kavraki, Rice University: Planning for Complex High-Level Missions
  • Jana Kosecka, George Mason University: Semantic Parsing in Indoors and Outdoors Environments
  • Vijay Kumar, University of Pennsylvania: Aerial Robot Swarms
  • Christian Laugier, INRIA: Bayesian Perception and Decision: From Theory to Real World Applications
  • Cecilia Laschi, Scuola Superiore Sant'Anna di Pisa: Soft Robotics
  • Steve LaValle, University of Illinois: From Robotics to VR and Back
  • John Leonard, MIT: Dense, Object-based 3D SLAM
  • Matt Mason, Carnegie Mellon University: What Is Manipulation?
  • Robin Murphy, Texas A&M: Lessons Learned in Field Robotics from Disasters
  • Paul Oh, UNLV: Material-Handling – Paradigms for Humanoids and UAVs
  • Allison Okamura, Stanford University: Haptics in Robot-Assisted Surgery
  • George Pappas, University of Pennsylvania: Formal Methods in Robotics
  • Frank Park, Seoul National University: Robot Motion Optimization
  • Jan Peters, TU Darmstadt: Machine Learning of Motor Skills for Robotics
  • Daniela Rus, MIT: Networked Robots
  • Brian Scassellati, Yale University: Human-Robot Interaction and Socially Assistive Robotics
  • Stefan Schaal, University of Southern California: Perception-Action-Learning and Associative Skill Memories
  • Stefano Stramigioli, University of Twente: Highly Dynamic, Energy-aware, Biomimetic Robots
  • Manuela Veloso, Carnegie Mellon University: Symbiotic Mobile Robot Autonomy in Human Environments
  • Robert Wood, Harvard University: Soft, Printable, and Small: An Overview of Manufacturing Methods for Novel Robots at Harvard
  • Kazuhito Yokoi, AIST: What Is a Humanoid Robot Good For?