IROS 2014 features three plenary speeches and 39 session keynotes by leaders in the field.
Monday, September 15, 8:20 AM, Grand and State Ballrooms
The technologies of robotics and computer vision are each over 50 years old. Once upon a time they were closely related and investigated, separately and together, in AI labs around the world. Vision has always been a hard problem, and early roboticists struggled to make vision work using the slow computers of the day — particularly for metric problems like understanding the geometry of the world. In the 1990s affordable laser rangefinders entered the scene and roboticists adopted them with enthusiasm, delighted with the metric information they could provide. Since that time laser-based perception has come to dominate robotics, while processing images from databases, not from robots, has come to dominate computer vision. What happened to that early partnership between robotics and vision? Is it forever broken, or is now the time to reconsider vision as an effective sensor for robotics? This talk will trace the history of robotics and vision, examine the state of the art and discuss what may happen in the future.
Peter Corke is Professor of Robotics and Control at Queensland University of Technology, and Director of the Australian Centre of Excellence for Robotic Vision. His research spans topics including visual servoing, high-speed hardware for machine vision, field robotics (particularly for mining and environmental monitoring), and sensor networks. He has written two books, “Robotics, Vision & Control” (2011) and “Visual Control of Robots” (1997); developed the Robotics and Machine Vision Toolboxes for MATLAB; was Editor-in-Chief of the IEEE Robotics & Automation Magazine (2010-13); was a founding editor of the Journal of Field Robotics; is a member of the editorial boards of the International Journal of Robotics Research and the Springer STAR series; and is a Fellow of the IEEE. He received his Bachelor of Engineering (Electrical), Master's, and PhD degrees, all from the University of Melbourne.
Tuesday, September 16, 8:00 AM, Grand and State Ballrooms
The ability to control complex robot prostheses is evolving quickly. I will describe research at the Center for Bionic Medicine/Rehabilitation Institute of Chicago and Northwestern University to develop a neural-machine interface that improves the function of artificial limbs. We have developed a surgical technique called Targeted Reinnervation that uses nerve transfers to improve robotic arm control and to provide sensation of the missing hand. By transferring the residual arm nerves of an upper-limb amputee to spare regions of muscle, it is possible to generate new electromyographic (EMG) signals for the control of robotic arms. These signals are directly related to the original function of the lost limb and allow simultaneous control of multiple joints in a natural way. This work has now been extended by the use of pattern recognition algorithms that decode the user's intent, enabling the intuitive control of many more functions of the prostheses. Similarly, hand-sensation nerves can be made to grow into spare skin on the residual limb so that when this skin is touched, the amputee feels as though their missing hand is being touched. This is a potential portal to providing physiologically correct sensory feedback to amputees. Our team is now also developing a neural interface for powered leg prostheses that enables intuitive mobility based on a fusion of residual-limb EMG and sensors in the robotic leg.
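For readers curious how this kind of intent decoding works in practice, here is a minimal illustrative sketch of windowed EMG pattern recognition: it computes the widely used Hudgins time-domain features (mean absolute value, waveform length, zero crossings, slope sign changes) per channel and labels each window with a simple nearest-centroid rule standing in for the linear discriminant classifiers common in this literature. The window sizes, thresholds, and synthetic data are illustrative assumptions, not the Center for Bionic Medicine's actual pipeline.

```python
import numpy as np

def td_features(w, eps=0.01):
    """Hudgins time-domain features for one EMG channel window."""
    d = np.diff(w)
    mav = np.mean(np.abs(w))                               # mean absolute value
    wl = np.sum(np.abs(d))                                 # waveform length
    zc = np.sum((w[:-1] * w[1:] < 0) & (np.abs(d) > eps))  # zero crossings
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 (np.maximum(np.abs(d[:-1]), np.abs(d[1:])) > eps))  # slope sign changes
    return np.array([mav, wl, zc, ssc])

def windows_to_features(emg, win=200, step=50):
    """Sliding-window feature vectors over multichannel EMG (samples x channels)."""
    return np.array([
        np.concatenate([td_features(emg[s:s + win, c]) for c in range(emg.shape[1])])
        for s in range(0, len(emg) - win + 1, step)
    ])

class NearestCentroid:
    """Simple stand-in for the LDA classifiers common in EMG pattern recognition."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        dists = np.linalg.norm(X[:, None, :] - self.mu_[None], axis=2)
        return self.classes_[dists.argmin(axis=1)]

# Toy demo with synthetic 4-channel EMG: class 1 has stronger muscle activity.
rng = np.random.default_rng(0)
rest = rng.standard_normal((2000, 4)) * 0.2
grip = rng.standard_normal((2000, 4)) * 1.0
X = np.vstack([windows_to_features(rest), windows_to_features(grip)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
clf = NearestCentroid().fit(X, y)
print(clf.predict(windows_to_features(grip[:400])))  # should decode class 1
```

In practice such classifiers are typically trained per user on short calibration recordings of each intended movement, which is part of what makes the resulting control feel intuitive.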
Todd A. Kuiken received a B.S. degree in biomedical engineering from Duke University (1983), a Ph.D. in biomedical engineering from Northwestern University in Evanston, Illinois (1989), and an M.D. from Northwestern University Medical School (1990). He is a board-certified physiatrist at the Rehabilitation Institute of Chicago, where he directs the Center for Bionic Medicine, and a Professor in the Departments of Physical Medicine and Rehabilitation, Biomedical Engineering, and Surgery at Northwestern University. Dr. Kuiken is an internationally respected leader in the care of people with limb loss, both as an active treating physician and as a research scientist. He developed a novel surgical technique called Targeted Reinnervation, which has now been successfully performed on over 100 people with amputations worldwide. His research is broadly published in journals including the New England Journal of Medicine, JAMA, The Lancet, and PNAS.
Wednesday, September 17, 8:00 AM, Grand and State Ballrooms
I will argue that a coherent stream of research in robotics and computer vision is leading us from the visual SLAM systems of the past 15+ years towards the generic real-time 3D scene understanding capabilities which will enable the next generation of smart robots and mobile devices.
SLAM is the problem of jointly estimating a robot's motion and the structure of the environment it moves through, and cameras of various types are now the main outward-looking sensors used to achieve this. While early visual SLAM systems concentrated on real-time localisation as their main output, the latest ones are now capable of dense and detailed 3D reconstruction and, increasingly, semantic labelling and object awareness. A crucial factor in this progress has been how continuing improvements in commodity processing performance have enabled algorithms previously considered "off-line" in computer vision research to become part of real-time systems. But we believe this is far from the whole story: when estimation of qualities such as object identity is undertaken in a real-time loop together with localisation, 3D reconstruction and possibly even interaction or manipulation, the predictions and context continuously available should make things much easier, leading to robustness and computational efficiency which feed back and are self-reinforcing. This, in our view, is what keeps progress towards generic real-time scene understanding firmly in the domain of SLAM ways of thinking, where incremental, real-time processing is used to make globally consistent scene estimates by repeatedly comparing live data against predictions and updating probabilistic models accordingly.
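As a concrete illustration of that predict-compare-update cycle, here is a minimal sketch in the spirit of filtering-based SLAM (the family MonoSLAM belongs to), reduced to one dimension: the filter predicts the robot's motion, compares each landmark observation against what the current joint estimate predicts, and updates the probabilistic model accordingly. The noise values, landmark layout, and code are illustrative assumptions, not any particular system's implementation.

```python
import numpy as np

# 1-D toy EKF-SLAM: state = [robot position, landmark_1, landmark_2].
M = 2
x = np.zeros(1 + M)              # joint estimate of robot and map
P = np.eye(1 + M)                # joint uncertainty (covariance)
Q, R = 0.01, 0.05                # motion / measurement noise variances

def predict(x, P, u):
    """Motion step: robot moves by commanded u; landmarks stay fixed."""
    x = x.copy(); x[0] += u
    P = P.copy(); P[0, 0] += Q
    return x, P

def update(x, P, j, z):
    """Compare observation z (relative position of landmark j) to the
    prediction from the current estimate, then update the joint model."""
    H = np.zeros((1, x.size))
    H[0, 0], H[0, 1 + j] = -1.0, 1.0     # model: z = landmark_j - robot
    innovation = z - (x[1 + j] - x[0])   # live data vs. prediction
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T / S                      # Kalman gain
    x = x + (K * innovation).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

# Toy run: true robot starts at 0, moves +1 per step; landmarks at 3 and 7.
truth = np.array([0.0, 3.0, 7.0])
rng = np.random.default_rng(1)
for _ in range(10):
    truth[0] += 1.0
    x, P = predict(x, P, 1.0)
    for j in range(M):
        z = truth[1 + j] - truth[0] + rng.normal(0.0, R ** 0.5)
        x, P = update(x, P, j, z)
print(np.round(x, 2))   # estimates approach [10, 3, 7]
```

Real visual SLAM replaces these scalar states with camera poses and 3D features or dense surfaces, but the same estimate-predict-compare-update structure persists.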
I will describe and connect much of the research that I and others have conducted in visual SLAM in recent years, with examples from my own work, from MonoSLAM through to systems like DTAM, KinectFusion and SLAM++.
Andrew Davison received the B.A. degree in physics and the D.Phil. degree in computer vision from the University of Oxford in 1994 and 1998, respectively. In his doctorate with Prof. David Murray at Oxford's Robotics Research Group, he developed one of the first robot SLAM systems using vision. He spent two years as a post-doc at AIST, Tsukuba, Japan, where he continued to work on visual robot navigation. In 2000 he returned to the University of Oxford, and as an EPSRC Advanced Research Fellow from 2002 he developed the well-known MonoSLAM algorithm for real-time SLAM with a single camera. He joined Imperial College London as a Lecturer in 2005, held an ERC Starting Grant from 2008 to 2013, and was promoted to Professor in 2012. His Robot Vision Research Group continues to focus on advancing the basic technology of real-time localisation and mapping using vision, publishing advances in particular on real-time dense reconstruction and tracking, large-scale map optimisation, high-speed vision and tracking, object-level mapping, and the use of parallel processing in vision. He maintains a deep interest in exploring the limits of computational efficiency in real-time vision problems. In 2014 he became the founding Director of the new Dyson Robotics Laboratory at Imperial College, a lab working on applications of computer vision to real-world domestic robots, where there is much potential to open up new product categories.
Each speaking session will be kicked off by a 20-minute invited keynote talk, not associated with any particular paper.