IEEE/RSJ International Conference on
Intelligent Robots and Systems
Vancouver, BC, Canada
September 24–28, 2017


Plenaries
Dieter Fox, University of Washington
Fei-Fei Li, Stanford University/Google
Maja Mataric, University of Southern California
Keynotes
Nicholas Roy, MIT
Brian Gerkey, Open Source Robotics Foundation (OSRF)
Julie Shah, MIT
Steven Waslander, University of Waterloo
Lynne Parker, University of Tennessee
Josh Bongard, University of Vermont
Vincent Hayward, Institut des Systèmes Intelligents et de Robotique (ISIR)
David Hsu, National University of Singapore
Frank Chongwoo Park, Seoul National University
Edwin Olson, University of Michigan
Hiroshi Ishiguro, Advanced Telecommunications Research Institute International (ATR)
Oliver Brock, Technical University of Berlin
Cecilia Laschi, The BioRobotics Institute, Scuola Superiore Sant’Anna
Tim Salcudean, University of British Columbia
Joey Durham, Amazon Robotics
Aleksandr Kapitonov, Airalab / ITMO University

SCHEDULE

Monday 25 September

Time          | Ballroom   | Room 211    | Room 109     | Room 118
09:00 - 10:00 | Dieter Fox |             |              |
13:00 - 13:45 |            | Edwin Olson | Brian Gerkey | Frank Chongwoo Park
13:45 - 14:30 |            | Julie Shah  | Josh Bongard | David Hsu

 

Tuesday 26 September

Time          | Ballroom   | Room 211         | Room 109        | Room 118
09:00 - 10:00 | Fei-Fei Li |                  |                 |
13:00 - 13:45 |            | Nicholas Roy     | Lynne Parker    | Oliver Brock
13:45 - 14:30 |            | Hiroshi Ishiguro | Vincent Hayward | Steven Waslander

 

Wednesday 27 September

Time          | Ballroom     | Room 211    | Room 109      | Room 118
09:00 - 10:00 | Maja Matarić |             |               |
13:45 - 14:30 |              | Joey Durham | Tim Salcudean | Cecilia Laschi

Ballroom B, C & D
12:00 - 12:10 | Aleksandr Kapitonov

 


Plenary Speakers

 

Dieter Fox

Toward Robots that Understand People and Their Environments

To interact and collaborate with people in a natural way, robots must be able to recognize objects in their environments, accurately track the motion of humans, and estimate their goals and intentions. Recent years have seen dramatic improvements in robotic capabilities to model, detect, and track non-rigid objects such as human bodies, hands, and their own manipulators. These developments can serve as the basis for providing robots with an unprecedented understanding of their environment and the people therein. I will use examples from our research on modeling, detecting, and tracking in 3D scenes to highlight some of these advances and discuss open problems that still need to be addressed. I will also use these examples to highlight the pros and cons of model-based approaches and deep learning techniques for solving perception problems in robotics.
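The state-estimation machinery behind this kind of tracking is often a Bayes filter. The sketch below is a minimal bootstrap particle filter on a 1-D toy target; all models and noise levels are invented for illustration and do not represent the speaker's actual systems.

```python
# Minimal bootstrap particle filter for 1-D target tracking.
# Motion/measurement models and noise levels are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

N = 500                               # number of particles
particles = rng.normal(0.0, 1.0, N)   # initial belief over position
weights = np.ones(N) / N

def step(particles, weights, z, motion_std=0.2, meas_std=0.5):
    # Predict: propagate each particle through a noisy motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.size)
    # Update: weight particles by the likelihood of the measurement z.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Track a target drifting at +0.1 per step from noisy position readings.
true_pos = 0.0
for t in range(50):
    true_pos += 0.1
    z = true_pos + rng.normal(0.0, 0.5)
    particles, weights = step(particles, weights, z)
print("estimate:", np.average(particles, weights=weights), "truth:", true_pos)
```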

 

Dieter Fox is a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. From 2009 to 2011, he was also Director of the Intel Research Labs Seattle. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 180 technical papers and is the co-author of the textbook “Probabilistic Robotics.” He is a Fellow of the IEEE and the AAAI, and he received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.


Fei-Fei Li

A Quest for Visual Intelligence

It took nature and evolution more than five hundred million years to develop a powerful visual system in humans. The journey for AI and computer vision is about half a century old. In this talk, I will briefly discuss the key ideas and cutting-edge advances in the quest for visual intelligence in computers, focusing on work done in our lab over the years.


Dr. Fei-Fei Li is currently on sabbatical as the Chief Scientist of AI/ML at Google Cloud. She is an Associate Professor in the Computer Science Department at Stanford and the Director of the Stanford Artificial Intelligence Lab. Dr. Li's main research areas are machine learning, deep learning, computer vision, and cognitive and computational neuroscience. She has published more than 150 scientific articles in top-tier journals and conferences, including Nature, PNAS, Journal of Neuroscience, CVPR, ICCV, NIPS, ECCV, IJCV, and IEEE-PAMI. She obtained her B.A. degree in physics from Princeton in 1999 with High Honors, and her Ph.D. degree in electrical engineering from the California Institute of Technology (Caltech) in 2005. She joined Stanford in 2009 as an assistant professor and was promoted to associate professor with tenure in 2012. Prior to that, she was on the faculty at Princeton University (2007-2009) and the University of Illinois Urbana-Champaign (2005-2006). Dr. Li is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a leading national voice advocating for diversity in STEM and AI. She is co-founder of Stanford's renowned SAILORS outreach program for high school girls and the national non-profit AI4ALL. For her work in AI, Dr. Li was a speaker at the TED2015 main conference and is a recipient of the IAPR 2016 J.K. Aggarwal Prize, the 2016 NVIDIA Pioneer in AI Award, the 2014 IBM Faculty Fellow Award, the 2011 Alfred Sloan Faculty Award, the 2012 Yahoo Labs FREP award, the 2009 NSF CAREER award, the 2006 Microsoft Research New Faculty Fellowship, and a number of Google Research awards. Work from Dr. Li's lab has been featured in a variety of popular press outlets, including the New York Times, Wall Street Journal, Fortune, Science, Wired, MIT Technology Review, and the Financial Times. She was selected as one of the "Great Immigrants: The Pride of America" in 2016 by the Carnegie Foundation, whose past winners include Albert Einstein, Yo-Yo Ma, and Sergey Brin.

 

Maja J. Matarić

Automation vs. Augmentation: Defining the Future of Socially Assistive Robotics

Robotics has been driven by the desire to automate work, but automation raises concerns about the impact on the future of work. Less discussed but no less important are the implications for human health, as the science on longevity and resilience indicates that having the drive to work is key to health and wellness.

However, robots, machines that were originally invented to automate work, are also becoming helpful by not doing any physical work at all, but instead by motivating and coaching us to do our own work, based on evidence from neuroscience and behavioral science demonstrating that human behavior is most strongly influenced by physically embodied social agents, including robots. The field of socially assistive robotics (SAR) focuses on developing intelligent, socially interactive machines that provide assistance through social rather than physical means. The robot's physical embodiment is at the heart of SAR's effectiveness, as it leverages the inherently human tendency to engage with lifelike (but not necessarily human-like or otherwise biomimetic) agents. People readily ascribe intention, personality, and emotion to robots; SAR leverages this engagement to develop robots capable of monitoring, motivating, and sustaining user activities and improving human learning, training, performance, and health outcomes. Human-robot interaction (HRI) for SAR is a growing multifaceted research field at the intersection of engineering, health sciences, neuroscience, and the social and cognitive sciences, with rapidly growing commercial spinouts. This talk will describe research into embodiment, modeling and steering social dynamics, and long-term adaptation and learning for SAR, grounded in projects involving multi-modal activity data, modeling personality and engagement, formalizing social use of space and non-verbal communication, and personalizing the interaction with the user over a period of months, among others. SAR systems have been validated with a variety of user populations, including stroke patients, children with autism spectrum disorders, and the elderly with Alzheimer's and other forms of dementia; this talk will cover the short-, middle-, and long-term commercial applications of SAR, as well as the frontiers of SAR research.

 

Maja Matarić is the Chan Soon-Shiong Professor of Computer Science, Neuroscience, and Pediatrics at the University of Southern California, founding director of the USC Robotics and Autonomous Systems Center, and Vice Dean for Research in the Viterbi School of Engineering. Her PhD and MS are from MIT, and her BS is from the University of Kansas. She is a Fellow of AAAS, IEEE, and AAAI, and received the Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring, the Anita Borg Institute Women of Vision Award in Innovation, and the Okawa Foundation, NSF CAREER, MIT TR35, and IEEE RAS Early Career awards. She has published extensively and is active in K-12 STEM outreach. A pioneer of socially assistive robotics, her research enables robots to help people through social interaction in therapy, rehabilitation, training, and education, developing robot-assisted therapies for autism, stroke, Alzheimer's, and other special needs, as well as wellness interventions (https://robotics.usc.edu/interaction/). She is also founder and CSO of Embodied, Inc. (www.embodied.me).


Keynote Speakers

Nicholas Roy

Representations vs Algorithms: Symbols and Geometry in Robotics

In the last few years, the ability of robots to understand and operate in the world around them has advanced considerably. Examples include the growing number of self-driving car systems, the considerable work in robot mapping, and the growing interest in home and service robots. However, one obstacle to more widely useful robots is the difficulty that people have in interacting with them. A major driver of this difficulty is that how robots reason about the world still differs substantially from how people reason. Robots think in terms of point features, dense occupancy grids, and action cost maps. People think in terms of landmarks, segmented objects, and tasks (among other representations). There are good reasons why these differ, and robots are unlikely to ever reason about the world in the same way that people do. However, for effective operation, robots must be able to interact naturally with the people around them and act as real team-mates.

I will talk about recent work in joint reasoning about semantic representations and physical representations, especially how such reasoning relates to natural language understanding, and how we can bridge the gap between low-level sensing and control, and higher-level semantic representations to create more capable robots.
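To make the contrast concrete, the sketch below shows the grid side of the divide: a minimal log-odds occupancy-grid update of the kind robots commonly use for low-level mapping. The inverse sensor model probabilities (0.7/0.3) and the beam geometry are illustrative assumptions, not from the talk.

```python
# Minimal log-odds occupancy grid update: the dense "robot-side"
# representation the abstract contrasts with human landmark reasoning.
import numpy as np

grid = np.zeros((100, 100))        # log-odds of occupancy; 0 = unknown (p = 0.5)

L_OCC = np.log(0.7 / 0.3)          # evidence for a cell observed occupied
L_FREE = np.log(0.3 / 0.7)         # evidence for a cell observed free

def update_cell(grid, i, j, hit):
    """Bayesian update of one cell from a single range-sensor observation."""
    grid[i, j] += L_OCC if hit else L_FREE

# A range beam that passes through row 50 and ends at (50, 60):
for j in range(50, 60):
    update_cell(grid, 50, j, hit=False)   # traversed cells look free
update_cell(grid, 50, 60, hit=True)       # the endpoint looks occupied

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))   # recover occupancy probabilities
print(prob[50, 58:62])
```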

 

Nicholas Roy is the Bisplinghoff Professor of Aeronautics & Astronautics at the Massachusetts Institute of Technology and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. He received his Ph.D. in Robotics from Carnegie Mellon University in 2003. His research interests include unmanned aerial vehicles, autonomous systems, human-computer interaction, decision-making under uncertainty, and machine learning. He spent two years at Google [x] as the founder of Project Wing.


Brian Gerkey

Fun! Free! Awesome! Advanced robotics in the era of Open Source Software

After many years of it being "just around the corner," we are now witnessing the beginning of a robot revolution. We hear about robots daily, from awe-inspiring technical achievements to breath-taking investments and acquisitions. Why? And, why now? In this session, I'll explain how open source software, embedded computing, and new sensors have come together to change the landscape for robotics developers (and users).
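As a concrete taste of that open-source landscape, here is the canonical ROS 1 "talker" node from the rospy tutorials, essentially verbatim. It assumes a ROS 1 installation with a roscore running; the topic name and publish rate are arbitrary choices.

```python
#!/usr/bin/env python
# The canonical ROS 1 "talker": publish a string on the 'chatter' topic at 10 Hz.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        msg = "hello from ROS at %s" % rospy.get_time()
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

With `rosrun` and a matching subscriber, this ten-line node is many developers' first contact with the ecosystem the talk describes.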

 

Brian Gerkey is CEO of OSRF. Prior to joining OSRF, Brian was Director of Open Source Development at Willow Garage. Previously, Brian was a Computer Scientist in the Artificial Intelligence Center at SRI, and before that, a postdoctoral research fellow in the Artificial Intelligence Lab at Stanford University. Brian received his Ph.D. in Computer Science from the University of Southern California (USC) in 2003, his M.S. in Computer Science from USC in 2000, and his B.S.E. in Computer Engineering, with a secondary major in Mathematics and a minor in Robotics and Automation, from Tulane University in 1998. Brian is a strong believer in, frequent contributor to, and constant beneficiary of open source software. Since 2008, Brian has worked on the ROS Project, which develops and releases one of the most widely used robot software platforms in robotics research and education (and soon industry). He is a founding and former lead developer of the open source Player Project, which continues to maintain widely used robot simulation and development tools. For his work on Player and ROS, Brian was recognized by MIT Technology Review with the TR35 award in 2011 and by the Silicon Valley Business Journal with its 40 Under 40 award in 2016.


Steven Waslander

Static and Dynamic Multi-Camera Clusters for Localization and Mapping

Multi-camera clusters provide significant advantages over monocular and stereo configurations for localization and mapping, particularly in complex environments with moving objects. The wide or omni-directional field of view afforded by multiple cameras allows the mitigation of detrimental effects from feature deprivation or occlusion. Where possible, large baselines between camera centres afford good sensitivity for scale resolution, without the need for overlap. In this talk, I will describe our work on multi-camera localization and mapping for both static clusters with rigidly mounted cameras and dynamic clusters with gimballed cameras. We evaluate conditions for degeneracy of the state estimation process for each type of cluster. We demonstrate performance results on unmanned aerial vehicles and automotive benchmark data.

 

Prof. Waslander received his B.Sc.E. in 1998 from Queen's University, and his M.S. in 2002 and Ph.D. in 2007 from Stanford University, both in Aeronautics and Astronautics. He was a Control Systems Analyst for Pratt & Whitney Canada from 1998 to 2001. In 2008, he joined the Department of Mechanical and Mechatronics Engineering at the University of Waterloo in Waterloo, ON, Canada, as an Assistant Professor. He is the Director of the Waterloo Autonomous Vehicles Laboratory (WAVELab, https://wavelab.uwaterloo.ca). His research interests are in the areas of autonomous aerial and ground vehicles, simultaneous localization and mapping, nonlinear estimation and control, and multi-vehicle systems.


Josh Bongard

Robots that Evolve, Develop, and Learn

Many organisms experience radical morphological and neurological change over evolutionary time, as well as their own lifetimes. Traditionally, this has been hard to do with rigid-body robots. The emerging field of soft robotics, however, is now making it relatively easy to create robots that change their body plans and controllers over multiple time scales. In this talk I will explore not just how to do this, but why one might choose to do so: I will show how such robots are more adaptable than robots that cannot adapt body and brain over time.
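A minimal sketch of the evolutionary loop such work builds on, with the robot body plan reduced to a parameter vector. The fitness function here is a placeholder assumption; in practice it would be a physics simulation of the robot's locomotion.

```python
# Toy evolutionary loop over a robot "body plan" encoded as a parameter vector.
import numpy as np

rng = np.random.default_rng(1)

def fitness(body):
    # Placeholder: pretend an ideal morphology exists at this target vector.
    # A real system would simulate the body and score, e.g., distance walked.
    target = np.array([0.8, 0.2, 0.5, 0.9])
    return -np.sum((body - target) ** 2)

pop = rng.random((20, 4))            # 20 candidate bodies, 4 morphology genes
for gen in range(100):
    scores = np.array([fitness(b) for b in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # keep the top half
    children = parents + rng.normal(0, 0.05, parents.shape)  # mutate copies
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(b) for b in pop])]
print("best body plan:", np.round(best, 2))
```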

 

Josh Bongard is a roboticist and professor in the Department of Computer Science at the University of Vermont. He was a Microsoft New Faculty Fellow (2006), an MIT Technology Review “Top Innovator under the Age of 35” (2007), and the recipient of a PECASE award (2011). His funded research covers the crowdsourcing of robotics, embodied cognition, human-robot interaction, autonomous machines that recover functionality after unanticipated damage, soft robotics, and white box machine learning. His work has been funded by NSF, DARPA, ARO, AFRL, and NASA. He is the co-author of the book How the Body Shapes the Way We Think: A New View of Intelligence.


Vincent Hayward

Mechanics of Tactile Perception and Haptic Interface Design

The physics of contact differs in fundamental ways from the physics of acoustics and optics. It should therefore be expected that the processing of somatosensory information is very different from the processing in other sensory modalities. The talk will describe some salient facts regarding the physics of touch and will continue with a description of recent findings regarding the processing of time-evolving tactile inputs in second-order neurones in mammals. These ideas can be applied to the design of cost-effective, efficient tactile displays and tactile sensors.

 

Vincent Hayward is a professor (on leave) at the Université Pierre et Marie Curie (UPMC) in Paris. Before that, he was with the Department of Electrical and Computer Engineering at McGill University, Montréal, Canada, where he became a full Professor in 2006 and was the Director of the McGill Centre for Intelligent Machines from 2001 to 2004. Hayward is interested in haptic device design, human perception, and robotics, and is a Fellow of the IEEE. He was a European Research Council Grantee from 2010 to 2016. Since January 2017, Hayward has been a Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship.


David Hsu

Robust Robot Decision Making under Uncertainty: From Known Unknowns to Unknown Unknowns

In the near future, robots will "live" with humans, providing a variety of services at homes, in workplaces, or on the road. For robots to become effective and reliable human collaborators, a core challenge is the inherent uncertainty in understanding human intentions, in addition to imperfect robot control and sensor noise. To achieve robust performance, robots must hedge against uncertainties and sometimes actively elicit information to reduce uncertainties. I will briefly review Partially Observable Markov Decision Process (POMDP) as a principled general model for planning under uncertainty and present our recent work that tackles the intractable POMDP planning problem and achieves near real-time performance in dynamic environments for autonomous vehicle navigation among many pedestrians. In practice, an outstanding challenge of POMDP planning is model construction. I will also discuss how recent advances in deep learning can help bridge the gap and connect planning and learning.
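The object a POMDP planner reasons over is the belief, a probability distribution over states updated by Bayes' rule after every action and observation. The sketch below implements that update for an invented two-state toy problem; the transition and observation numbers are assumptions for illustration, not from the speaker's system.

```python
# Discrete POMDP belief update: b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) b(s).
import numpy as np

# T[a][s, s']: transition probabilities; O[a][s', o]: observation model.
T = {"wait": np.array([[0.9, 0.1],
                       [0.2, 0.8]])}
O = {"wait": np.array([[0.7, 0.3],    # state 0 mostly emits observation 0
                       [0.2, 0.8]])}  # state 1 mostly emits observation 1

def belief_update(b, a, o):
    b_pred = T[a].T @ b           # predict through the transition model
    b_new = O[a][:, o] * b_pred   # weight by the observation likelihood
    return b_new / b_new.sum()    # normalize

b = np.array([0.5, 0.5])          # maximum-uncertainty prior
for obs in [1, 1, 0, 1]:
    b = belief_update(b, "wait", obs)
    print("belief:", np.round(b, 3))
```

Planning then means searching over sequences of such belief updates, which is exactly where the intractability the abstract mentions comes from.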

 

David Hsu is a professor of computer science at the National University of Singapore (NUS), a member of the NUS Graduate School for Integrative Sciences & Engineering, and deputy director of the Advanced Robotics Center. He received his Ph.D. in computer science from Stanford University. In recent years, he has been working on robot planning and learning under uncertainty.

He served as the General Co-Chair of IEEE International Conference on Robotics & Automation (ICRA) 2016, the Program Chair of Robotics: Science & Systems (RSS) 2015, a steering committee member of International Workshop on the Algorithmic Foundation of Robotics (WAFR), an editorial board member of Journal of Artificial Intelligence Research, and an associate editor of IEEE Transactions on Robotics. He, along with colleagues and students, won the Humanitarian Robotics and Automation Technology Challenge Award at ICRA 2015 and the RoboCup Best Paper Award at IEEE/RSJ International Conference on Intelligent Robots & Systems (IROS) 2015.


Julie Shah

Enhancing Human Capability with Intelligent Machine Teammates

Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can't explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people's performance on military, healthcare, and manufacturing tasks when aided by intelligent machine teammates.
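One common building block for inferring a teammate's mental state is Bayesian inference over their goal given observed actions. The sketch below is a generic version with a made-up action-likelihood table; it illustrates the idea, not Prof. Shah's actual models.

```python
# Generic Bayesian goal inference: maintain a posterior over a teammate's
# intended goal and update it after each observed action.
import numpy as np

goals = ["fetch_part_A", "fetch_part_B", "inspect"]
prior = np.array([1/3, 1/3, 1/3])

# P(action | goal): rows = goals, columns = actions [move_left, move_right].
# In practice this table would come from a learned model of human behavior.
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7],
                       [0.5, 0.5]])

posterior = prior.copy()
for action in [0, 0, 1]:              # observed action sequence
    posterior *= likelihood[:, action]
    posterior /= posterior.sum()
    print(dict(zip(goals, np.round(posterior, 3))))
```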

 

Julie Shah is an Associate Professor of Aeronautics and Astronautics at MIT and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. As a current fellow of Harvard University's Radcliffe Institute for Advanced Study, she is expanding the use of human cognitive models for artificial intelligence. She has translated her work to manufacturing assembly lines, healthcare applications, transportation and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also in Technology Review’s 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.


Edwin Olson

Reliability and Robustness of Autonomous Systems

From self-driving cars to domestic robots, it's relatively easy to build a system that works well enough for the purposes of a video. Achieving high levels of reliability, on the other hand, is all too often viewed as an engineering step through which bugs are removed and corner cases are addressed. In some domains, however, the gap between the reliability demonstrated by today's systems and the bar needed for real-world deployment remains many orders of magnitude wide. This is not a matter of engineering polish, but rather a need for fundamentally different ways of building our systems.

 

Edwin Olson is an Associate Professor of Computer Science and Electrical Engineering at the University of Michigan, and co-founder/CEO of May Mobility, Inc., which develops autonomous cars. He earned his PhD from MIT in 2008 for work in robot mapping.

 

He has worked on autonomous vehicles for the last 10 years, including the 2007 DARPA Urban Challenge, collaborating with Ford as a Principal Investigator on their autonomous vehicle program, serving as Co-Director for Autonomous Driving Development at the Toyota Research Institute, and running the APRIL lab at the University of Michigan since 2008. His academic research includes work on perception, planning, and mapping. He was awarded a DARPA Young Faculty Award, named one of Popular Science's "Brilliant 10", and won the 2010 MAGIC robotics competition. He is perhaps best known for his work on AprilTags, SLAM using MaxMixtures and SGD, and Multi-Policy Decision Making.


Hiroshi Ishiguro

Studies on Interactive Robots: Principles of Conversation

We have developed interactive robots and androids and studied principles of interaction and conversation between humans and robots at Osaka University and ATR. This talk introduces the robots and androids and discusses our future society supported by them. In addition, it discusses the fundamentals of human-robot interaction and conversation, focusing on the feeling of presence given by robots and androids and on conversations with two robots and touch panels.

 

Hiroshi Ishiguro received a D.Eng. in systems engineering from Osaka University, Japan in 1991. He is currently Professor in the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University (2009-), visiting Director (2014-) (group leader: 2002-2013) of Hiroshi Ishiguro Laboratories at the Advanced Telecommunications Research Institute International, and an ATR fellow. His research interests include distributed sensor systems, interactive robotics, and android science. He has published more than 300 papers in major journals and conferences, such as Robotics Research and IEEE PAMI. He has also developed many humanoids and androids, called Robovie, Repliee, Geminoid, Telenoid, and Elfoid, which have been covered many times by major media outlets such as the Discovery Channel, NHK, and the BBC. He has received the Best Humanoid Award four times at RoboCup. In 2011, he won the Osaka Cultural Award, presented by the Osaka Prefectural Government and the Osaka City Government for his great contribution to the advancement of culture in Osaka. In 2015, he received the Prize for Science and Technology (Research Category) from the Minister of Education, Culture, Sports, Science and Technology (MEXT). He was also awarded the Sheikh Mohammed Bin Rashid Al Maktoum Knowledge Award in Dubai in 2015.

 


Oliver Brock

Robotics as the Path to Intelligence

The historical promise of robotics is to devise technological artifacts that replicate all human capabilities. This includes physical capabilities like locomotion and dexterity, intellectual capabilities like reasoning and learning, and also social capabilities like collaboration and training. Are we, as a discipline, still pursuing this objective? Is it even worthwhile or promising to do so? And if so, are we making good progress? I will portray my views on the importance for robotics of understanding and replicating intelligence, including physical intelligence and social intelligence. By juxtaposing views from related disciplines with recent accomplishments of our field, e.g. soft robotics and deep learning, I will sketch a path towards a future generation of robots with human-like abilities.

 

Oliver Brock is the Alexander-von-Humboldt Professor of Robotics in the School of Electrical Engineering and Computer Science at the Technische Universität Berlin in Germany. He received his Diploma in Computer Science in 1993 from the Technische Universität Berlin and his Master's and Ph.D. in Computer Science from Stanford University in 1994 and 2000, respectively. He also held post-doctoral positions at Rice University and Stanford University. Starting in 2002, he was an Assistant Professor and Associate Professor in the Department of Computer Science at the University of Massachusetts Amherst, before moving back to the Technische Universität Berlin in 2009. The research of Brock's lab, the Robotics and Biology Laboratory, focuses on mobile manipulation, interactive perception, grasping, manipulation, soft material robotics, interactive machine learning, deep learning, motion generation, and the application of algorithms and concepts from robotics to computational problems in structural molecular biology. He is the president of the Robotics: Science and Systems foundation.


Lynne Parker

Cooperating without Communicating: Achieving Teaming by Observation

A long-term goal of research in both multi-robot and human-robot teams is to achieve a natural and intuitive collaboration, similar to the type of implicit cooperation demonstrated by many well-practiced human-only teams.  In smoothly-operating human-only teams, individuals have trained together and understand intuitively how to interact with each other on the current task without the need for any explicit commands or conversations. A fundamental research question is whether a similar level of teaming fluency can be created in multi-robot and human-robot teams.  Such teams would consist of individuals that operate side-by-side in the same physical space, each performing physical actions based upon their individual skills and capabilities, while also collaborating seamlessly and implicitly with other teammates. Achieving this level of implicit interaction requires team member observation of the activities and actions of teammates, with appropriate action responses to ensure that the team collectively achieves its shared objectives.  This talk explores the challenges and possible solutions for achieving team-based implicit cooperation without explicit communication, focusing on the collaborations that are possible through observation, modeling, inference, and implicit activity coordination in multi-robot and human-robot teams.

 

Dr. Lynne E. Parker is an Associate Dean in the Tickle College of Engineering at the University of Tennessee, Knoxville (UTK), and Professor in the Department of Electrical Engineering and Computer Science. She received her PhD in Computer Science from the Massachusetts Institute of Technology. Lynne is the founder and director of the Distributed Intelligence Laboratory at UTK. She has made significant research contributions in distributed and heterogeneous robot systems, machine learning, and human-robot interaction, and has received numerous awards for her research, teaching, and service, including the PECASE Award (U.S. Presidential Early Career Award for Scientists and Engineers), the IEEE RAS Distinguished Service Award, and many UTK Chancellor's, College, and Departmental awards. Dr. Parker has been active in the IEEE Robotics and Automation Society for many years; she served as the General Chair for the 2015 IEEE International Conference on Robotics and Automation, as the Editor-in-Chief of the IEEE RAS Conference Editorial Board, as an Administrative Committee Member of RAS, and as Editor of IEEE Transactions on Robotics. She is a Fellow of IEEE.


Frank Chongwoo Park

Attention, Noise, and the Coordination of Robot Movements

As both robots and their tasks become more complex, strategies for perception, planning, and control must increasingly take into account the limits on a robot's computation and communication resources. Inspired by recent research on the role of attention in human motor control and visual perception, and more generally by the remarkable ability of humans to perform multiple tasks---both physical and cognitive---in a simultaneous manner, this talk will explore some ideas for quantifying attention, and how they can be used in robot perception, planning, and control. Some relevant notions of attention and control cost from the human motor control and optimal control theory literature are introduced, and examples are given of planning and control methods that make use of these concepts; these include simple kinematic feedback control laws based on the minimum variance principle for generating natural human-like motions, and combined feedforward-feedback control laws for ball catching and other skills that rely on sensory feedback. The roles of feedforward and feedback in human and robot motor control, and whether attention and sparsity are meaningful optimality criteria for robot coordination and learning, are also examined.
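One way to write down the minimum variance principle mentioned above follows Harris and Wolpert's signal-dependent-noise formulation: motor commands are corrupted by noise whose variance grows with command magnitude, and the controller chooses commands minimizing expected end-point error. The notation below is generic; the exact cost used in the talk may differ.

```latex
% Linear plant driven by commands with signal-dependent noise:
\dot{x} = A x + B\,(u + \epsilon), \qquad
\operatorname{Var}(\epsilon_i) = k\, u_i^{2}
% The minimum-variance command trajectory minimizes expected end-point error:
u^{*} = \arg\min_{u(\cdot)} \; \mathbb{E}\!\left[\, \bigl\| x(t_f) - x_{\mathrm{goal}} \bigr\|^{2} \right]
```

Because noise scales with command size, large jerky commands are penalized implicitly, which is why trajectories optimal under this cost tend to look smooth and human-like.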

 

Frank C. Park received his B.S. in EECS from MIT in 1985, and Ph.D. in applied mathematics from Harvard University in 1991. He joined the mechanical and aerospace engineering faculty at the University of California, Irvine in 1991, and since 1995 he has been professor of mechanical and aerospace engineering at Seoul National University, where he is currently serving as department chair. His research interests are in robot mechanics, planning and control, vision and image processing, machine learning, and related areas of applied mathematics. He has been an IEEE Robotics and Automation Society Distinguished Lecturer, and received best paper awards for his work on visual tracking and parallel robot design. He has served on the editorial boards of the Springer Handbook of Robotics, Springer Advanced Tracts in Robotics (STAR), Robotica, and the ASME Journal of Mechanisms and Robotics. He has held adjunct faculty positions at the NYU Courant Institute and the Interactive Computing Department at Georgia Tech, and is currently adjunct professor at the Robotics Institute at HKUST. He is a fellow of the IEEE, current editor-in-chief of the IEEE Transactions on Robotics, developer of the edX courses Robot Mechanics and Control I and II, and co-author (with Kevin Lynch) of Modern Robotics: Mechanics, Planning and Control (Cambridge University Press, 2017).

 


Cecilia Laschi

Robotics goes soft: challenges and achievements for new robotics scenarios

Soft robotics is a young yet promising approach to developing deformable robots that can adapt to the environment and exploit interaction to accomplish real-world tasks. Growing rapidly worldwide, soft robotics has already produced interesting achievements in technologies for actuation, sensing, control, and more. In addition to enabling more applications for robots, soft robotics technologies are enabling robot abilities that were not possible before, such as morphing, stiffening, growing, self-healing, and evolving. They open up new scenarios for robotics, leading toward more life-like robots that adapt effectively and efficiently to their environments and tasks.

 

Cecilia Laschi is Full Professor of Biorobotics at The BioRobotics Institute of the Scuola Superiore Sant'Anna in Pisa, Italy, where she serves as Rector's delegate to Research. She graduated in Computer Science at the University of Pisa in 1993 and received her Ph.D. in Robotics from the University of Genoa in 1998. In 2001-2002 she was a JSPS visiting researcher at Waseda University in Tokyo. Her research interests are in the field of biorobotics, and she is currently working on soft robotics, humanoid robotics, and neurodevelopmental engineering. She has been and currently is involved in many national and EU-funded projects; she was the coordinator of the ICT-FET OCTOPUS Integrating Project, which led to one of the first soft robots, and of the European Coordination Action on Soft Robotics, RoboSoft. She has authored/co-authored more than 200 papers, is Chief Editor of the Specialty Section on Soft Robotics of Frontiers in Robotics and AI, and serves on the editorial boards of Bioinspiration & Biomimetics, IEEE Robotics and Automation Letters, Frontiers in Bionics and Biomimetics, Advanced Robotics, and Social Robotics. She is a member of the IEEE, the Engineering in Medicine and Biology Society, and the Robotics & Automation Society, where she served as an elected AdCom member and currently is Co-Chair of the TC on Soft Robotics.


Tim Salcudean

Ultrasound and ultrasound-mediated image guidance for robot-assisted surgery

Medical robotic systems present a great opportunity for integrating imaging with surgical navigation. Indeed, the instruments are localized and tracked in real time with respect to the camera view, so once registered to the patient, imaging can be used to display anatomy and pathology with respect to the robot camera and the instruments.  

We present our approaches to integrating ultrasound and magnetic resonance imaging with the da Vinci medical robotic system for prostate surgery. We will summarize our calibration and registration techniques and our experience from a first patient study (N=27) in which this system was used.         

Integration of ultrasound with medical robots enables new intra-operative ultrasound-based tissue characterization. We present our work in two areas: quantitative elastography to measure tissue shear storage and loss moduli (“objective palpation”), and photoacoustic imaging to measure blood oxygenation.  
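For reference, the quantities that "objective palpation" estimates are the real and imaginary parts of the complex shear modulus; the definitions below are standard viscoelasticity, and the specifics of the talk's estimator are not given here.

```latex
% Complex shear modulus measured by quantitative elastography:
G^{*}(\omega) = G'(\omega) + i\,G''(\omega)
% G'  : storage modulus (elastic energy stored per cycle)
% G'' : loss modulus (viscous dissipation per cycle)
% In the purely elastic limit, the shear-wave speed in tissue of density rho is
c_s = \sqrt{G'/\rho}
```

Measuring the propagation of induced shear waves with ultrasound therefore gives access to these moduli, turning palpation into a quantitative imaging measurement.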

 

Tim Salcudean received his bachelor's and master's degrees from McGill University and his doctorate from U.C. Berkeley, all in Electrical Engineering. From 1986 to 1989, he was a Research Staff Member in the robotics group at the IBM T.J. Watson Research Center. He then joined the Department of Electrical and Computer Engineering at the University of British Columbia, Vancouver, Canada, where he holds a Canada Research Chair and the Laszlo Chair in Biomedical Engineering.

Professor Salcudean's research contributions have been in the areas of medical imaging, medical robotics, simulation and virtual environments, haptics, teleoperation, and optimization-based design. Several companies have licensed his technology, and his gland-contouring software for prostate cancer radiotherapy has become the standard of care in British Columbia, where it has been used for well over 2,000 patients. Prof. Salcudean has been a co-organizer of several research symposia and has served as a Technical and Senior Editor of the IEEE Transactions on Robotics and Automation. He is a Fellow of MICCAI, the IEEE, and the Canadian Academy of Engineering.


Joey Durham

Assembling Orders in Amazon’s Robotic Warehouses

Every day, Amazon is able to quickly pick, pack, and ship millions of items to customers from a network of fulfillment centers all over the globe. Each Amazon warehouse holds millions of items of inventory, most customer orders represent a unique combination of several items, and many orders need to be shipped within a couple hours of being placed to meet delivery promises. This would not be possible without leveraging cutting-edge advances in technology. This talk will describe the mobile robotic fleet that powers an Amazon warehouse and delivers inventory shelves to associates, including how we approach the interrelated problems of assigning tasks and planning paths for thousands of robots in dynamic warehouse environments. I will also present the results of the 2017 Amazon Robotics Challenge in manipulation and grasping, as well as a couple of big open problems in robotic warehousing.
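Amazon's actual allocation algorithms are not public, but the flavor of the task-assignment subproblem can be sketched with a toy greedy matcher that repeatedly pairs the closest free robot and open shelf task. All names and coordinates below are invented for illustration.

```python
# Toy greedy task assignment for a robot fleet: repeatedly match the
# closest (robot, shelf-task) pair until robots or tasks run out.
import math

robots = {"r1": (0, 0), "r2": (5, 5), "r3": (9, 0)}
tasks = {"shelf_A": (1, 1), "shelf_B": (8, 1), "shelf_C": (4, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

assignment = {}
free_robots, open_tasks = dict(robots), dict(tasks)
while free_robots and open_tasks:
    r, t = min(((r, t) for r in free_robots for t in open_tasks),
               key=lambda rt: dist(free_robots[rt[0]], open_tasks[rt[1]]))
    assignment[r] = t
    del free_robots[r], open_tasks[t]

print(assignment)  # e.g. {'r1': 'shelf_A', 'r3': 'shelf_B', 'r2': 'shelf_C'}
```

At warehouse scale the real problem couples this assignment with multi-robot path planning and changing deadlines, which is what makes it interesting.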

 

Joey Durham is Manager of Research and Advanced Development at Amazon Robotics. His team focuses on resource allocation algorithms, machine learning, and path planning for robotic warehouses. He is also the Contest Chair for the Amazon Robotics Challenge, an annual robotic manipulation contest held most recently as part of RoboCup 2017. Joey joined Kiva Systems after completing his Ph.D. on distributed coordination for teams of robots at the University of California, Santa Barbara, and has been with the company through its acquisition and growth into Amazon Robotics. Previously, he worked on path planning for autonomous vehicles for the Stanford University team that won the DARPA Grand Challenge.

 

 


Keynote by Platinum Sponsor

Aleksandr Kapitonov, Airalab

Robot Economics on the Ethereum Blockchain: Task Formulation and Proposed Solutions

 

It is inevitable that robots will be an essential part of every human's life. Machines are capable of performing tasks impossible for humans; they are more effective in many types of business activities, and they are already saving people time every day. The development of robotics has reached the point where the problem of communication has arisen between physically and logically separated autonomous agents (robots). Robots have the capacity to decide which actions are appropriate within a constantly changing environment.

Technologies now being used in the world of machines constantly expand the set of decisions available to a robot, increasing its level of autonomy. Soon we will control robots not at a low level but through capital flow; this is "robot economics".
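A toy sketch of the "control through capital flow" pattern: a customer escrows payment that is released to a robot agent once the task is confirmed complete. This is plain Python with invented names; it deliberately does not use any real Ethereum or smart-contract API.

```python
# Toy escrow between a customer and a robot agent: payment is locked up
# front and released only when the task is confirmed complete.
class Escrow:
    def __init__(self, customer, robot, price):
        self.customer, self.robot, self.price = customer, robot, price
        self.funded = self.done = False

    def fund(self):
        self.customer["balance"] -= self.price   # customer locks the payment
        self.funded = True

    def confirm_task_complete(self):
        assert self.funded, "cannot release an unfunded escrow"
        self.robot["balance"] += self.price      # release payment to the robot
        self.done = True

customer = {"balance": 100}
robot = {"balance": 0}
deal = Escrow(customer, robot, price=10)
deal.fund()                      # the robot now has an economic reason to act
deal.confirm_task_complete()     # customer (or an oracle) confirms delivery
print(customer, robot)           # {'balance': 90} {'balance': 10}
```

On a blockchain, the escrow logic would live in a smart contract so that neither party has to trust the other; here it is only a plain object for illustration.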

Biography: Aleksandr Kapitonov is a professor of the "Robot economics" academic society at Airalab. He is an assistant professor of Control Systems and Computer Science at ITMO University, where he received his Ph.D. in industrial automation in 2014. His team focuses on navigation, computer vision, control of mobile robots, and communication for multi-agent systems. He is also a coach of RoboCup teams. Aleksandr is a regional coordinator of the Erasmus+ IOT-OPEN.EU project, researching and developing IoT education practices, and an engineer in the "Nonlinear Adaptive Control Systems" international laboratory at ITMO University.