Keynote Talks

 

Tuesday, May 17 (morning)

Keynote TuK1T1, 11:00-11:20, Room A1
Robert Mahony (Australian National University, Australia)
Direct Homography Tracking from Video Data

Keynote TuK1T2, 11:00-11:20, Room A3
Darwin Caldwell (Istituto Italiano di Tecnologia – IIT, Italy)
Actuation and Compliance: Its influence on Legged Robotics

Keynote TuK1T3, 11:00-11:20, Room A2
Herman van der Kooij (University of Twente, The Netherlands)
Towards wearable exosuits that support balance and symbiotically interact with patients

Tuesday, May 17 (afternoon)

Keynote TuK2T1, 15:10-15:30, Room A1
Jun Morimoto (ATR Computational Neuroscience Labs, Japan)
Motor learning methods for humanoid control

Keynote TuK2T2, 15:10-15:30, Room A3
Dieter Fox (University of Washington, USA)
The 100-100 Tracking Challenge

Keynote TuK2T3, 15:10-15:30, Room A2
Yu Sun (University of Toronto, Canada)
Robotic Cell Manipulation: Surgery, Diagnostics, and Drug Screen

Wednesday, May 18 (morning)

Keynote WeK1T1, 9:20-9:40, Room A1
Karl Iagnemma (Massachusetts Institute of Technology – MIT, USA)
Self-driving cars: From research to road

Keynote WeK1T2, 9:20-9:40, Room A3
Louis Whitcomb (Johns Hopkins University, USA)
Nereid Under-Ice: Development of a Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice

Keynote WeK1T3, 9:20-9:40, Room A2
Domenico Prattichizzo (University of Siena, Italy)
When wearable haptics meets wearable robotics: The robotic sixth finger to compensate the hand function in subjects with upper limb impairments

Wednesday, May 18 (afternoon)

Keynote WeK2T1, 14:50-15:10, Room A1
André Seyfarth (Technical University of Darmstadt, Germany)
Bioinspired legged locomotion – from simple models to legged robots

Keynote WeK2T2, 14:50-15:10, Room A3
Christian Ott (German Aerospace Center – DLR, Germany)
Similarities in Manipulation and Locomotion

Keynote WeK2T3, 14:50-15:10, Room A2
Ken Goldberg (University of California at Berkeley, USA)
Deep Grasping: Can large datasets and reinforcement learning bridge the dexterity gap?

Thursday, May 19 (morning)

Keynote ThK1T1, 8:00-8:20, Room A1
Seth Hutchinson (University of Illinois at Urbana-Champaign, USA)
Progress Toward a Robotic Bat: Modeling, Design, Dynamics, and Control

Keynote ThK1T2, 8:00-8:20, Room A3
Emo Todorov (University of Washington, USA)
Physics-based optimization: A universal approach to robot control

Keynote ThK1T3, 8:00-8:20, Room A2
Anibal Ollero (University of Seville, Spain)
Aerial robotic manipulation. Where are we going?

Thursday, May 19 (morning, continued)

Keynote ThK2T1, 8:20-8:40, Room A1
Hajime Asama (University of Tokyo, Japan)
Societal Dissemination of Robot Technology for Disaster Response through the Experience of the Accident of Fukushima Daiichi NPS

Keynote ThK2T2, 8:20-8:40, Room A3
Sami Haddadin (Leibniz University of Hannover, Germany)
Human-Centered Robotics: Toward the professional robot for everyone

Keynote ThK2T3, 8:20-8:40, Room A2
Hong Qiao (Chinese Academy of Sciences, China)
Brain inspired robotic vision, motion and planning

Abstracts and Biographies

Keynote TuK1T1, Tuesday, May 17, 11:00-11:20, Room A1
Robert Mahony (Australian National University, Australia)
Direct Homography Tracking from Video Data

Chair: Jana Kosecka (George Mason University, USA)

Abstract: This talk presents a new algorithm for online estimation of a sequence of homographies applicable to image sequences obtained from robotic vehicles equipped with vision sensors. The approach exploits the underlying Special Linear group structure of the set of homographies, along with gyroscope measurements and direct point-feature correspondences between images, to develop a temporal filter for the homography estimate. Experiments demonstrate excellent performance even in the case of very fast camera motion (relative to frame rate), severe occlusion, and specular reflections.
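
As background (this formulation is standard in the homography-estimation literature and is not taken verbatim from the talk), a homography can be represented, after scale normalization, as an element of the Special Linear group mentioned above, and the filtering problem is to estimate the matrix relating corresponding image points over time:

\[
H \in SL(3) = \{\, H \in \mathbb{R}^{3\times 3} \mid \det H = 1 \,\}, \qquad p_2 \simeq H\, p_1 ,
\]

where \(p_1\) and \(p_2\) are homogeneous coordinates of the same planar point in two views and \(\simeq\) denotes equality up to scale. The temporal filter referred to in the abstract propagates an estimate of \(H\) on SL(3) using the gyroscope rates and corrects it with the point-feature correspondences; the exact observer equations are given in the authors' publications.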

Bio: Robert Mahony is a Professor in the Research School of Engineering at the Australian National University. He received his BSc in 1989 (applied mathematics and geology) and his PhD in 1995 (systems engineering), both from the Australian National University. He worked first as a marine seismic geophysicist and then as an industrial research scientist before completing a postdoctoral fellowship in France at the Université de Technologie de Compiègne and a Logan Fellowship at Monash University in Australia. He has held his post at ANU since 2001. His research interests are in nonlinear systems theory and optimization with applications in robotics, geometric optimization techniques and computer vision.

Keynote TuK1T2, Tuesday, May 17, 11:00-11:20, Room A3
Darwin Caldwell (Istituto Italiano di Tecnologia – IIT, Italy)
Actuation and Compliance: Its influence on Legged Robotics

Chair: Rainer Bischoff (KUKA Roboter GmbH, Germany)

Abstract: Walking, running, and jumping involve transitions in limb stiffness, from high rigidity during the stance phase to low rigidity when swinging or during potential contact with an external body or the ground. This ability to use and regulate stiffness/compliance is one of the cornerstones of human and animal locomotion, helping to provide stability, disturbance rejection, safety protection, and improved energy efficiency. Traditionally, however, its use has been uncommon in humanoid and quadruped robots, where the maxim “stiffer is better” dominates. By considering the designs within a “family” of robots (iCub, COMAN, WalkMan, HyQ, and HyQ2Max) developed at IIT over the past 10 years, this presentation will examine the development of torque-controlled, Series Elastic, Variable Impedance, Damped and Compliant actuation technologies. There will be a description of the hardware and software structures that provide the robots with controlled and inherent impact tolerance, and it will be shown that this enhances the ability to walk on non-flat surfaces and on/across unstable slopes, to reject simultaneous upper- and lower-body impacts, and to reject disturbance impacts during walking. The presentation will also show the potential of compliant actuation systems to reduce energy consumption through large energy storage capacity elements, efficient actuation drivers, and energy recycling techniques.
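
For reference, the textbook model behind the Series Elastic actuation mentioned above (a generic model, not the specific IIT designs) places a spring of stiffness \(k_s\) between the motor-side position \(\theta\) and the link position \(q\), so that the transmitted joint torque is

\[
\tau = k_s\,(\theta - q),
\]

turning torque control into a motor-side position-tracking problem while providing inherent compliance and impact tolerance at the joint, as well as an elastic element in which energy can be stored and recycled.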

Bio: Darwin G. Caldwell is Deputy Scientific Director of the Italian Institute of Technology (IIT) and Director of the Department of Advanced Robotics at IIT. He is an Honorary Professor at the Universities of Manchester, Sheffield, and Bangor, King’s College London, and Tianjin University, China. His research interests include humanoid and quadrupedal robotics (iCub, cCub, COMAN, WalkMan, HyQ, HyQ2Max, Centauro), innovative actuators, force augmentation exoskeletons, dexterous manipulators, haptics, and medical robotics. He is the author or co-author of over 400 academic papers and 19 patents, and has received awards from international journals and conferences (ICRA, IROS, Humanoids, CASE, ROBIO, WorldHaptics, ICAR). He is a Fellow of the Royal Academy of Engineering.

Keynote TuK1T3, Tuesday, May 17, 11:00-11:20, Room A2
Herman van der Kooij (University of Twente, The Netherlands)
Towards wearable exosuits that support balance and symbiotically interact with patients

Chair: Allison Okamura (Stanford University, USA)

Abstract: A little more than a decade ago, wearable exoskeletons were science fiction, but several companies have now introduced lower-extremity exoskeletons into the market and into the clinic, thereby changing the lives of many spinal-cord-injured and other patients. However, the functions those exoskeletons offer still fall far short of the functions patients lost due to trauma or disease. For example, existing exoskeletons do not support balance, so patients have to rely on crutches; they also have a limited ability to move quickly and easily and to adapt to different activities and environments. In this presentation I will give an overview of our recent attempts to make exoskeletons more stable, versatile and agile. The approach we take is to understand the neuromechanics of human balance and gait and to translate this knowledge into the design and control of exoskeletons and exosuits.

Bio: Prof. Dr. ir. Herman van der Kooij (1970) received his PhD with honors (cum laude) in 2000 and is Professor of Biomechatronics and Rehabilitation Technology at the Department of Biomechanical Engineering of the University of Twente (0.8 fte) and Delft University of Technology (0.2 fte), the Netherlands. His expertise and interests are in the fields of human motor control, adaptation, and learning; rehabilitation, diagnostic, and assistive robotics; virtual reality; rehabilitation medicine; and neuromechanical computational modeling. He has published over 150 peer-reviewed journal and conference papers. His team designed several therapeutic and assistive robots such as the LOPES and the Mindwalker. He was awarded the prestigious Dutch VIDI and VICI personal grants in 2001 and 2015, respectively. He is the coordinator of the FP7 project Symbitron, associate editor of IEEE TBME and IEEE Robotics and Automation Letters, and a member of the IEEE EMBS technical committee on Biorobotics, and has been a member of several scientific program committees in the fields of rehabilitation robotics, biorobotics, and assistive devices.

Keynote TuK2T1, Tuesday, May 17, 15:10-15:30, Room A1
Jun Morimoto (ATR Computational Neuroscience Labs, Japan)
Motor learning methods for humanoid control

Chair: Alin Albu-Schäffer (German Aerospace Center – DLR, Germany)

Abstract: We discuss how we can take advantage of the recent development of powerful computational resources to improve the policies used to control humanoid robots. First, we introduce a hierarchical motor learning framework for humanoid robot control. Specifically, we develop a computationally efficient Model Predictive Control (MPC) method for real-time control of humanoid robots. Although MPC is a highly useful approach for deriving control policies for nonlinear dynamical systems, its application to robots with many degrees of freedom remains challenging because MPC is computationally intensive. To cope with this issue, we developed an MPC method that implements a hierarchical optimization procedure in which a lower layer of the hierarchy uses a short time horizon and a small time-step size. We evaluated the proposed method on simulated robot models. Second, we introduce the concept of using previous experiences to form environmental models. Biological systems have the ability to efficiently reuse previous experiences to change their behavioral strategies. We adopt this concept so that the movement policy of a humanoid robot can be efficiently improved from a limited number of sampled experiences in the real environment while making full use of computational resources. We apply the proposed learning method to our humanoid robot and show that it can learn target movements for a given task in a real environment. (This study is supported by AMED-SRPBS and NEDO.)
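
The hierarchical MPC idea above can be illustrated with a deliberately simple sketch. The code below is my own toy example on a 1-D double integrator, not the authors' controller: an upper layer produces a coarse waypoint plan over a long horizon, and a lower layer runs a short-horizon, fine-time-step MPC (here solved by exhaustive search over a small discretized control set) that tracks the current waypoint. All horizons, weights, and function names are illustrative assumptions.

```python
# Toy two-layer receding-horizon controller on a 1-D double integrator.
# This is NOT the method from the keynote; it only sketches the idea in the
# abstract: an upper layer that plans over a long horizon with a coarse time
# step, and a lower MPC layer that re-optimizes over a short horizon with a
# fine time step.
import itertools
import numpy as np

DT_FINE, DT_COARSE = 0.05, 0.5        # lower / upper layer time steps
H_FINE, H_COARSE = 5, 10              # lower / upper layer horizons (steps)
GOAL = np.array([1.0, 0.0])           # desired position and velocity
U_SET = (-2.0, 0.0, 2.0)              # discretized accelerations (lower layer)

def step(x, u, dt):
    """Double-integrator dynamics: x = [position, velocity], u = acceleration."""
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def upper_layer_plan(x):
    """Coarse plan: constant-velocity position waypoints that reach GOAL
    after H_COARSE steps of length DT_COARSE."""
    v = (GOAL[0] - x[0]) / (H_COARSE * DT_COARSE)
    return [x[0] + v * DT_COARSE * (k + 1) for k in range(H_COARSE)]

def lower_layer_mpc(x, waypoint):
    """Short-horizon MPC: exhaustive search over the discretized control set."""
    best_u, best_cost = 0.0, np.inf
    for seq in itertools.product(U_SET, repeat=H_FINE):
        xi, cost = x.copy(), 0.0
        for u in seq:
            xi = step(xi, u, DT_FINE)
            cost += (xi[0] - waypoint) ** 2 + 0.01 * xi[1] ** 2 + 1e-4 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]   # apply only the first control
    return best_u

x = np.array([0.0, 0.0])
for _ in range(400):                           # 20 s of simulated time
    waypoint = upper_layer_plan(x)[0]          # coarse re-planning
    u = lower_layer_mpc(x, waypoint)           # fine, short-horizon refinement
    x = step(x, u, DT_FINE)
print("final state:", x)                       # should end close to GOAL
```

In a humanoid-scale problem the exhaustive search would of course be replaced by a gradient-based trajectory optimizer, but the division of labor between the coarse upper layer and the fine lower layer is the point being illustrated.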

Bio: Jun Morimoto received the Ph.D. degree in information science in 2001 from the Nara Institute of Science and Technology, Japan. From 2001 to 2002, he was a Postdoctoral Fellow at the Robotics Institute, Carnegie Mellon University. Since 2002, he has been at the Advanced Telecommunications Research (ATR) Institute International in Kyoto, where he was a Researcher in the Computational Brain Project, the International Cooperative Research Project of the Japan Science and Technology Agency from 2004 to 2009. He is currently the Head of the Department of the Brain Robot Interface, Computational Neuroscience Laboratories, Nara, Japan.

Keynote TuK2T2, Tuesday, May 17, 15:10-15:30, Room A3
Dieter Fox (University of Washington, USA)
The 100-100 Tracking Challenge

Chair: Seth Hutchinson (University of Illinois at Urbana-Champaign, USA)

Abstract: Robots must be able to accurately track objects and people in their environments in order to navigate through dynamic scenes, manipulate objects, or collaborate with humans. Recent advances in depth sensing, along with highly optimized algorithms, now enable real-time detection and tracking of articulated objects such as human bodies, hands, and robot manipulators. Building on these developments, the 100-100 Tracking Challenge aims at identifying and tracking 100% of the objects and activities in a scene, with 100% accuracy. Solving this challenge will provide robots with an unprecedented understanding of their environment and the people therein. Using examples from our research on modeling, detecting, and tracking articulated objects, I will discuss why I believe this challenge will soon be within reach. I will also discuss the lessons we have learned so far and the main technical challenges ahead.

Bio: Dieter Fox is a Professor in the Department of Computer Science & Engineering at the University of Washington, Seattle, where he heads the UW Robotics and State Estimation Lab. From 2009 to 2011, he was also Director of the Intel Research Labs Seattle. Dieter obtained his Ph.D. from the University of Bonn, Germany. Before joining the faculty of UW, he spent two years as a postdoctoral researcher at the CMU Robot Learning Lab. Fox’s research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to various problems in robotics and activity recognition. He has published over 150 technical papers and is co-author of the textbook “Probabilistic Robotics”. Dieter is an IEEE Fellow, a Fellow of the AAAI, and he has received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.

Keynote TuK2T3, Tuesday, May 17, 15:10-15:30, Room A2
Yu Sun (University of Toronto, Canada)
Robotic Cell Manipulation: Surgery, Diagnostics, and Drug Screen

Chair: Jeannette Bohg (Max Planck Institute for Intelligent Systems, Germany)

Abstract: Advances in medicine require robotic technologies for automated manipulation and characterization of cells and sub-cellular structures. Robotic cell surgery and automated characterization of cells enable new frontiers in medicine. Robotic deposition of foreign materials into cells is poised to revolutionize drug efficacy tests for drug repurposing and personalized medication. In this talk, example technical challenges in robotic cell manipulation will be introduced. I will summarize recent progress in clinical trials of our robotic cell surgery technology. I will then present a robotic system for automated mechanical characterization of voided urine cells to bolster clinical bladder cancer diagnostics. Drug screening for cardiovascular disease management, enabled by robotic cell manipulation, will also be discussed.

Bio: Yu Sun is a Professor in the Department of Mechanical and Industrial Engineering, with joint appointments in the Institute of Biomaterials and Biomedical Engineering and the Department of Electrical and Computer Engineering at the University of Toronto. He obtained his Ph.D. in mechanical engineering from the University of Minnesota in 2003 and did his postdoctoral research at ETH Zürich. Sun has served and serves on the editorial boards of several IEEE Transactions and two Nature-sponsored journals (Scientific Reports; Microsystems & Nanoengineering). Among the awards he has received are the IEEE RAS Early Career Award, the McLean Award, the First Prize in Technical Achievement of the ASRM (American Society for Reproductive Medicine), and an NSERC E.W.R. Steacie Memorial Fellowship. He was elected Fellow of ASME (American Society of Mechanical Engineers), IEEE (Institute of Electrical and Electronics Engineers), and CAE (Canadian Academy of Engineering) for his work on micro-nano devices and robotic systems.

Keynote WeK1T1, Wednesday, May 18, 9:20-9:40, Room A1
Karl Iagnemma (Massachusetts Institute of Technology – MIT, USA)
Self-driving cars: From research to road

Chair: Ville Kyrki (Aalto University, Finland)

Abstract: Self-driving vehicles have captured the attention of the public and attracted massive worldwide R&D attention, despite the fact that certain core technologies are (arguably) not fully mature. This rapid “research to road” transition has made it difficult to predict the evolution of commercial services that are enabled by self-driving vehicle technology. It has also created intense competition among large corporations, startups, and universities for talented researchers in relevant spaces such as perception, planning, and control. This talk will discuss the implications of the rapid transition from university research labs to corporate product development timelines, and present as a case study the formation of nuTonomy, an MIT spin-off focused on software development for fully autonomous passenger vehicles.

Bio: Karl Iagnemma is a principal research scientist in the Mechanical Engineering department at the Massachusetts Institute of Technology, where he directs the Robotic Mobility Group. He holds a B.S. from the University of Michigan, and an M.S. and Ph.D. from MIT, where he was a National Science Foundation Graduate Fellow. He has performed postdoctoral research at MIT, and has been a visiting researcher at the NASA Jet Propulsion Laboratory and the National Technical University of Athens (Greece). Karl is also co-founder and CEO of nuTonomy, a venture-backed startup company focused on developing software for fully autonomous passenger vehicles.

Keynote WeK1T2, Wednesday, May 18, 9:20-9:40, Room A3
Louis Whitcomb (Johns Hopkins University, USA)
Nereid Under-Ice: Development of a Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice

Chair: Nikos Tsagarakis (Istituto Italiano di Tecnologia – IIT, Italy)

Abstract: This talk reports recent advances in underwater robotic vehicle research to enable novel oceanographic operations in extreme ocean environments, with a focus on a recent novel vehicle developed by a team comprising the speaker and his collaborators at the Woods Hole Oceanographic Institution. I will present the development and the first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle. NUI is a novel remotely controlled underwater robotic vehicle capable of being teleoperated under ice under remote real-time human supervision. We report the results of NUI’s first under-ice deployments during a July 2014 expedition aboard the research vessel Polarstern at 83°N, 6°W in the Arctic Ocean, approximately 200 km NE of Greenland.

Bio: Louis L. Whitcomb is Professor and Chair of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. He completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was an R&D engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems, including industrial, medical, and underwater robots. Whitcomb is a co-principal investigator of the Nereus and Nereid Under-Ice projects. He is the former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, an NSF CAREER Award, and an ONR Young Investigator Award. He is a Fellow of the IEEE. He is also Adjunct Scientist in the Department of Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution.

Keynote WeK1T3, Wednesday, May 18, 9:20-9:40, Room A2
Domenico Prattichizzo (University of Siena, Italy)
When wearable haptics meets wearable robotics: The robotic sixth finger to compensate the hand function in subjects with upper limb impairments

Chair: David Hsu (National University of Singapore, Singapore)

Abstract: Wearable haptics is an emerging research trend that will enable novel forms of communication and cooperation between humans and robots. The literature on wearable haptics has mainly focused on vibrotactile stimulation, and only recently have wearable devices conveying richer stimuli, such as force vectors, been proposed. In this keynote, I will introduce design guidelines for wearable haptics and review our research in this field. When wearable haptics meets wearable robotics, the paradigm shift in human-robot cooperation is extraordinary, as it is for the robotic sixth finger, a wearable robot designed to rehabilitate the function of the paretic human hand. The robotic extra finger and the paretic hand act like the two parts of a gripper, working together to stabilize the grasp of objects and letting subjects with upper limb impairments use both hands in bimanual tasks. The wearable robotic extra finger works together with a wearable haptic device that plays the role of a sensorimotor interface, augmenting the level of embodiment of the device. This is a case of synergistic use of wearable haptics and wearable robotics to support people with impairments in everyday life.

Bio: Domenico Prattichizzo is Professor of Robotics at the University of Siena and Senior Scientist at the Istituto Italiano di Tecnologia in Genova, Italy. IEEE Fellow since 2016. PhD from the University of Pisa in 1994. Visiting Scientist at the MIT AI Lab. Guest Co-Editor of the Special Issue “Robotics and Neuroscience” of the Brain Research Bulletin (2008). Co-author of the chapter on “Grasping” in the Springer Handbook of Robotics (2008), awarded two PROSE Awards presented by the Association of American Publishers. Since 2014, Associate Editor of Frontiers of Biomedical Robotics. From 2007 to 2013, Associate Editor-in-Chief of the IEEE Transactions on Haptics. From 2003 to 2007, Associate Editor of the IEEE Transactions on Robotics and the IEEE Transactions on Control Systems Technology. Chair of the Italian Chapter of the IEEE RAS (2006-2010), awarded the IEEE 2009 Chapter of the Year Award. Co-editor of two books in STAR, Springer Tracts in Advanced Robotics (2003, 2005). Best Demonstration Award for the research on the “Robotic Sixth Finger” at the Haptics Symposium 2016. Author of more than 200 papers on haptics, grasping, wearable robotics, and robotic rehabilitation. Since March 2013, Coordinator of the IP collaborative project “WEARable HAPtics for Humans and Robots” (WEARHAP). TEDx talk on wearable haptics in 2014: https://tinyurl.com/prattichizzo-tedx.

Keynote WeK2T1, Wednesday, May 18, 14:50-15:10, Room A1
André Seyfarth (Technical University of Darmstadt, Germany)
Bioinspired legged locomotion – from simple models to legged robots

Chair: Antonio Bicchi (University of Pisa and IIT, Italy)

Abstract: In this talk I will present a research approach for investigating concepts of legged locomotion with the help of bio-inspired hardware systems such as legged robots and prostheses. I will explain why hardware models of biomechanical gait concepts are not just a nice feature but also provide a scientific tool to demonstrate and prove the value of the conceptual insights. Hardware models share properties of conceptual models (being simplified and human-made) and of the biological system (being realistic). Thus they connect biological locomotion with conceptual computer simulation models and theories of legged locomotion. With assistive devices this approach can be taken one step further, namely by applying the constructed concepts to the human body and by investigating the interactions between human movement and the hardware-implemented motion concepts.

Bio: André Seyfarth studied physics at the Friedrich-Schiller-Universität Jena, Germany, and at the Free University Berlin. He received his Ph.D. degree in 2000 in the Biomechanics Group in Jena. After postdoctoral studies in Boston (2001-2002) and Zurich (2002-2003), he founded the Lauflabor Locomotion Laboratory in Jena in 2003. Since 2011 he has been Professor of Sports Biomechanics at the Technische Universität Darmstadt. His research interests comprise the dynamics of locomotion at the conceptual, simulation, experimental, and robotic levels.

Keynote WeK2T2, Wednesday, May 18, 14:50-15:10, Room A3
Christian Ott (German Aerospace Center – DLR, Germany)
Similarities in Manipulation and Locomotion

Chair: Petter Ögren (KTH, Sweden)

Abstract: In this talk I will discuss several similarities and differences between research in manipulation and in legged locomotion. According to Newton’s laws of classical mechanics, every generated motion is related to the acting forces. Manipulating a grasped object is thus done by precisely controlling the contact forces at the end-effectors. In locomotion, the contact forces by which the robot moves itself forward are often represented by abstract quantities like the zero-moment point. Seen from a bird’s-eye view, both problems have a very similar structure and can be solved by the same mathematical concepts. The generalized contact forces are also the basis for controlling angular momentum, which is known to have an important influence on the balance of legged robots, in particular when underactuated motion phases are considered. While the momentum conservation law has been extensively utilized in research fields like space robotics, the additional possibility to actively control the momentum via the contact forces is a so-far underused opportunity for achieving more dynamic motions in humanoid robots. In this talk I will give a closer discussion of these aspects and show some related experiments with the torque-controlled humanoid robot TORO.
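
As background for the momentum argument above (these are the standard centroidal dynamics of any legged or floating-base robot, not a formula specific to TORO), the linear momentum \(P\) and the angular momentum \(L\) about the center of mass \(x_{\mathrm{com}}\) evolve only through gravity and the external contact wrenches:

\[
\dot{P} = m\,g + \sum_i f_i, \qquad
\dot{L} = \sum_i \Big( (p_i - x_{\mathrm{com}}) \times f_i + \tau_i \Big),
\]

where \(f_i\) and \(\tau_i\) are the force and torque transmitted at the \(i\)-th contact located at \(p_i\), and \(m\,g\) is the gravitational force. Regulating the contact wrenches therefore directly regulates the robot's momentum, which is the common thread between manipulation (forces applied to a grasped object) and locomotion (forces applied to the ground).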

Bio: Christian Ott received his doctoral degree in Automatic Control from the University of Saarland, Germany, in 2005. In his PhD he worked on interaction control of robot manipulators under consideration of joint elasticity. He was a visiting researcher at the University of Twente, and a project assistant professor at the University of Tokyo. Currently he is the Head of Department for “Analysis and Control of Advanced Robotic Systems” and is leading a Helmholtz Young Investigators Research Group on the control of legged humanoid robots in the DLR Institute of Robotics and Mechatronics. He received several scientific awards including the “Conference Best Paper Award” at HUMANOIDS 2011, the Industrial Robot Outstanding Paper Award 2007, and Best Video Awards at ICRA 2007 and HUMANOIDS 2014. His research interests are the application of nonlinear control methods to robotic systems, in particular for force and impedance control, and control of bipedal humanoids.

Keynote WeK2T3, Wednesday, May 18, 14:50-15:10, Room A2
Ken Goldberg (University of California at Berkeley, USA)
Deep Grasping: Can large datasets and reinforcement learning bridge the dexterity gap?

Chair: Pieter Abbeel (University of California at Berkeley, USA)

Abstract: Data-driven learning has made surprising advances in computer vision and speech recognition. Drawing on Deep Learning and Cloud Robotics, can data-driven learning address the vast gap between human and robot dexterity, specifically for the elusive problem of robust grasping? I’ll review several exciting approaches and speculate on future directions.

Bio: Ken Goldberg is an artist and UC Berkeley professor. He and his students investigate robotics, automation, art, and social media. He is Director of the CITRIS People and Robots Initiative (since 2015) and of UC Berkeley’s Automation Sciences Research Lab (since 1995). Ken earned dual degrees in Electrical Engineering and Economics from the University of Pennsylvania (1984) and MS and PhD degrees from Carnegie Mellon University (1990). He joined the UC Berkeley faculty in 1995, where he is Professor of Industrial Engineering and Operations Research, with secondary appointments in EE/CS, Art Practice, the School of Information, and at the UCSF Medical School. Ken has published over 200 peer-reviewed technical papers on algorithms for robotics, automation, and social information filtering; his inventions have been awarded eight US patents. He is Editor-in-Chief of the IEEE Transactions on Automation Science and Engineering (T-ASE), and Co-Founder of the Berkeley Center for New Media (BCNM), the African Robotics Network (AFRON), the Center for Automation and Learning for Medical Robotics (CAL-MR), the CITRIS Data and Democracy Initiative (DDI), and the Moxie Institute. Ken was awarded the Presidential Faculty Fellowship in 1995 by Bill Clinton, the NSF Faculty Fellowship in 1994, and the Joseph Engelberger Robotics Award in 2000, and was elected IEEE Fellow in 2005. His Erdos-Bacon number is 6.

Keynote ThK1T1, Thursday, May 19, 8:00-8:20, Room A1
Seth Hutchinson (University of Illinois at Urbana-Champaign, USA)
Progress Toward a Robotic Bat: Modeling, Design, Dynamics, and Control

Chair: Alessandro De Luca (Sapienza University of Rome, Italy)

Abstract: In this presentation, I will describe our progress toward a biologically inspired robot bat. After an overview of the mechanical design (including morphology, kinematics, actuators, and avionics), I will describe the underactuated flight dynamics, and present a first approach to nonlinear control design. Specifically, I will present a mathematical framework that evaluates the holonomically-constrained Lagrangian model of a flapping robot with specified active and passive degrees of freedom in order to locate physically feasible and biologically meaningful periodic solutions using optimization. These solutions define a set of virtual constraints on the morphing behavior of the robot. Stable aerial locomotion is achieved when an event-based control scheme modifies the virtual constraints, which are interpreted as the control inputs in the morphing soft robot.
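
For orientation (a generic template of the modeling ingredients named above, not the specific bat model), a holonomically constrained Lagrangian model with actuated and passive coordinates, together with a virtual constraint on the actuated shape variables, can be written as

\[
\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot q} - \frac{\partial \mathcal{L}}{\partial q} = B\,u + J(q)^{\mathsf T}\lambda, \qquad c(q) = 0, \qquad y = h_a(q) - h_d(\phi) = 0,
\]

where \(c(q)=0\) are the holonomic constraints with Jacobian \(J(q)=\partial c/\partial q\) and multipliers \(\lambda\), \(B\) maps the inputs \(u\) to the active degrees of freedom, and the virtual constraint \(y=0\) prescribes the desired morphing motion \(h_d\) as a function of a phase variable \(\phi\) (e.g., the wingbeat phase). The periodic solutions mentioned in the abstract are sought in this constrained model by optimization, and the event-based controller updates the virtual constraints between cycles to stabilize them.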

Bio: Seth Hutchinson received his Ph.D. from Purdue University in 1988. In 1990 he joined the faculty at the University of Illinois in Urbana-Champaign, where he is currently a Professor in the Department of Electrical and Computer Engineering, the Coordinated Science Laboratory, and the Beckman Institute for Advanced Science and Technology. He served as Associate Department Head of ECE from 2001 to 2007. He currently serves on the editorial boards of the International Journal of Robotics Research and the Journal of Intelligent Service Robotics, and chairs the steering committee of the IEEE Robotics and Automation Letters. He was Founding Editor-in-Chief of the IEEE Robotics and Automation Society’s Conference Editorial Board (2006-2008) and Editor-in-Chief of the IEEE Transactions on Robotics (2008-2013). He has published more than 200 papers on the topics of robotics and computer vision, and is coauthor of the books “Principles of Robot Motion: Theory, Algorithms, and Implementations,” published by MIT Press, and “Robot Modeling and Control,” published by Wiley. Hutchinson is a Fellow of the IEEE.

Keynote ThK1T2, Thursday, May 19, 8:00-8:20, Room A3
Emo Todorov (University of Washington, USA)
Physics-based optimization: A universal approach to robot control

Chair: Jean-Paul Laumond (LAAS-CNRS, France)

Abstract: Robot autonomy remains an elusive goal. Future robots must be capable of accepting high-level commands and figuring out the details needed to implement those commands within the constraints of their body and environment. Without such motor intelligence, robotics will remain limited to scripted performances that require constant human intervention and manual tuning. Our approach is to encode high-level commands as cost functions, which the robot then optimizes without any simplifications or model reductions, yielding all the details needed to act in the physical world. This optimization is inherently physics-based. It is made particularly challenging by the physics of contact, which is discontinuous in nature and yet is the foundation of the interactions between the robot and its environment. We have recently made substantial progress in this direction. We now have a physics simulator (MuJoCo) based on a new mathematical model of contact physics, making it suitable for use within an optimization loop. This has enabled neural networks to learn control policies and value functions for complex dynamical systems, as well as trajectory optimizers to construct sequences of robot states and controls that accomplish the task. Trajectory optimization can even be done in real time, generating novel behaviors on the fly. While much of the work has been done in simulation, we have already transferred some of the simulation-based controllers to real systems. The talk will illustrate rich motor behaviors on a range of simulated and real systems, including getting up from the floor, walking, running, kicking, swimming, flying, riding a unicycle, and a number of dexterous hand manipulation tasks.
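
To make the "commands as cost functions" idea concrete, here is a deliberately minimal sketch of direct-shooting trajectory optimization on a 1-D point mass. It is my own illustration, not the speaker's MuJoCo-based solver, and all names and weights are arbitrary: the high-level command "reach the target and stop" appears only as a cost over a simulated trajectory, and plain finite-difference gradient descent optimizes the open-loop control sequence.

```python
# Minimal "encode the command as a cost, then optimize a trajectory" sketch.
# Illustrative only: a 1-D point mass instead of a full physics simulator,
# and plain finite-difference gradient descent instead of a fast solver.
import numpy as np

DT, N = 0.05, 60                      # time step and horizon length
TARGET = np.array([1.0, 0.0])         # "command": final position 1, velocity 0

def rollout_cost(u_seq, x0=np.zeros(2)):
    """Simulate x = [position, velocity] under u_seq and return the total cost."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        x = np.array([x[0] + DT * x[1], x[1] + DT * u])
        cost += 1e-3 * u ** 2                        # control effort
    return cost + 100.0 * np.sum((x - TARGET) ** 2)  # terminal "command" cost

def optimize(u_seq, iters=500, lr=0.01, eps=1e-4):
    """Direct shooting: finite-difference gradient descent on the controls."""
    u = u_seq.copy()
    for _ in range(iters):
        base = rollout_cost(u)
        grad = np.zeros_like(u)
        for i in range(len(u)):                      # numerical gradient
            du = np.zeros_like(u)
            du[i] = eps
            grad[i] = (rollout_cost(u + du) - base) / eps
        u -= lr * grad
    return u

u_opt = optimize(np.zeros(N))
print("optimized cost:", rollout_cost(u_opt))        # should be small
```

Swapping the toy dynamics for a full contact-aware simulator and the numerical gradient for analytic derivatives or sampling is, roughly speaking, the step from this sketch toward the real-time trajectory optimization described in the talk.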

Bio: Emo Todorov obtained his PhD in Cognitive Neuroscience from MIT in 1998. Since then he has worked as a Postdoctoral Fellow in the Gatsby Computational Neuroscience Unit at UCL, a Research Scientist in Biomedical Engineering at USC, and an Assistant Professor in Cognitive Science at UCSD, and is now Associate Professor in Applied Mathematics and Computer Science & Engineering at UW. He is generally interested in intelligent control of complex systems, in both engineering and biology. His current focus is autonomous robot control through high-performance numerical optimization. He is also the founder of Roboti LLC and developer of the MuJoCo physics simulator.

Keynote ThK1T3, Thursday, May 19, 8:00-8:20, Room A2
Anibal Ollero (University of Seville, Spain)
Aerial robotic manipulation. Where are we going?

Chair: Jianwei Zhang (University of Hamburg, Germany)

Abstract: Starting with some motivating applications, such as inspection, maintenance, search and rescue, and space, this keynote will discuss the origin, current state and trends of aerial robotic manipulation and the closely related topics of aerial robots physically interacting with their environment, such as the transportation and deployment of instruments and equipment. The talk will summarize some results obtained in the projects led by the author, such as the European Commission FP7 project ARCAS and the H2020 project AEROARMS, their applications, and the objectives to be achieved in the coming years. In the presentation, I will give an overview of methods and technologies for aerial robotic manipulation control, perception and planning. Coordination and cooperation of multiple aerial robotic manipulators will also be included. The talk will also analyze the gaps and limitations of aerial robotic manipulation and complementary approaches to deal with applications such as inspection and maintenance.

Bio: Full professor and head of GRVC (75 members) at the University of Seville, and Scientific Advisor of the Center for Advanced Aerospace Technologies in Seville (Spain). He has been a full professor at the Universities of Santiago and Malaga (Spain), and a researcher at the Robotics Institute of Carnegie Mellon University (Pittsburgh, USA) and at LAAS-CNRS (Toulouse, France). He has authored more than 635 publications, including 9 books and 135 SCI journal papers, and has led about 140 projects, transferring results to many companies. He has participated in 22 European projects, being coordinator of 6, including the recently concluded FP7-EC integrated projects ARCAS and EC-SAFEMOBIL and the ongoing H2020 AEROARMS. He is the recipient of 15 awards, has supervised 32 PhD theses, and is currently co-chair of the IEEE Technical Committee on Aerial Robotics and Unmanned Aerial Vehicles, member of the Board of Directors and coordinator of the Aerial Robotics Topic Group of euRobotics, and president of the Spanish Society for Research and Development in Robotics.

Keynote ThK2T1, Thursday, May 19, 8:20-8:40, Room A1
Hajime Asama (University of Tokyo, Japan)
Societal Dissemination of Robot Technology for Disaster Response through the Experience of the Accident of Fukushima Daiichi NPS

Chair: Patric Jensfelt (KTH, Sweden)

Abstract: The Great East Japan Earthquake and Tsunami occurred on March 11, 2011, and, as a result, the accident at the Fukushima Daiichi Nuclear Power Plant occurred. Remote-controlled machine technology, including Robot Technology (RT), was essential for the response to the accident, accomplishing various tasks in the high-radiation environment. However, the robot technology that had been developed could not be introduced smoothly in the emergency situation. In this presentation, the robot and remotely controlled systems technology that has been utilized so far for accident response and decommissioning of the Fukushima Daiichi Nuclear Power Station is introduced, and the issues of how we should prepare for possible future disasters and accidents are discussed, including not only technological development but also maintenance of technology, training of operators, establishment of mockups and test fields, and political strategy. In particular, the plans to promote societal dissemination of RT for disaster response proposed by the Council on Competitiveness-Nippon (COCN), and various recent trends towards their realization promoted by the Japanese Government, are introduced.

Bio: Hajime Asama received his B.S., M.S., and Dr.Eng. in Engineering from the University of Tokyo in 1982, 1984 and 1989, respectively. He was a Research Scientist at RIKEN, Japan, from 1986 to 2002. He became a professor at RACE, the University of Tokyo, in 2002, and has been a professor in the School of Engineering, the University of Tokyo, since 2009. He received the JSME Robotics and Mechatronics Award in 2009 and the RSJ Distinguished Service Award in 2013. He was vice-president of the Robotics Society of Japan in 2011-2012 and an AdCom member of the IEEE Robotics and Automation Society in 2007-2009, has been president of the International Society for Intelligent Autonomous Systems since 2014, and is an associate editor of the Journal of Field Robotics, Robotics and Autonomous Systems, and Control Engineering Practice. He is a Fellow of JSME and RSJ. Currently, he is a member of the technical committee of the Nuclear Damage Compensation and Decommissioning Facilitation Corporation (NDF), of the technical committee of the International Research Institute for Nuclear Decommissioning (IRID), and of the technical committee on the mockup testing facility of the Japan Atomic Energy Agency (JAEA). His main research interests are distributed autonomous robotic systems, smart spaces, service engineering, embodied brain systems, and service robotics.

Keynote ThK2T2, Thursday, May 19, 8:20-8:40, Room A3
Sami Haddadin (Leibniz University of Hannover, Germany)
Human-Centered Robotics: Toward the professional robot for everyone

Chair: Maren Bennewitz (University of Bonn, Germany)

Abstract: Enabling robots for direct physical interaction and cooperation with humans and with potentially unknown environments has been one of the primary goals of robotics research for decades. I will outline how human-centered robot design, control and planning may let robots for humans become a commodity in our near-future society, and how it has already led to the first commercial robots capable of interaction. The primary objective of a robot’s action around humans is to ensure that “a robot may not injure a human being or (through inaction) allow a human being to come to harm”. For this, compliant and force/impedance-controlled ultra-lightweight systems capable of full collision handling serve as the “safe robot body”, enabling high-performance human assistance over a wide variety of application domains. In this context, I will outline the concepts behind the joint-torque-controlled robot FRANKA EMIKA, which is fully connected to the cloud and can be programmed by anyone via its APP framework DESK. Besides being extremely cost-efficient and showing high performance in both accuracy and force control, it is presumably the first robot that actually builds itself, making it perfectly suited for mass production.
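
For context, the textbook form of the control class referred to above (a generic gravity-compensated Cartesian impedance law, not FRANKA EMIKA's specific implementation) commands the joint torques

\[
\tau = J(q)^{\mathsf T} \big( K_x (x_d - x) - D_x \dot{x} \big) + g(q),
\]

where \(J(q)\) is the end-effector Jacobian, \(g(q)\) the gravity torques, and \(K_x, D_x\) the programmable stiffness and damping. The end-effector then behaves like a spring-damper system around the desired pose \(x_d\), reacting compliantly rather than rigidly to unexpected contact, which is the basis of the "safe robot body" mentioned in the abstract.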

Bio: Sami Haddadin is full Professor and Director of IRT at Leibniz Universität Hannover and CEO of KBee AG, the company that developed FRANKA EMIKA. He holds a Dipl.-Ing. degree in EE and an M.Sc. in CS from TUM, as well as an Honours degree in Technology Management from TUM and LMU. He obtained his PhD from RWTH Aachen in 2011. His main research interests are physical Human-Robot Interaction, nonlinear control, real-time motion, task and reflex planning, robot learning, optimal control, variable impedance actuation, brain-controlled assistive robots, and safety in robotics. He has served on the program/organization committees of several international robotics conferences and as Guest Editor of IJRR. Currently, he is an Associate Editor of the IEEE Transactions on Robotics; he was Local Arrangements Chair at IROS 2015 and an Associate Editor at ICRA 2015. He has published more than 110 peer-reviewed papers in international journals, books, and conferences and received the 2015 IEEE/RAS Early Career Award, the Early Career Spotlight at RSS 2015, 8 best paper/video awards, the euRobotics Technology Transfer Award 2011, and the 2012 George Giralt Award, and was honoured with the 2015 Alfried Krupp Award for young professors. He was strongly involved in the development and technology transfer of the DLR lightweight robot to KUKA and is the founder and former CEO of Kastanienbaum, a company that develops safety software solutions for collaborative robots.

Keynote ThK2T3, Thursday, May 19, 8:20-8:40, Room A2
Hong Qiao (Chinese Academy of Sciences, China)
Brain inspired robotic vision, motion and planning

Chair: Bruno Siciliano (University of Naples Federico II, Italy)

Abstract: The interdisciplinary research between neuroscience and information science has greatly promoted the development of both fields. These studies can help humans understand the essence of biological systems, provide computational platforms for biological data and experiments, and improve the intelligence and performance of algorithms in information science. Brain-inspired intelligence has become an important cutting edge of artificial intelligence. In recent years, many countries have launched brain projects and established brain-inspired research centers. The interdisciplinary research between robotics and neuroscience will play an important role in the development of robotics, especially in its “human-like” aspects (intelligence, learning ability, flexibility and compliance in manipulation) and its “interaction” aspects (deeply understanding human motion and emotion, cooperating with humans in a close and intimate manner). Based on biological mechanisms, structures and functions, we have carried out research activities in three directions in brain-inspired robotics and in their integration: (1) Vision: By introducing “memory and association”, “active cognition adjustment” and “spontaneous and dynamical cognition” into computational visual cognition models, the new models can achieve robust cognition, higher-level cognition and dynamic updating, which could extend the applicable scenarios of robotic cognition and provide a basis for personalized robot services. (2) Motion: By introducing the structure of the “central-peripheral” nervous system into the motion planning and movement control of robots, we simulate a human-like “multi-input and multi-output” movement execution structure and build a neural “encoding-decoding” model for the generation of movement control signals, which could improve the precision of movement and the learning ability of robots without sacrificing response speed. (3) Planning: By introducing human decision processes and visuomotor coordination mechanisms into robots, we combine the visual recognition network model with movement control and calibration models to achieve robust recognition and precise movement in a coordinated manner. The proposed models could be generalized and applied to other systems, such as mechanical and electrical systems in robotics, to achieve fast-response, high-precision movement with learning ability.

Bio: Hong Qiao is a professor with the Institute of Automation, Chinese Academy of Sciences (CAS). In 2004, Prof. Qiao gave up a tenured position and returned to China under a grant of the “100 Talents” Program of CAS. After her return, she founded the “Robotic Theory and Application” group (now with 50+ researchers). In 2007, she was supported by the National Science Fund for Distinguished Young Scholars. She currently holds several leading positions, including Deputy Director & Principal Investigator of Neuro-robotics of the Research Centre for Brain-inspired Intelligence at the CAS Institute of Automation, and Director of the Centre of Intelligent Robotics, University of Science and Technology Beijing. Prof. Qiao has won several awards for her research, including the Beijing Science and Technology Award for fundamental research in 2012 and the Second Prize of the National Natural Science Awards in 2014. In 2013, she was elected to the IEEE RAS AdCom. She has long led her group in the research area of robotic “Hand-Eye-Brain”, publishing more than 200 papers in top international journals and conferences. Prof. Qiao serves as Editor-in-Chief of the Assembly Automation journal, and as an Associate Editor of the IEEE Transactions on Cybernetics, IEEE/ASME Transactions on Mechatronics, IEEE Transactions on Automation Science and Engineering, and IEEE Transactions on Cognitive and Developmental Systems. Prof. Qiao has undertaken a series of national key projects, including the National Science Fund for Distinguished Young Scholars, a Key Project of the Natural Science Foundation of China, an 863 Project of the Ministry of Science and Technology of China, an ‘04 Project’ of the National Science and Technology Major Projects, and a project on intelligent equipment of the National Development and Reform Commission. With her group, she has conducted cooperation initiatives in the automotive, CNC, national defense and home service areas with industrial and IT companies, including XCMG Group, Shaanxi Qinchuan Machinery Development Co. Ltd, Huawei, Samsung, and Midea Group. Two industrialization bases for industrial robots have been established in Guangdong Province, with $10 million in support from local government.