Tuesday - May 30, 2017

Venue: Grand Ballroom (Level 5)

TIME SESSIONS
0845 – 0945 PLENARY 1
Modeling the possibilities: From the Chalkboard to the Race Track to the World Beyond
Chris Gerdes, Stanford University, USA
1345 – 1415 KEYNOTE 1
EndoMaster: A Surgical Robot’s Journey from the Research Lab to the Operating Theatre
Louis Phee, Nanyang Technological University, Singapore
1415 – 1445 KEYNOTE 2
Capturing Vivid 3D Models of the World from Video
Lourdes Agapito, University College London, UK

Wednesday - May 31, 2017

Venue: Grand Ballroom (Level 5)

TIME SESSIONS
0820 – 0920 PLENARY 2
Nobel Turing Challenge: Grand Challenge of AI, Robotics, and Systems Biology
Hiroaki Kitano, Sony Computer Science Laboratories, Inc., Japan
1230 – 1300 KEYNOTE 3
Industry 4.0 – Automation and Robotics
Peter Luh, University of Connecticut, USA
1300 – 1330 KEYNOTE 4
Research at the Intersection Between Robots and Play: Designing Robots for Children’s Healthcare
Ayanna Howard, Georgia Tech, USA

Thursday - June 1, 2017

Venue: Grand Ballroom (Level 5)

TIME SESSIONS
0820 – 0920 PLENARY 3
Framing the International Discussion on the Weaponization of Increasingly Autonomous Technologies
Kerstin Vignard, United Nations Institute for Disarmament Research
1450 – 1520 KEYNOTE 5
An Operational Platform of Cloud Robotics
Bill Huang, Cloud Minds, China
1520 – 1550 KEYNOTE 6
Model-based Optimization for Humanoid and Wearable Robots
Katja Mombaur, University of Heidelberg, Germany

Plenary Speakers

Chris Gerdes, Stanford University, USA

Modeling the possibilities: From the Chalkboard to the Race Track to the World Beyond

Simple mathematical models of physical systems give us tremendous insight into the nature of their underlying dynamics and the control challenges they present. Stripping away unnecessary details can illuminate fundamental dependencies and focus engineering efforts on the most critical problems. The challenge comes when models become so familiar that they are no longer taken as simplifications but mistaken for reality itself. Opportunities and possibilities that lie outside the bounds of those simple models are subsequently missed.

Ground vehicle dynamics represent an ideal illustration of these principles. While vehicles are complex multi-body systems with uncertain and nonlinear dynamic properties due to tire mechanics, many simplifications of these dynamics exist. By choosing the right level of abstraction for a model, anything from parallel parking to a race car driver’s choice of trajectory to drifting with smoking rear tires can be explained clearly and concisely. The choice of model is important, however, since what one model illuminates, another may obscure. This talk demonstrates through mathematics and video from experiments how simple models can be used to accurately control automated vehicles through even the most extreme maneuvers on the race track. Lap times comparable to expert drivers and drifting maneuvers beyond the precision of a human are possible with models consisting of only a few state variables. 
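
As a minimal illustration of how few state variables such a model can need, the sketch below rolls out a generic kinematic bicycle model. It is not the specific model used in the talk, and the speed, steering angle, and wheelbase values are arbitrary assumptions chosen only for the example.

```python
import math

def bicycle_step(x, y, heading, speed, steering, wheelbase, dt):
    """One Euler step of a kinematic bicycle model: only three state
    variables (x, y, heading) plus commanded speed and steering angle."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steering) * dt
    return x, y, heading

# Toy rollout: constant speed and a fixed steering angle trace out an
# approximately circular path, already enough to reason about cornering.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(200):                       # 200 steps of 0.05 s = 10 s
    x, y, heading = bicycle_step(x, y, heading,
                                 speed=10.0,       # m/s, assumed
                                 steering=0.05,    # rad, assumed
                                 wheelbase=2.5,    # m, assumed
                                 dt=0.05)
print(f"pose after 10 s: x={x:.1f} m, y={y:.1f} m, heading={heading:.2f} rad")
```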

Just as models can guide or limit us in our work as researchers and engineers, our models of what it means to be a researcher or academic can sometimes artificially limit our impact in the world. The talk concludes with some simple models of how the robotics community can provide necessary leadership and technical guidance as society wrestles with the changes arising from our technologies. 

Bio: Chris Gerdes is a Professor of Mechanical Engineering and, by courtesy, of Aeronautics and Astronautics at Stanford University. His laboratory studies how cars move, how humans drive cars and how to design future cars that work cooperatively with the driver or drive themselves. When not teaching on campus, he can often be found at the racetrack with students, instrumenting historic race cars or trying out their latest prototypes for the future. Vehicles in the lab include X1, an entirely student-built test vehicle; Shelley, an automated Audi TT-S that can lap a racetrack as quickly as an expert driver; and MARTY, an electrified DeLorean capable of controlled drifts. Chris and his team have been recognized with a number of awards including the Presidential Early Career Award for Scientists and Engineers, the Ralph Teetor award from SAE International and the Rudolf Kalman Award from the American Society of Mechanical Engineers.

From February 2016 to January 2017, Chris served as the first Chief Innovation Officer at the United States Department of Transportation. In this role, he worked with Secretary Anthony Foxx to foster a culture of innovation across the department and to find ways to support transportation innovation taking place both inside and outside of government. He was part of the team that developed the Federal Automated Vehicles Policy and represented the Department on the National Science and Technology Council Subcommittee on Machine Learning and Artificial Intelligence. He continues to serve U.S. DOT as Vice Chair of the Federal Advisory Committee on Automation in Transportation.

Chris is a co-founder of truck platooning company Peloton Technology and served as Peloton’s Principal Scientist before joining U.S. DOT. 


Hiroaki Kitano, Sony Computer Science Laboratories, Inc., Japan

Nobel Turing Challenge: Grand Challenge of AI, Robotics, and Systems Biology

The Nobel Turing Challenge is one of the ultimate challenges the scientific community can take on. It aims to (1) develop an AI system, including substantial robotics components, that can make major scientific discoveries, some of them worthy of a Nobel Prize (the “Scientific Discovery Challenge”), and (2) actually win the prize without the selection committee noticing that the laureate is an AI system rather than a human researcher (the “Cybernetic Personality Challenge”). The primary focus of the challenge will be the biomedical sciences, targeting the Prize in Physiology or Medicine (Kitano, H., AI Magazine, 37(1), 2016).

This grand challenge shall take the form of a globally distributed “Virtual Big Science” project (Kitano, H., et al., Nature Chemical Biology, 7, 323-326, 2011). Parts of the project will resemble RoboCup (Kitano, H., et al., AI Magazine, 18(1), 73-85, 1997), but it will have substantially different aspects reflecting the differences in domain and objectives.

In the mid-1990s, I advocated “systems biology” with the aim of promoting a systems-oriented view in biology and of introducing more systematic measurements and the proper application of engineering, mathematical, and information science principles into the life sciences (Kitano, H., Science, 295, 1662-1664, 2002; Kitano, H., Nature, 420, 206-210, 2002). This endeavor has been successful, and systems biology is now a standard approach in the biomedical and pharmaceutical sciences. Its progress, however, has revealed new limitations in the life sciences that stem from our cognitive limits in understanding complex, non-linear, high-dimensional dynamical systems, faced with overwhelming volumes of data and publications, each of which unveils only a fragment of the system.

With recent breakthroughs in AI, exponentially increasing data production capabilities, and massive computing power, disruptive innovations in the biomedical sciences are on the horizon. The time is ripe to embark on a new, aggressive challenge. The fundamental breakthrough will come at the stage where AI systems generate hypotheses and quickly verify them using their knowledge bases, simulation, and robotic experimental systems, which means AI systems can keep discovering new knowledge with minimal or zero human intervention. Even a mid-term achievement of the Scientific Discovery Challenge alone would be a game changer, triggering fundamental transformations of industry and, more broadly, of the shape of our civilization.

Bio: Hiroaki Kitano is President and CEO of Sony Computer Science Laboratories, Inc.; Corporate Executive at Sony Corporation; President of The Systems Biology Institute, Tokyo; Professor at the Okinawa Institute of Science and Technology Graduate University, Okinawa; Director of the Laboratory for Disease Systems Modeling, RIKEN Center for Integrative Medical Sciences, Kanagawa; and a member of the AI and Robotics Council of the World Economic Forum.

He received a B.A. in physics from the International Christian University, Tokyo, and a Ph.D. in computer science from Kyoto University. Since 1988, he has been a visiting researcher at the Center for Machine Translation at Carnegie Mellon University. His research career includes serving as Project Director of the Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corporation, followed by Project Director of the Kitano Symbiotic Systems Project, ERATO-SORST, Japan Science and Technology Agency, where a number of spin-offs were created, including ZMP Inc., iXs Research, RT Corporation, Flower Robotics Inc., and Xiborg Inc.

Kitano is the Founding President of The RoboCup Federation, a founder and president of the International Society for Systems Biology (ISSB), and Editor-in-Chief of Nature Partner Journal (npj) Systems Biology and Applications. He served as President of the International Joint Conferences on Artificial Intelligence (IJCAI) from 2011 to 2013. Kitano received The Computers and Thought Award from the International Joint Conferences on Artificial Intelligence in 1993, the Prix Ars Electronica 2000, and the Nature Award for Creative Mentoring in Science 2009, as well as being an invited artist for the Biennale di Venezia 2000 and the Museum of Modern Art (MoMA), New York, in 2001.


Kerstin Vignard, United Nations Institute for Disarmament Research

Framing the International Discussion on the Weaponization of Increasingly Autonomous Technologies

There are a multitude of positive military applications for increasingly autonomous technologies. However, their potential weaponization raises a host of legal, technical, operational and ethical questions. Since 2013, member states of the United Nations have been discussing the weaponization of increasingly autonomous technologies (Lethal Autonomous Robots, Lethal Autonomous Weapon Systems, or so-called “killer robots”) in both human rights and arms control fora. Four years in, there is still great division on definitions, how to ensure human control over these future weapon systems, and the appropriate policy responses.

These political discussions are held in the near absence of the technical community. As the rate of technological innovation far outpaces the policy discussion, how might engagement with the technical experts enable international policy-makers to better think, discuss and make informed decisions about increasing autonomy in weapon systems?

Bio: Kerstin Vignard, a dual US-French national, is an international security policy professional with over 20 years’ experience at the United Nations. As Deputy Director and Chief of Operations at the UN Institute for Disarmament Research, she advises the Director on strategic direction and oversees all activities of the Institute.

Since 2013, she has led UNIDIR’s work on the weaponization of increasingly autonomous technologies, which has focused on advancing the multilateral discussion on weaponized autonomy by refining the areas of concern, identifying relevant linkages, and learning from approaches from other domains, including the private sector, that may be of relevance. This work has provided insights and conceptual frameworks to enable international policy-makers to better think, discuss and make informed decisions about autonomy in weapon systems, for example within the framework of the Convention on Certain Conventional Weapons and the UN Human Rights Council. In addition, Vignard has served as consultant to four UN Groups of Governmental Experts on cyber warfare.


Keynote Speakers

Louis Phee, Nanyang Technological University, Singapore

EndoMaster: A Surgical Robot’s Journey from the Research Lab to the Operating Theatre

I will share my experiences in developing a novel flexible robotic system that removes gastric and colon tumours using natural orifices as points of access. I will discuss the technical and medical challenges faced in pushing the research to the point of successfully testing the robot on human subjects. Thereafter, a company (EndoMaster) was incorporated to commercialise the product. The challenges faced in translating the robotic technology for use in a clinical setting were entirely different from those of the research phase. By sharing my experiences, I hope to inspire more researchers to translate their research and inventions into useful products.

Bio: Dr Louis Phee is a Professor at Nanyang Technological University (NTU), Singapore. He is Chair of the School of Mechanical & Aerospace Engineering at NTU. He graduated from NTU with the B.Eng (Hons) and M.Eng degrees in 1996 and 1999 respectively. He obtained his PhD from Scuola Superiore Sant’Anna, Pisa, Italy in 2002 on a European Union scholarship. His research interests include Medical Robotics and Mechatronics in Medicine. He was the founding CEO of EndoMaster Pte Ltd, a company he co-founded to commercialize a surgical robotic system he developed.

Dr Phee was awarded the Young Scientist Award (2006), the Outstanding Young Persons of Singapore Award (2007), the Nanyang Outstanding Young Alumni Award (2011), Nanyang Innovation and Entrepreneurship Award (2013) and the President’s Technology Award (2012). In 2005, he was awarded the Best Paper Award at the prestigious IEEE International Conference on Robotics and Automation.


Lourdes Agapito, University College London, UK

Capturing Vivid 3D Models of the World from Video

As humans we take the ability to perceive the dynamic world around us in three dimensions for granted. From an early age we can grasp an object by adapting our fingers to its 3D shape; we can understand our mother’s feelings by interpreting her facial expressions; or we can effortlessly navigate through a busy street. All of these tasks require some internal 3D representation of shape, deformations and motion.

Building algorithms that can emulate this level of human 3D perception has proved to be a much harder task. In this session, I will show progress from early systems, which captured sparse 3D models with primitive representations of deformation, towards the most recent algorithms, which can capture every fold and detail of hands or faces in 3D from video sequences taken with a single consumer camera. There is now great short-term potential for commercial uptake of this technology, and I will show applications to robotics, augmented and virtual reality and minimally invasive surgery.

Bio: Professor Lourdes Agapito obtained her BSc, MSc and PhD (1996) degrees from the Universidad Complutense de Madrid (Spain). She held an EU Marie Curie Postdoctoral Fellowship at the University of Oxford's Robotics Research Group before being appointed as a Lecturer at Queen Mary, University of London in 2001. In 2008 she was awarded an ERC Starting Grant to carry out research on the estimation of 3D models of non-rigid surfaces from monocular video sequences. In July 2013 she joined the Department of Computer Science at University College London (UCL) as a Reader, where she leads a research team focused on 3D dynamic scene understanding from video; she became a full Professor of 3D Computer Vision in 2015.

Lourdes was Program Chair for CVPR 2016, the top annual conference in computer vision; in addition she was Programme Chair for 3DV'14 and Area Chair for CVPR'14, ECCV'14, ACCV'14 and Workshops Chair for ECCV'14. She has been keynote speaker for CVMP'15 and for several workshops associated with the main computer vision conferences (ICCV, CVPR and ECCV). Lourdes is Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and the International Journal of Computer Vision (IJCV), a member of the Executive Committee of the British Machine Vision Association and a member of the EPSRC Peer Review College.


Peter Luh, University of Connecticut, USA

Industry 4.0 – Automation and Robotics

Industry 4.0 is a confluence of trends and technologies pushed by the digital revolution and the “Internet of Things,” and driven by customer demand for high-quality, customized products at reasonable prices. With (1) the ubiquitous connection and interaction of machines, things and people, (2) the mixing of the physical and virtual worlds, and (3) the emergence of disruptive technologies such as 3D printing and robotics, the ways we design and manufacture products and provide services will be fundamentally changed. In this talk, Industry 4.0 will be introduced, including its design principles and key technologies. An important but difficult technology that is often missing from these discussions, mathematical optimization, will be highlighted, especially for problems involving discrete decision variables and for methods to coordinate distributed, autonomous “things.” Applications to “Clean Energy Smart Manufacturing” will be presented, and implications for automation and robotics will be discussed. Finally, Industry 4.0 in the US, Europe, China and Japan will also be introduced.
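
As a toy illustration of the kind of discrete decision problem mentioned above, the sketch below assigns jobs to machines by enumerating all choices; the job names, machines, and cost table are invented for the example, and a realistic Industry 4.0 instance would use a MILP solver or distributed coordination methods rather than brute force.

```python
from itertools import product

# Toy discrete decision problem of the kind highlighted in the talk:
# assign each of three jobs to one of two machines at minimum total cost.
# The cost table below is invented purely for illustration.
cost = {                      # cost[job][machine]
    "J1": {"M1": 4, "M2": 6},
    "J2": {"M1": 5, "M2": 3},
    "J3": {"M1": 2, "M2": 7},
}
jobs, machines = list(cost), ["M1", "M2"]

best_assignment, best_cost = None, float("inf")
# Enumerate every job-to-machine assignment (fine for a toy instance;
# realistic scheduling problems need MILP solvers or decomposition and
# coordination methods instead of brute force).
for choice in product(machines, repeat=len(jobs)):
    total = sum(cost[job][machine] for job, machine in zip(jobs, choice))
    if total < best_cost:
        best_assignment, best_cost = dict(zip(jobs, choice)), total

print(best_assignment, best_cost)  # {'J1': 'M1', 'J2': 'M2', 'J3': 'M1'} 9
```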

Bio: Peter B. Luh received his B.S. from National Taiwan University, M.S. from M.I.T., and Ph.D. from Harvard University. He has been with the University of Connecticut since 1980, and currently is the SNET Professor of Communications & Information Technologies. He is also a member of the Chair Professors Group, Center for Intelligent and Networked Systems (CFINS) in the Department of Automation, Tsinghua University, Beijing, China.

Professor Luh is a Fellow of IEEE and a member of the IEEE TAB Periodicals Committee. He was the VP of Publications of RAS (2008-2011), the founding Editor-in-Chief of the IEEE Transactions on Automation Science and Engineering (2003-2007), and the Editor-in-Chief of the IEEE Transactions on Robotics and Automation (1999-2003). His research interests include intelligent manufacturing systems, smart power systems, and smart and green buildings. He received the IEEE Robotics and Automation Society (RAS) 2013 Pioneer Award for his pioneering contributions to the development of near-optimal and efficient planning, scheduling, and coordination methodologies for manufacturing and power systems. He is the 2017 recipient of the RAS George Saridis Leadership Award for his exceptional vision and leadership in strengthening and advancing automation in the RAS.


Ayanna Howard, Georgia Tech, USA

Research at the Intersection Between Robots and Play: Designing Robots for Children’s Healthcare

There are an estimated 150 million children worldwide living with a disability. For many of these children, physical therapy is provided as an intervention mechanism to support the child’s academic, developmental, and functional goals from birth and beyond. With the recent advances in robotics, therapeutic intervention protocols using robots are now ideally positioned to make an impact in this domain. There are, though, numerous challenges that must still be addressed to enable successful interaction between patients, clinicians, and robots: developing interfaces for clinicians to communicate with their robot counterparts; developing learning methods to endow robots with the ability to playfully interact with the child; and ensuring that the robot can provide feedback to the parent and clinician in a trustworthy manner.

I will discuss the role of robotics and related technologies for pediatric therapy and highlight our methods that bring us closer to this goal. I will present our approaches and preclinical studies in which these technologies address real-life developmental goals for children with special needs.

Bio: Ayanna Howard, Ph.D. is Professor and Linda J. and Mark C. Smith Endowed Chair in Bioengineering in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. She also holds the position of Associate Chair for Faculty Development in ECE. She received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, and her Ph.D. in Electrical Engineering from the University of Southern California.

Her area of research is centered around the concept of humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems. This work, which addresses issues of autonomous control as well as aspects of interaction with humans and the surrounding environment, has resulted in over 200 peer-reviewed publications across a number of projects – from scientific rover navigation in glacier environments to assistive robots for the home. To date, her unique accomplishments have been highlighted through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator and recognized as one of the 23 most powerful women engineers in the world by Business Insider.

In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of therapy and educational products for children with differing needs. From 1993 to 2005, Dr. Howard was at NASA's Jet Propulsion Laboratory, California Institute of Technology. She has also served a term as the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines and a term as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech.


Bill Huang, Cloud Minds, China

An Operational Platform of Cloud Robotics

Cloud robotics refers to robots whose “brains” are in the cloud, with a global high-speed backbone network serving as their “nerves”. In the near future, all intelligent robots will be cloud robots. In this session, we will introduce the architecture of cloud robotics and show how to build an operational platform for millions of robots based on human assistant robot intelligence, mobile intranet cloud services, and robot control units.

Bio: Bill Huang is Founder and CEO of CloudMinds Inc. Before CloudMinds, he was GM of the China Mobile Research Institute, SVP and CTO of UTStarcom, and previously worked at AT&T Bell Labs. Bill creatively proposed the soft-switch concept of "the Network is the Switch", developed the world's first mobile soft-switch system, and developed the first carrier-class streaming media exchange and IPTV system. He proposed the strategic concept of constructing the three major infrastructures (network, applications and terminals) of the next-generation mobile internet for the carriers, and promoted TD-LTE to become an internationally mainstream B3G standard, raising China's technological influence in the communications industry.

In 2016, Bill was honoured with the IEEE CQR Award. He is also among the first group of China's “Talent 1000” plan, a professor at the University of Electronic Science and Technology of China, and a member of the GPS International Advisory Board of UC San Diego.

Bill received his master's degree in Electrical Engineering and Computer Science from the University of Illinois in 1984. He graduated from the Huazhong University of Science and Technology in 1982 with a bachelor's degree in Electrical Engineering.


Katja Mombaur, University of Heidelberg, Germany

Model-based Optimization for Humanoid and Wearable Robots

In this talk, I give an overview of our research on motion optimization for humanoid and wearable robots. On the one hand, we are interested in improving the walking capabilities of humanoid robots in different terrains. Optimization based on realistic mechanical models of the robots is a very helpful tool, since it can generate feasible, stable and optimal motions for such redundant, underactuated systems with many degrees of freedom and changing contacts. Optimization can also be applied to compliant robots.
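
To make the idea of model-based motion optimization concrete, here is a minimal single-shooting sketch for a 1-D point mass that must reach a target position at rest while minimizing control effort. The mass, horizon, and penalty weights are arbitrary assumptions, and this toy stands in for, rather than reproduces, the whole-body humanoid formulations discussed in the talk.

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-shooting motion optimization: choose a force profile u[0..N-1]
# for a 1-D point mass so that it ends at the target position at rest while
# minimizing control effort. Mass, horizon and weights are assumptions.
N, dt, mass, target = 40, 0.05, 1.0, 1.0

def rollout(u):
    """Integrate the point-mass dynamics under the force profile u."""
    pos, vel = 0.0, 0.0
    for f in u:
        vel += (f / mass) * dt
        pos += vel * dt
    return pos, vel

def cost(u):
    pos, vel = rollout(u)
    effort = dt * float(np.sum(np.square(u)))
    # Soft terminal constraints: penalize missing the target and any
    # remaining velocity at the end of the motion.
    return effort + 100.0 * (pos - target) ** 2 + 100.0 * vel ** 2

result = minimize(cost, np.zeros(N), method="L-BFGS-B")
print("final (position, velocity):", rollout(result.x))
```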

On the other hand, we are interested in improving the design and control of wearable robots for the lower limbs and the lower back, as well as other assistive devices. Using combined models of the human and the device together with optimal control, we can predict human movement under different conditions and determine the best possible support actions by selecting passive and active components.

One important approach for both research fields is the solution of inverse optimal control problems based on recorded motion data, which allows us to identify the objective functions underlying human movement. These optimality criteria can then be transferred to humanoid robots or be used for human movement prediction. For both fields, I will discuss the modeling levels to be used for describing humans and robots to address specific research questions. In addition, I will discuss possible combinations of optimal control methods with reinforcement learning and movement primitive approaches to reduce computation times and improve robot control.

Bio: Katja Mombaur is a full professor at the Institute of Computer Engineering (ZITI) of Heidelberg University and head of the Optimization in Robotics & Biomechanics (ORB) group as well as the Robotics Lab. She holds a diploma degree in Aerospace Engineering from the University of Stuttgart and a Ph.D. degree in Mathematics from Heidelberg University. She was a postdoctoral researcher in the Robotics Lab at Seoul National University, South Korea. She also spent two years as a visiting researcher in the Robotics department of LAAS-CNRS in Toulouse.

Katja is the coordinator of the newly founded Heidelberg Center for Motion Research. She is also PI in the European H2020 project SPEXOR and the Graduate School HGS MathComp, as well as in several national projects. Until recently, she coordinated the EU FP7 project KoroiBot and was PI in the EU projects MOBOT and ECHORD–GOP. She is founding chair of the IEEE RAS Technical Committee on Model-Based Optimization for Robotics.