Sabine Hauert; Yi Guo; Kim Hambuchen; Mady Delvaux-Stehres; Peter Stone; Alan Manning; Oussama Khatib; Ayanna Howard; Paul Oh; Kay Firth-Butterfield; Hiroshi Ishiguro.
Abstracts and Bios
Kim Hambuchen
Talk Title:
Biography: Dr. Kimberly Hambuchen is the Deputy Manager for the Human Robotic Systems project, funded through NASA’s Space Technology Mission Directorate. Since 2004, she has been a robotics engineer in the Software, Robotics and Simulation division at NASA Johnson Space Center. In 2004, she received her Ph.D. in Electrical Engineering from Vanderbilt University. She is a former NASA Graduate Student Research Program fellow and previously held a postdoctoral position at NASA through the National Research Council. Dr. Hambuchen is an expert in developing novel methods for remote supervision of space robots over intermediate time delays. She has proven the validity of these methods through multiple NASA analog field tests with various NASA robots, including the JSC Space Exploration Vehicles, Centaur platforms and Robonaut 2, the ATHLETE rovers from the Jet Propulsion Laboratory, and the Ames Research Center K-10s. She was the User Interface Lead for JSC’s entry into the DARPA Robotics Challenge, using her expertise in remote supervision of robots to guide operator interface development for the bipedal humanoid robot, Valkyrie. She currently manages development of telerobotic interfaces to operate robots in deep space for the Autonomous Systems and Operations Project.
Peter Stone
Talk Title: Artificial Intelligence and Life in 2030
Abstract: The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. As its core activity, the Standing Committee that oversees the One Hundred Year Study forms a Study Panel every five years to assess the current state of AI. The first Study Panel report, published in September 2016, focuses on eight domains the panelists considered to be most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. The report also includes recommendations concerning AI-related policy. This talk, by the Study Panel Chair, will briefly describe the process of creating the report and summarize its contents. The floor will then be opened for questions and discussion. Attendees are strongly encouraged to read at least the executive summary, overview, and callouts (in the margins) of the report before the session: https://ai100.stanford.edu/2016-report
Biography: Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents’ Outstanding Teaching Award, and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone’s research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs – Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003, he won an NSF CAREER award for his proposed long-term research on learning agents in dynamic, collaborative, and adversarial multiagent environments; in 2007 he received the prestigious IJCAI Computers and Thought Award, given biannually to the top AI researcher under the age of 35; and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.
Mady Delvaux-Stehres
Talk Title: The European Parliament’s Views on Robotics
Abstract: No fewer than seven committees in the European Parliament were involved in the current considerations on robotics and its potential consequences for different areas of society. The report resulting from those reflections issues recommendations to the European Commission and the Union’s Member States. It advises on the measures needed to establish a legal framework that allows for the deployment of robots in the EU. With these recommendations, the European Parliament not only intends to support European industry by insisting on standardisation, which reinforces the internal market, but above all wants to guarantee safety and security, both indispensable conditions for consumers’ trust. In its discussions, the Parliament tried to take into account not only the great expectations some have for robotics (mankind liberated from dangerous or unpleasant work), but also the fears of many citizens of losing their jobs, or even of a move towards dehumanisation. The report focuses on two subjects in particular: first, the question of responsibility for a robot’s actions, combined with the need to compensate for or repair any damage caused, and, second, ethical questions, asserting that the values enshrined in the Charter of Fundamental Rights have to be respected. Above all, the report aims to trigger a broad public debate, encouraging dialogue between experts, policy makers and the public. This debate is crucial and needs to be maintained permanently to ensure that robots are, and will remain, in the service of humans.
Biography: Mady Delvaux-Stehres has held several important positions at both national (Luxembourg) and European levels, including serving as Minister for Transport from 1994 to 1999 and as Minister for Education from 2004 to 2014. She recognises the importance of Information & Communication Technology (ICT) skills in increasing people’s employability prospects: “If Europe wants to stay innovative and competitive at international levels, the acquisition of ICT competences should be deeply embedded in the school system.” Indeed, Europe is expected to experience a shortfall of up to 825,000 ICT professionals by 2020, so it is essential to increase the digital competences of Europeans in all sectors of the workforce.
Alan Manning
Talk Title: The Future of Work in a Robot Economy
Abstract: There are currently many fears about the impact of robots on the labour market. These fears are the latest incarnation of past fears about the impact of new technology on workers. This lecture will explain the likely economic effects of robots, which fears are likely to be well-founded, and which are likely to be groundless.
Biography: Renowned UK labour economist Professor Alan Manning (LSE) will share his insights and policy experiences on income inequality and labour market polarisation, and the options for managing these trends. Alan Manning is Professor of Economics at the London School of Economics. He also serves as Director of the Labour Markets Programme at the Centre for Economic Performance. He is a former editor of the Journal of Labor Economics and Economica, a member of the editorial board of the Applied Economics Journal, and an associate editor of Labour Economics. He is a leading labour economist and an expert on wage inequality, low-wage and female labour markets, unemployment, minimum wages, and monopsony in labour markets.
Sabine Hauert
Panel Title: The Future of Robotics and Its Global Impacts
Biography: Professor Sabine Hauert is a Swarm Engineer at the University of Bristol and Bristol Robotics Laboratory, where she designs swarms of nanobots for biomedical studies. She is President and Co-founder of Robohub.org, a nonprofit dedicated to connecting the robotics community to the public. As an expert in science communication, she is often invited to discuss the future of robotics, including in the journal Nature, at the European Parliament, and as a member of the Royal Society’s Working Group on Machine Learning.
Yi Guo
Talk Title: Future Environmental Impact of Robotics
Abstract: The Deepwater Horizon oil spill caused long-term damage to the marine environment. The event exposed the challenges of understanding even the most basic aspects of such a disaster: it took months to estimate the extent of the underwater plume, and the accuracy of those estimates will likely be debated for years to come. These challenges will only grow as energy production moves into ever deeper water. The robotics and controls communities have responded to the need for technologies to monitor these operations and respond to future events, with successful applications in forest fire surveillance, environmental monitoring, and underwater sampling. In this talk, I will present advanced robotic techniques to monitor and track the propagation of ocean pollution plumes. Field-testing experiments with unmanned surface vessels at Makai Research Pier in Hawaii will be shown.
Biography: Dr. Yi Guo is an Associate Professor in the Department of Electrical and Computer Engineering at Stevens Institute of Technology. She obtained her Ph.D. degree in Electrical and Information Engineering from the University of Sydney, Australia, in 1999. She was a postdoctoral research fellow at Oak Ridge National Laboratory from 2000 to 2002, and a visiting Assistant Professor at the University of Central Florida from 2002 to 2005. Her research interests are mainly in autonomous mobile robotics and control of multi-scale complex systems. Dr. Guo directs the Robotics and Automation Laboratory at Stevens. She authored the book “Distributed Cooperative Control: Emerging Applications” (John Wiley & Sons, 2017) and edited a book on micro/nano-robotics for biomedical applications (Springer, 2013). She currently serves as an Editor of the IEEE Robotics and Automation Magazine, an Associate Editor of IEEE Access, and a Technical Editor of the IEEE/ASME Transactions on Mechatronics.
Kay Firth-Butterfield
Talk Title: Ethical Considerations for Prioritizing Human Well-Being with AI
Abstract: In December 2015, with 12 people, the IEEE formed an Industry Connections Group to consider the ethical design of autonomous systems and whether standards could be set to assist engineers in this task. By April 2016, the membership of the Initiative had grown to over 100 experts working in the design of autonomous systems and artificial intelligence. Our work culminated in the publication of our Report and Brochure on 13 December 2016 and in three putative standards that are currently going through the process of becoming IEEE Standards. However, we are not finished: by the end of 2016 we had expanded the number of topic committees from 8 to 12, and we expect to add more this year. We have at least one more putative standard under way, and we are in the next iteration of the process for a new Report to be published in December 2017. Our reports make recommendations on how the ethical design of AI/AS can be achieved, as well as standards recommendations. This talk will explain the process, recommendations and standards in detail and allow for discussion. Please read the brochure and, if possible, the report, which can be found here: https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html . The putative standards can be accessed here. You are invited to join our efforts by contacting our Executive Director: JohncHavens.us@ieee.org
Biography: Kay Firth-Butterfield is a Barrister and part-time Judge who has worked as a mediator, arbitrator, business owner and professor in the United Kingdom. In the United States, she is the Executive Director and Founding Advocate of AI-Austin, a non-profit dedicated to the development of laws and ethics around the development and use of AI, and to the socially beneficial use of AI in the community, specifically in healthcare and education. She is the former Chief Officer of the Lucid.ai Ethics Advisory Panel. Kay is a Senior Fellow and Distinguished Scholar at the Robert S. Strauss Center for International Security and Law, University of Texas at Austin, and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Additionally, she is a Partner in the Cognitive Finance Group and an adjunct Professor of Law. Kay is a humanitarian with a strong sense of social justice and has advanced degrees in Law and International Relations. She advises governments, think tanks, businesses, inter-governmental bodies and non-profits about artificial intelligence, law and policy. Kay co-founded the Consortium for Law and Policy of Artificial Intelligence and Robotics at the University of Texas and taught its first course, Artificial Intelligence and Emerging Technologies: Law and Policy. She thinks about and works on how AI and other technologies will impact society and business. Kay regularly speaks to international audiences about many aspects of these challenging changes. Twitter: @KayFButterfield