SPONSORS

IES IEEE UiA

TECHNICAL CO-SPONSORS

IEEJ-IAS MIL VDI/VDE-GMA

Discussion on sensing and actuating to support human activities from the viewpoint of intelligent space

Prof Hideki Hashimoto
Chuo University
When I started my research under the name Intelligent Space, the main idea was that we should use our technologies to support human activities while maintaining good health. This means the intelligent space should understand human behavior and provide appropriate physical support through robotics. Intelligent Space focuses on fusing IT and robotics into our daily life. I believe this research direction remains important, even though intelligent spaces in practice are still at an early stage. In this talk I will show current results on monitoring human health conditions, assisting human mobility, and elementary actuator technologies from the viewpoint of Intelligent Space, and discuss the key research issues.

Hideki Hashimoto (IEEE Fellow, SICE Fellow, RSJ Fellow) received the B.S., M.S. and Dr. of Engineering degrees from the Department of Electrical Engineering, University of Tokyo in 1981, 1984 and 1987, respectively. He joined the Institute of Industrial Science of the University of Tokyo as a lecturer in April 1987 and was an associate professor there from July 1990 until March 2011. He has been a professor at the Department of Electrical, Electronics and Communication Engineering, Chuo University, Tokyo, Japan since April 2011. He was a visiting scientist at LIDS (Laboratory for Information and Decision Systems) and LEES (Laboratory for Electromagnetic and Electronic Systems) at MIT from September 1989 to August 1990. He was an Invited Distinguished Professor at Seoul National University from 2009 to 2012 and a Visiting Professor at Budapest University of Technology and Economics from 2009 to 2011, a position he has held again since 2014. He was the founding general chair of the 1997 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), a program chair of IEEE/RSJ IROS in 1988 and 2000, and a general chair of the IEEE ITS Conference in 2002 and of IECON 2015. His research topics are Intelligent Space, Intelligent Systems, Mechatronics, Robotics and Control.
http://www.elect.chuo-u.ac.jp/hlab/en/

Parameter estimation and gradient descent-based observers: application to mechanical and electromechanical systems

Dr Romeo Ortega
Laboratoire de Signaux et Systèmes, CentraleSupelec
In the first part of the talk we present a new approach to state observation, called Parameter Estimation-Based Observers (PEBO), whose main idea is to translate the state estimation problem into one of estimating constant, unknown parameters. The class of systems to which it is applicable is identified via two assumptions, related to the transformability of the system into a suitable cascaded form and to our ability to estimate the unknown parameters. The first condition involves the solvability of a partial differential equation, while the second requires some persistency-of-excitation-like conditions. We also present PEBO in a unified framework together with the by-now classical Kazantzis-Kravaris-Luenberger and Immersion and Invariance observers. In the second part we show that, for systems for which a linear regression-like relation is available, it is possible to combine PEBO with a new estimation technique called Dynamic Regressor Extension and Mixing (DREM). This combined technique, called DREMBAO, is used to generate adaptive observers. PEBO and DREMBAO are shown to be applicable to position estimation for a class of electromechanical systems, including motors and MagLev systems, and to speed observation for a class of mechanical systems. The performance of these observers is compared with high-gain and sliding-mode observers. As expected, it is shown that, in the presence of noise, the performance of the two latter designs is significantly below par with respect to the other techniques.
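The core mechanism of DREM mentioned in the abstract can be sketched numerically: starting from a linear regression y(t) = φ(t)ᵀθ, delayed (or filtered) copies of the data are stacked into a square extended regression, which is then "mixed" by multiplying with the adjugate matrix so that each unknown parameter obeys its own scalar regression with common regressor Δ = det(Φ). The sketch below is a generic illustration under assumed signals, not code from the talk; the regressor φ(t), the delay operator and the gain γ are arbitrary choices for the example.

```python
import numpy as np

# Assumed true parameters of the linear regression y = phi^T theta
theta = np.array([2.0, -1.0])

dt, T = 0.01, 20.0
steps = int(T / dt)
delay = 50          # delay operator, in samples (an arbitrary choice)
gamma = 50.0        # adaptation gain

# Simulated regressor phi(t) and noise-free measurement y(t)
t = np.arange(steps) * dt
phi = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)   # shape (steps, 2)
y = phi @ theta

theta_hat = np.zeros(2)
for k in range(delay, steps):
    # Regressor extension: stack the current and a delayed copy of the data
    Phi = np.stack([phi[k], phi[k - delay]])           # 2x2 extended regressor
    Y = np.array([y[k], y[k - delay]])
    # Mixing: multiply by the adjugate of Phi, so adj(Phi) @ Phi = det(Phi) * I
    adj = np.array([[Phi[1, 1], -Phi[0, 1]],
                    [-Phi[1, 0], Phi[0, 0]]])
    Delta = np.linalg.det(Phi)                         # common scalar regressor
    Yi = adj @ Y                                       # Yi = Delta * theta
    # Decoupled gradient update: each parameter has its own scalar regression
    theta_hat += dt * gamma * Delta * (Yi - Delta * theta_hat)

print(theta_hat)    # approaches [2.0, -1.0]
```

The point of the mixing step is that each estimation error then decays monotonically whenever Δ is not square-integrable, a weaker requirement than classical persistency of excitation on the full regressor.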

Romeo Ortega was born in Mexico. He obtained his BSc in Electrical and Mechanical Engineering from the National University of Mexico, his Master of Engineering from the Polytechnical Institute of Leningrad, USSR, and his Docteur d'Etat from the Polytechnical Institute of Grenoble, France, in 1974, 1978 and 1984, respectively. He then joined the National University of Mexico, where he worked until 1989. He was a Visiting Professor at the University of Illinois in 1987-88 and at McGill University in 1991-92, and a Fellow of the Japan Society for the Promotion of Science in 1990-91. He has been a member of the French National Research Council (CNRS) since June 1992. Currently he is a Directeur de Recherche in the Laboratoire de Signaux et Systèmes (CentraleSupelec) in Gif-sur-Yvette, France. His research interests are in the fields of nonlinear and adaptive control, with special emphasis on applications. Dr Ortega has published three books and more than 300 papers in international journals, with an h-index of 79, and has supervised 35 PhD theses. He has been an IEEE Fellow since 1999 and an IFAC Fellow since 2016. He has served as chairman of several IFAC and IEEE committees and has participated in various editorial boards of international journals. Currently he is Editor-in-Chief of the International Journal of Adaptive Control and Signal Processing and a Senior Editor of the Asian Journal of Control.

Vision for robotics

Prof Annette Stahl
Norwegian University of Science and Technology
Reproducing the visual sensing capabilities found in nature would provide a very powerful and highly desired tool for robots. Ideally, this enables a robot to perceive and interpret its surroundings so that it can use this information to execute different tasks in a real-world environment. As robots operate in various environments (indoors, in space, in the air, underwater) equipped with different sets of visual sensing devices (standard cameras, time-of-flight cameras, structured-light cameras, hyperspectral imagers), the generic "interpretation of the world around a robot" becomes very challenging. In this presentation I wish to introduce you to certain aspects of the world of "robotic vision", where we try to teach machines to understand, plan and act in an intelligent way. In particular, we will be concerned with how robots might build concepts about objects, understand relations between objects, and understand the 3D structure of the surrounding world. The analysis of motion in the world of a robot is also an integral part of this understanding and important for many robotic control tasks.

Annette Stahl is Head of the Robotic Vision Group at the Department of Engineering Cybernetics at the Norwegian University of Science and Technology (NTNU), Norway. She is also an Affiliated Scientist of the Center of Excellence for Autonomous Marine Operations and Systems (NTNU AMOS). She received her PhD degree in applied mathematics from the University of Heidelberg, Germany, with a main focus on computer vision applications, in particular variational methods for motion estimation using physical prior knowledge. She spent two years as a postdoc at the School of Computing, Dublin City University (DCU), Ireland, and three years at the Department of Mathematical Sciences, NTNU, Norway, where she worked on isogeometric-analysis-based methods for graphics and visualization. She then worked as a researcher in the High Performance Computing Group at NTNU and at SINTEF Ocean, Norway, where she was concerned with computer-vision-based aquaculture applications. In 2016, she was awarded an Onsager Fellowship from NTNU. She is currently working in the field of robotic vision, targeting underwater, sea-surface, land, aerial and space robotic applications, as well as indoor and industrial ones.