Cognition, perception and postural control for humanoids
Workshop at 2014 IEEE-RAS International Conference on Humanoid Robots

Dr. Manuel Armada, Director Centre for Automation and Robotics – CAR (CSIC-UPM), Spain

Dr. Santiago Martinez, Assistant Professor, University Carlos III of Madrid, Spain
Ye Zhao, Graduate Research Assistant, University of Texas at Austin, USA 

SUMMARY AND OBJECTIVES

Understanding how people mentally represent their experience, and then use these representations to operate effectively, defines, in simple terms, the concept of human cognition. Perceiving, imagining, thinking, remembering, forming concepts, and solving problems thus define the domain of cognitive exploration.

The goal of perception research, as part of the cognition process, is to understand how stimuli from the world interact with a robot's sensory systems and how the computing system represents the world. Research in perception spans a wide range of problems, extending from the sensing devices, through the processing of sensory information, to the methods by which an accurate description of the sensed world is applied to humanoids.

Many studies have been carried out to understand how humans operate and to apply their results in robotics. This knowledge of cognitive concepts is applied to enhance manipulation and locomotion in environments where social interaction plays a major role. In the case of humanoid robots, increasingly complex electromechanical and software systems have been developed to mimic human behaviour.

This workshop focuses on presenting achievements in three interrelated fields: cognition, perception and postural control. Topics to be addressed in the workshop include, but are not limited to:

        - Bio-inspired cognitive systems

        - Learning applied to postural control

        - Perception systems: vision, tactile, etc.

        - Hardware developments for improving postural control

CONFIRMED SPEAKERS

Lorenzo Jamone (VisLab)

Intelligent whole-body reaching based on learned internal models

I will present some of the results I have obtained during the last five years in providing humanoid robots with the ability to learn sensorimotor internal models i) autonomously and ii) incrementally during goal-directed exploration of the environment. The approach I have been following focuses on some distinctive aspects: life-long continuous learning (accounting for both gradual and abrupt modifications in the system); goal-directed exploration of the environment (i.e. learning a general model by trying to accomplish specific tasks); a developmental framework (the acquisition of one motor skill may allow the robot to gather data to learn a new motor skill); and bio-inspired (human-inspired) learning and control strategies.

I will sketch a developmental path in which a robot starts from basic visual perception to finally achieve goal-directed visually-guided locomotion and intelligent whole-body reaching capabilities, including the ability to reach with tools.
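
As a minimal illustration of the kind of incremental, life-long model learning described above (a generic sketch, not the speaker's actual system), the following Python fragment learns a linear sensorimotor forward model online with recursive least squares, refining the model from the prediction error after every exploratory action:

    # A minimal sketch of incremental, online learning of a sensorimotor
    # forward model: predict the sensory outcome of a motor command and
    # refine the model from the prediction error after every trial.
    import numpy as np

    class OnlineForwardModel:
        """Linear forward model y = W x, updated by recursive least squares."""
        def __init__(self, n_motor, n_sensory, forgetting=0.99):
            self.W = np.zeros((n_sensory, n_motor))
            self.P = np.eye(n_motor) * 1e3   # inverse input covariance
            self.lam = forgetting             # <1 lets the model track abrupt changes

        def predict(self, x):
            return self.W @ x

        def update(self, x, y):
            """One incremental RLS step from an observed (command, outcome) pair."""
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)      # gain vector
            err = y - self.W @ x              # prediction error
            self.W += np.outer(err, k)
            self.P = (self.P - np.outer(k, Px)) / self.lam
            return err

    # Goal-directed exploration: try commands, learn from every attempt.
    rng = np.random.default_rng(0)
    true_W = rng.normal(size=(2, 3))          # unknown plant (hand = true_W @ joints)
    model = OnlineForwardModel(n_motor=3, n_sensory=2)
    for trial in range(200):
        q = rng.uniform(-1, 1, size=3)        # exploratory motor command
        hand = true_W @ q                      # observed sensory outcome
        model.update(q, hand)
    print("prediction error:", np.linalg.norm(model.predict(q) - hand))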

 

Javier Felip (UJI)

Contact Driven Robotic Manipulation (Perception)

Robotic manipulation in unstructured environments is far from being as dexterous, robust, fast and reliable as human manipulation. Looking at how humans perform manipulation tasks is a useful way to extract ideas on how to implement such skills in a humanoid robot.

Human manipulation has been studied in neuroscience for some time. A manipulation task typically involves a series of "manipulation primitives" (e.g. grasp, lift, release), each implemented by a specific controller. The different "manipulation primitives" are bound by mechanical events that are sub-goals of the task: the "contact events". These events involve the making or breaking of contact, either between the fingertips and the grasped object, or between the object and another object or surface.

In this work we present our current advances in integrating vision, tactile and force sensing with prediction, context and control for the detection of contact events. Moreover, the detected contact events are used to monitor the task by comparing their timing and type with the predicted ones. Detected mismatches trigger corrective reflexes that deal with unexpected situations and, ideally, bring task execution back to the predicted plan.
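
To make the idea concrete, here is a hedged Python sketch (all names hypothetical, not the authors' implementation) of a task monitor that matches detected contact events against the predicted ones and triggers a corrective reflex on a mismatch in type or timing:

    # A manipulation task as a plan of predicted contact events; detected
    # events are matched against predictions, mismatches trigger reflexes.
    from dataclasses import dataclass

    @dataclass
    class ContactEvent:
        kind: str      # e.g. "fingers-object", "object-surface"
        making: bool   # True = contact made, False = contact broken
        t: float       # time stamp (s)

    def monitor(predicted, detected, tol=0.3):
        """Yield (expected, observed) mismatches in event type or timing."""
        for exp, obs in zip(predicted, detected):
            if obs is None or (exp.kind, exp.making) != (obs.kind, obs.making) \
                    or abs(exp.t - obs.t) > tol:
                yield exp, obs

    def corrective_reflex(expected, observed):
        # Placeholder policy: re-run the primitive whose sub-goal failed.
        print(f"mismatch: expected {expected}, got {observed} -> re-running primitive")

    plan = [ContactEvent("fingers-object", True, 1.0),   # grasp
            ContactEvent("object-surface", False, 1.5),  # lift
            ContactEvent("object-surface", True, 3.0)]   # place
    observed = [ContactEvent("fingers-object", True, 1.1),
                None,                                    # lift contact never detected
                ContactEvent("object-surface", True, 3.0)]
    for exp, obs in monitor(plan, observed):
        corrective_reflex(exp, obs)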

Zhibin Li (IIT)

On The Control Of Push Recovery For Humanoids (Postural Control)

Although the Linear Inverted Pendulum Model (LIPM) has the drawback of an unnatural, bent-knee walking style that is inefficient for robots, it is, surprisingly, still widely used for bipedal gait generation and push recovery, mainly because of its analytic solutions. The LIPM is perhaps also more compatible with the existing ZMP control scheme. However, it can be seen that a humanoid controlled by this approach takes multiple unnecessary steps to stop, even after receiving a small disturbance.

In contrast, the inverted pendulum model (IPM) better represents a compass-like gait, which is more human-like and energy efficient. In this talk, it will be shown that the nonlinearity introduced in the IPM by the impact at the change of support leg can be resolved analytically, so the impact can be used to augment the push recovery. Like the LIPM, the IPM can predict the timing and position of foot placement, but the result is more natural, effective and feasible.
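
For reference, the classical LIPM push-recovery computation the talk argues against is compact: with constant CoM height z0 the dynamics are xdd = (g/z0)(x - p), and the "capture point" x + xd*sqrt(z0/g) is the foot placement that brings the pendulum to rest. A minimal Python sketch (textbook formula, not the speaker's IPM controller):

    # Capture-point foot placement under the LIPM after a push.
    import math

    def capture_point(x, xd, z0, g=9.81):
        """Foot placement that brings the LIPM to rest: x_cp = x + xd / omega."""
        omega = math.sqrt(g / z0)        # natural frequency of the pendulum
        return x + xd / omega

    # Example: CoM at 0 m moving at 0.5 m/s after a push, CoM height 0.8 m.
    print(f"step to x = {capture_point(0.0, 0.5, 0.8):.3f} m to stop in one step")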

Vittorio Lippi (University of Freiburg)

Modeling Human Postural Control during Support Surface Translations

Our research focuses on human postural control mechanisms from a neurological point of view. We perform humanoid robot experiments in order to test our hypotheses about this control in a real-world setup, under the same conditions as used in the human experiments.

Bio-inspired sensorimotor control systems may be appealing to roboticists who try to solve the problems of multi-DOF humanoids and human-robot interaction. Human sensorimotor behavior far outperforms what is currently possible in humanoid robots. This likely owes not so much to better sensors in humans, but rather to the fact that humans are still better at using cognitive mechanisms to exploit the information provided by their sensors.

Current work deals with biped balancing during support-surface translation perturbations. I will present preliminary results on human sway responses and their modification by prediction. The human data are compared to simulation data obtained with a sensory feedback model.
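
As a flavour of such sensory feedback models (a toy sketch under our own assumptions, not the model presented in the talk), the following Python fragment simulates a single-link inverted pendulum balanced by delayed PD feedback while the support surface accelerates briefly:

    # Toy sway-response simulation: inverted pendulum about the ankle,
    # stabilized by proprioceptive PD feedback with a neural delay, and
    # perturbed by a short support-surface acceleration pulse.
    import numpy as np

    dt, T = 0.001, 5.0                 # time step and duration (s)
    m, L, g = 70.0, 1.0, 9.81          # body mass (kg), CoM height (m)
    I = m * L**2                       # point-mass inertia about the ankle
    Kp, Kd, delay = 1200.0, 300.0, 0.1 # feedback gains and neural delay (s)

    n = int(T / dt)
    theta = np.zeros(n)                # body-in-space sway angle (rad)
    omega = 0.0
    buf = int(delay / dt)              # delayed sensory channel (in samples)
    for k in range(1, n):
        a_surf = 1.0 if 1.0 <= k * dt < 1.2 else 0.0   # surface acceleration pulse
        th_delayed = theta[max(k - buf, 0)]
        om_delayed = (theta[max(k - buf, 0)] - theta[max(k - buf - 1, 0)]) / dt
        torque = -Kp * th_delayed - Kd * om_delayed     # ankle corrective torque
        # gravity topples the body; surface acceleration acts as an inertial load
        alpha = (m * g * L * theta[k - 1] - m * L * a_surf + torque) / I
        omega += alpha * dt
        theta[k] = theta[k - 1] + omega * dt
    print(f"peak sway: {np.degrees(np.abs(theta).max()):.2f} deg")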

 

Adolfo Rodriguez Tsouroukdissian (PAL Robotics)

Whole body control using Robust & Online hierarchical quadratic optimization 

Recently, several formulations have been proposed to add inequality constraints to multi-objective prioritized optimization problems. How to solve such problems with only equality constraints is a well-known topic in robotics.

Inequality constraints behave as equalities when they reach their bounds, so ideally we should not have to worry about them before they are reached. How we take them into account, and what we do when they are reached, drastically affects the computational effort of solving the problem.

We present and derive an efficient way to deal with these problems using an off-the-shelf Quadratic Programming solver and choosing an appropriate solving strategy. We finally apply the proposed method to perform whole-body control in real time on a humanoid robot with 44 DoF, using a hierarchy of objectives.
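
A hedged sketch of the lexicographic idea (using cvxpy as a stand-in off-the-shelf QP solver; the talk's actual formulation and solver may differ): solve the highest-priority task first, then optimize the next task while freezing the optimal residual of the first, with the inequality constraints shared by both levels:

    # Two-level hierarchical QP with inequality constraints.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(1)
    n = 6                                  # decision variables (e.g. joint accelerations)
    A1, b1 = rng.normal(size=(3, n)), rng.normal(size=3)       # priority 1 task
    A2, b2 = rng.normal(size=(4, n)), rng.normal(size=4)       # priority 2 task
    C, d = np.vstack([np.eye(n), -np.eye(n)]), np.ones(2 * n)  # bounds |x_i| <= 1

    x = cp.Variable(n)
    ineq = [C @ x <= d]

    # Level 1: solve the highest-priority task alone.
    p1 = cp.Problem(cp.Minimize(cp.sum_squares(A1 @ x - b1)), ineq)
    p1.solve()
    r1 = A1 @ x.value - b1                 # optimal residual of level 1

    # Level 2: optimize the next task without degrading level 1.
    p2 = cp.Problem(cp.Minimize(cp.sum_squares(A2 @ x - b2)),
                    ineq + [A1 @ x - b1 == r1])
    p2.solve()
    print("x* =", np.round(x.value, 3))

Freezing the level-1 residual with an equality is one common choice; relaxing it to a bound on the residual norm is a well-known alternative.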

 

Further speakers and presentations will be announced soon.

CALL FOR PAPERS

We invite authors working on topics related to the workshop to submit 2-4 page abstracts in Humanoids 2014 format for poster presentation.
Submissions should be sent to scasa@ing.uc3m.es before October 15th, 2014.
Notification of acceptance: November 7th, 2014.

SCHEDULE

Time      Description
------    -----------
5 min.    Introduction
60 min.   Cognition: Invited talks
60 min.   Perception: Invited talks
15 min.   Short poster presentations
20 min.   Coffee Break and interactive session
60 min.   Postural Control: Invited talks
30 min.   Panel with Invited Speakers and Conclusions

(*) Tentative schedule

ACKNOWLEDGEMENTS

This workshop is organised by the RoboCity2030 Programme (funded by Comunidad de Madrid, Spain, and co-funded by Structural Funds of the EU; www.robocity2030.org; ref: S2013/MIT-2748).