
Intention Recognition in HRI

============================================
Call for papers
Workshop on Intention Recognition in HRI
March 7, 2016

in conjunction with
The 11th ACM/IEEE International Conference
on Human-Robot Interaction (HRI 2016)

Christchurch, New Zealand
============================================

—[Dates]—
Submission deadline: January 20, 2016 (23:59 PST)
Notification of acceptance: January 28, 2016
Workshop date: March 7, 2016

(HRI2016 early registration deadline: January 31, 2016)

—[Website]—

https://www.intentions.xyz

—[Scope]—

The workshop will be centred around three main activities:
(i) keynote presentations to highlight the overall state of the art;
(ii) paper presentations that deal with specific aspects of the work
carried out by workshop participants; and
(iii) a round-table discussion that will allow all participants to
contribute their thoughts on the open and most pressing research challenges.

Suitable topics for the workshop address intention recognition; for
instance:

* mechanisms of intention recognition in natural interaction
* machine recognition of human intentions
* human recognition/attribution of robot intentions
* implications for the evaluation of HRI.

We particularly encourage papers that consider mutual intention
recognition (i.e. both human recognition of robot intentions and robot
recognition of human intentions in given application contexts), but will
also consider papers that deal with uni-directional intention
recognition. Papers can be pure position
papers, or can substantiate their message with empirical work.

Papers will be peer-reviewed and we emphasise that papers must make an
interesting, relevant, and novel contribution (whether theoretical or
empirical) to the state of the art.

—[Keynotes]—

Keynote 1:
Prof Tony Belpaeme, University of Plymouth, UK

Keynote 2:
TBC

—[Format]—

We expect papers to be 4–8 pages using the IEEE conference templates
(available at
https://www.ieee.org/conferences_events/conferences/publishing/templates.html).

—[Submission Instructions]—

Please e-mail your paper to serge.thill at his.se and tom.ziemke at his.se by
the submission deadline.

—[Publication]—

Preliminary proceedings will be published in the “Skövde University
Studies in Informatics” series (ISSN 1653-2325). Authors have the right
to opt out of these proceedings. Depending on the quality of the
submissions, a special issue will be organised in a suitable journal.

—[Background and motivation]—

Research in the cognitive sciences, not least social neuroscience, has
in the last 10-20 years made substantial progress in elucidating the
mechanisms underlying the recognition of actions and intentions in
natural human-human social interactions and in developing computational
models of these mechanisms. However, there is much less research on the
mechanisms underlying the human interpretation of the behaviour of
artefacts, such as robots or automated vehicles, and the attribution of
intentions to such systems.

Given the state of the art in psychology and neuroscience, there are
also at least two very different intuitions that one might have:

On the one hand, it has been well known for decades from psychological
experiments that people tend to interpret even simple moving shapes in
terms of more or less human-like actions and intentions. So the first
intuition could be that this should also apply to robots and other
autonomous systems.

On the other hand, much (social) neuroscience research in the last 10-20
years, not least the discovery of the so-called mirror (neuron) system,
also points to the importance of embodiment and morphological
differences. This might lead to the intuition that humans can understand
the behaviour of very human-like robots more or less easily, but not
necessarily the behaviour of, for example, autonomous lawnmowers or
automated vehicles.

To what degree, and how precisely, each of these mechanisms might be
involved when interacting with artificial agents remains unknown. It
may, for instance, depend at least in part on the human perception of
the agent: previous research has shown that humans adapt their behaviour
according to their beliefs about the cognitive abilities of another (even
artificial) agent, and we have previously suggested that such agents need
to be understood in terms of how socially interactive they are and how
tool-like their purpose is. Conversely, the same insights and intuitions
are also relevant for robot recognition of human intentions, which is
arguably a prerequisite for pro-social behaviour and necessary to engage
in, for instance, instrumental helping or mutual collaboration.
Developing robots that can interact naturally and effectively with
people therefore requires systems that can perceive and comprehend
intentions in other agents.

For research on human interaction with artificial agents such as robots
in general, and mutual action/intention recognition in particular, it is
therefore important to be clear about the theoretical framework(s) and
inherent assumptions underlying technological implementations. This has
further ramifications for the evaluation of the quality of the
interaction between humans and robots (as opposed to the functioning of
the robot itself). Overall, this remains very much an active research
area in which further development is necessary, and the purpose of this
workshop is to advance the state of the art in that respect.

—[Organisers]—
Serge Thill, University of Skövde, Sweden
Alberto Montebelli, University of Skövde, Sweden
Tom Ziemke, Linköping University & University of Skövde, Sweden