Accepted Full Papers

Session #1: Creating Expressive Robots

Session Chair: Adriana Tapus

Tue, Mar 7

Expressing Emotions through Color, Sound, and Vibration with an Appearance-Constrained Social Robot

Sichao Song, Seiji Yamada

Many researchers are now dedicating their efforts to studying interactive modalities such as facial expressions, natural language, and gestures, making communication between robots and people more natural. However, many robots currently in use are appearance-constrained and unable to perform facial expressions and gestures. In addition, although humanoid-oriented techniques are promising, they are time-consuming and costly, which leads to many technical difficulties in most research studies. To increase interactive efficiency and decrease costs, we alternatively focus on three interaction modalities and their combinations, namely color, sound, and vibration. We conduct a structured study to evaluate the effects of the three modalities on a human’s emotional perception towards our simple-shaped robot “Maru”. We found that these modalities could offer a basis for intuitive emotional interaction between human beings and robots, which can be particularly suitable for appearance-constrained social robots. The contribution of this work is not so much the explicit parameter settings but rather a deeper understanding of how to express emotions through the simple modalities of color, sound, and vibration, along with a set of recommended expressions that HRI researchers and practitioners can readily employ.


Making Sound Intentional: A Study of Servo Sound Perception

Dylan Moore, Hamish Tennent, Nik Martelaro, Wendy Ju

How do sounds shape interaction with robots? The present study explores aural impressions associated with servo motors commonly used to prototype robotic motion. This exploratory analysis constructs a framework to objectively and subjectively characterize sound using acoustic analyses and novice evaluators on Amazon Mechanical Turk. Participants evaluated unfamiliar sounds through pairwise comparison, resulting in subjective ratings of servo motor sounds. In this study, subjective measures of sound correlated well internally, but correlated weakly with objective measures. Moreover, qualitative commentary offered by participants suggests both anthropomorphic associations with sounds as well as negative impressions of the sounds overall. We conclude with a roadmap for exploration into the field of consequential sonic interaction design.
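
The abstract derives subjective ratings from pairwise comparisons; one standard way to do this (not necessarily the authors’ exact method) is a Bradley-Terry model, sketched below on hypothetical win counts.

```python
# Illustrative Bradley-Terry fit: turns "sound i was preferred over
# sound j" counts into a per-sound preference score. The data and the
# choice of model are assumptions for illustration only.
import numpy as np

def bradley_terry(wins, n_iter=200):
    """wins[i, j] = number of times item i was preferred over item j."""
    n = wins.shape[0]
    p = np.ones(n)  # latent preference strength of each item
    for _ in range(n_iter):
        for i in range(n):
            total_wins = wins[i].sum()
            pairwise = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                           for j in range(n) if j != i)
            p[i] = total_wins / pairwise
        p /= p.sum()  # fix the overall scale
    return p

# Three hypothetical servo sounds; wins[0, 1] = times sound 0 beat sound 1.
wins = np.array([[0, 8, 6],
                 [2, 0, 5],
                 [4, 5, 0]])
print(bradley_terry(wins))  # relative preference scores
```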


Expressive Robot Motion Timing

Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, Anca Dragan

Our goal is to enable robots to time their motion in a way that is purposefully expressive of their internal states, making them more transparent to people. We start by investigating what types of states motion timing is capable of expressing, focusing on robot manipulation and keeping the path constant while systematically varying the timing. We find that users naturally pick up on certain properties of the robot (like confidence), of the motion (like naturalness), or of the task (like the weight of the object that the robot is carrying). We then conduct a hypothesis-driven experiment to tease out the directions and magnitudes of these effects, and use our findings to develop candidate mathematical models for how users make these inferences from the timing. We find a strong correlation between the models and real user data, suggesting that robots can leverage these models to autonomously optimize the timing of their motion to be expressive.


Using Facially Expressive Robots to Calibrate Clinical Pain Perception

Maryam Moosaei, Sumit Das, Dan Popa, Laurel Riek
Nominated for Best Paper Award

In this paper, we introduce a novel application of social robotics in healthcare: high-fidelity, facially expressive, robotic patient simulators (RPSs), and explore their usage within a clinical experimental context. Current commercially-available RPSs, the most commonly used humanoid robots worldwide, are substantially limited in their usability and fidelity due to the fact that they lack one of the most important clinical interaction and diagnostic tools: an expressive face. Using autonomous facial synthesis techniques, we synthesized pain on both a humanoid robot and a comparable virtual avatar. We conducted an experiment with 51 clinicians and 51 laypersons (n = 102) to explore differences in pain perception across the two groups, and also to explore the effects of embodiment (robot or avatar) on pain perception. Our results suggest that clinicians have lower overall accuracy in detecting synthesized pain in comparison to lay participants. We also found that all participants are overall less accurate at detecting pain from a humanoid robot than from a comparable virtual avatar, lending support to other recent findings in the HRI community. This research ultimately reveals new insights into the use of RPSs as a training tool for calibrating clinicians’ pain detection skills.


Towards Robot Autonomy in Group Conversations: Understanding the Effects of Body Orientation and Gaze

Marynel Vázquez, Elizabeth Carter, Braden McDorman, Jodi Forlizzi, Aaron Steinfeld, Scott Hudson

We conducted an experiment to examine the effects of varying orientation and gaze behaviors on interactions between a mobile robot and groups of people. For this experiment, we designed a novel protocol to induce changes in the robot’s conversational group and study different social contexts. In addition, we implemented a perception system to track participants and control the robot’s orientation and gaze with little human intervention. The results showed that the gaze behaviors under consideration affected the participants’ perception of the robot’s motion, and this motion affected the perception of gaze as well. This mutual dependency implied that gaze and body motion must be designed and controlled jointly, rather than independently of each other. We also found that the two orientation behaviors we studied led to similar feelings of inclusion and sense of belonging to the robot’s group. These outcomes suggested that both can be used as primitives for more complex orientation behaviors.


Transgazer: Improving Impression by Switching Direct and Averted Gaze Using Optical Illusion

Yuki Kinoshita, Masanori Yokoyama, Shigeo Yoshida, Takayoshi Mochizuki, Tomohiro Yamada, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose

Both direct gaze and averted gaze have important effects in one-to-many communication. The purpose of this study is to determine the gaze cone of each eye type and the precision of conveyed gaze direction, and to improve listeners’ impressions of robots. We propose robotic eyes that control the gaze cone, the area within which receivers feel as if they are being looked at. The robotic eyes can send averted and direct gaze to multiple people simultaneously by changing the shape of the eyes, using an optical illusion. We developed a system based on this concept, “Transgazer”, which can use convex or hollow eyes. We measured the broadness of the gaze cone and the correctness of conveyed gaze direction for each eye type. The results showed that hollow eyes have a broader gaze cone, whereas convex eyes convey gaze direction more correctly. In the main experiment, Transgazer gave a lecture to two participants simultaneously using convex and hollow eyes, and we evaluated impression improvement, one of the effects of direct and averted gaze. The results showed that Transgazer could improve the impressions of multiple listeners simultaneously without precisely tracking listeners’ directions. We believe that this concept will improve one-to-many communication and make a significant contribution to human-robot communication in the future.

Session #2: Human-Robot Dialog

Session Chair: Kerstin Fischer

Tue, Mar 7

Persistent Lexical Entrainment in HRI

Jürgen Brandstetter, Eduardo Sandoval, Clay Beckner, Christoph Bartneck

In this study, we set out to ask three questions. First, does lexical entrainment with a robot interlocutor persist after an interaction? Second, how does the influence of social robots on humans compare with the influence of humans on each other? Finally, what role is played by personality traits in lexical entrainment to robots, and how does this compare with the role of personality in entrainment to other humans? Our experiment shows that first, robots can indeed prompt lexical entrainment that persists after an interaction is over. This finding is interesting since it demonstrates that speakers can be linguistically influenced by a robot, in a way that is not merely motivated by a desire to be understood. Second, we find similarities between lexical entrainment to the robot peer and lexical entrainment to a human peer, although the effects are stronger when the peer is human. Third, we find that whether the peer is a robot or a human, similar personality traits contribute to lexical entrainment. In both peer conditions, participants who score higher on “Openness to experience” are more likely to adopt less common terminology.


Conversational Bootstrapping and Other Tricks of a Concierge Robot

Shang Guo, Jonathan Lenchner, Jonathan Connell, Mishal Dholakia, Hidemasa Muta

We describe the effective use of online learning to enhance the conversational capabilities of a greeter robot that we have been developing over the last two years. The robot was designed to interact naturally with visitors and uses a speech recognition system in conjunction with a natural language classifier. The online learning component monitors interactions and collects explicit and implicit user feedback from a conversation and feeds it back to the classifier in the form of new class instances and adjusted threshold values for triggering the classes. In addition, it enables a trusted master to teach it new question-answer pairs via question-answer paraphrasing, and solicits help with maintaining question-answer-class relationships when needed, obviating the need for explicit programming. The system has been completely implemented and demonstrated using the SoftBank Robotics humanoid robots Pepper and NAO, and the telepresence robot known as Double from Double Robotics.


Child Speech Recognition in Human-Robot Interaction: Evaluations and Recommendations

James Kennedy, Severin Lemaignan, Caroline Montassier, Pauline Lavalade, Bahar Irfan, Fotios Papadopoulos, Emmanuel Senft, Tony Belpaeme

An increasing number of human-robot interaction (HRI) studies are now taking place in applied settings with children. These interactions often hinge on verbal interaction to effectively achieve their goals. Great advances have been made in adult speech recognition and it is often assumed that these advances will carry over to the HRI domain and to interactions with children. In this paper, we evaluate a number of automatic speech recognition (ASR) engines under a variety of conditions, inspired by real-world social HRI conditions. Using the data collected we demonstrate that there is still much work to be done in ASR for child speech, with interactions relying solely on this modality still out of reach. However, we also make recommendations for child-robot interaction design in order to maximise the capability that does currently exist.
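
For a concrete sense of what “evaluating ASR engines” involves: word error rate (WER), the standard comparison metric, is word-level edit distance against a reference transcript. The abstract does not name its exact metrics, so treat this as a generic sketch.

```python
# Word error rate (WER): substitutions + insertions + deletions,
# normalized by reference length, via dynamic programming.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i ref words, first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the robot says hello", "a robot say hello"))  # 0.5
```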


Creating Prosodic Synchrony for a Robot Co-player in a Speech-controlled Game for Children

Najmeh Sadoughi, André Pereira, Rishub Jain, Iolanda Leite, Jill Lehman
Nominated for Best Paper Award

Synchrony is an essential aspect of human-human interactions. In previous work, we have seen how synchrony manifests in low-level acoustic phenomena like fundamental frequency, loudness, and the duration of keywords during the play of child-child pairs in a fast-paced, cooperative, language-based game. The correlation between the increase in such low-level synchrony and increase in enjoyment of the game suggests that a similar dynamic between child and robot co-players might also improve the child’s experience. We report an approach to creating on-line acoustic synchrony by using a dynamic Bayesian network learned from prior recordings of child-child play to select from a predefined space of robot speech in response to real-time measurement of the child’s prosodic features. Data were collected from 40 new children, each playing the game with both a synchronizing and non-synchronizing version of the robot. Results show a significant order effect: although all children grew to enjoy the game more over time, those that began with the synchronous robot maintained their own synchrony to it and achieved higher engagement compared with those that did not.


Telling Stories to Robots: The Effect of Backchanneling On A Child’s Storytelling

Hae Won Park, Mirko Gelsomini, Jin Joo Lee, Cynthia Breazeal

We developed a nonverbal backchanneling model to improve the ability for a social robot to interact with a child as an attentive listener. We provide an extensive analysis of young children’s nonverbal behavior with respect to how they encode and decode listener responses and speaker cues. Through a data collection of child dyads in peer-to-peer storytelling interactions, we identify attentive listener behaviors as well as speaker cues that prompt opportunities for listener backchannels. Based on our findings, we developed a backchannel opportunity prediction (BOP) model that detects four main speaker cue events based on prosodic features in speech. The rule-based model is capable of accurately predicting backchanneling opportunities in our corpora. We evaluate this model in a human-subjects study where children told stories to an audience of two robots, each with a different backchanneling strategy. We find that our BOP model produces contingent backchanneling responses that convey more attentive listening behavior, and children prefer telling stories to the BOP model robot.
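
To make the idea of a rule-based backchannel opportunity predictor concrete, here is a minimal sketch; the specific cues and thresholds below are hypothetical, not the four cue events identified in the paper.

```python
# Hypothetical rule-based detector in the spirit of the BOP model:
# fire a backchannel when a pause co-occurs with a falling pitch
# contour or a sharp energy drop, two classic listener-response cues.
def backchannel_opportunity(pitch_hz, energy, is_pause):
    """pitch_hz/energy: recent prosodic samples; is_pause: speech paused?"""
    falling_pitch = len(pitch_hz) > 1 and pitch_hz[-1] < 0.9 * pitch_hz[0]
    energy_drop = len(energy) > 1 and energy[-1] < 0.5 * max(energy)
    return is_pause and (falling_pitch or energy_drop)

# Example: pitch fell from 260 Hz to 210 Hz, then the child paused.
print(backchannel_opportunity([260, 240, 210], [0.8, 0.7, 0.6], True))
```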


Navigational Instruction Generation as Inverse Reinforcement Learning with Neural Machine Translation

Andrea Daniele, Mohit Bansal, Matthew Walter

Modern robotics applications that involve human-robot interaction require robots to be able to communicate with humans seamlessly and effectively. Natural language provides a flexible and efficient medium through which robots can exchange information with their human partners. Significant advancements have been made in developing robots capable of interpreting free-form instructions, but less attention has been devoted to endowing robots with the ability to generate natural language. We propose a navigational guide model that enables robots to generate natural language instructions that allow humans to navigate a priori unknown environments. We first decide which information to share with the user according to their preferences, using a policy trained from human demonstrations via inverse reinforcement learning. We then “translate” this information into a natural language instruction using a neural sequence-to-sequence model that learns to generate free-form instructions from natural language corpora. We evaluate our method on a benchmark route instruction dataset and achieve a BLEU score of 72.18% when compared to human-generated reference instructions. We additionally conduct navigation experiments with human participants that demonstrate that our method generates instructions that people follow as accurately and easily as those produced by humans.
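
For readers unfamiliar with the metric, a BLEU score like the 72.18% above measures n-gram overlap between generated and reference instructions. A single-sentence illustration with NLTK (toy sentences, not the paper’s dataset):

```python
# Sentence-level BLEU between a generated instruction and a human
# reference; the paper reports BLEU over a whole benchmark dataset.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "turn left at the chair then go to the end of the hall".split()
generated = "turn left at the chair and walk to the end of the hall".split()

score = sentence_bleu([reference], generated,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.2%}")
```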

Session #3: Robots in Education

Session Chair: Tony Belpaeme

Tue, Mar 7

Cellulo: Versatile Handheld Robots for Education

Ayberk Özgür, Séverin Lemaignan, Wafa Johal, Maria Beltran, Manon Briod, Léa Pereyre, Francesco Mondada, Pierre Dillenbourg
Nominated for Best Paper Award

In this article, we present Cellulo, a novel robotic platform that investigates three new ideas for robotics in education: designing the robots to be versatile and generic tools (instead of tools that focus on STEM teaching only), blending robots into the classroom by designing them to be pervasive objects and by creating tight interactions with (already pervasive) paper; and finally considering the practical constraints of real classrooms at every stage of the design. Our platform results from these considerations and builds on a unique combination of technologies: groups of handheld haptic-enabled robots, tablets and activity sheets printed on regular paper. The robots feature holonomic motion, haptic feedback capability and high accuracy localization through a microdot pattern overlaid on top of the activity sheets, while remaining affordable (robots cost about 125 Euros at the prototype stage) and classroom-friendly. We present the platform and report on our first interaction studies, involving about 230 children.


Adaptive robot language tutoring based on Bayesian knowledge tracing and predictive decision-making

Thorsten Schodde, Kirsten Bergmann, Stefan Kopp

In this paper, we present an approach to adaptive language tutoring in child-robot interaction. The approach is based on a dynamic probabilistic model that represents the inter-relations between the learner’s skills, her observed behavior in the tutoring interaction, and the tutoring action taken by the system. Implemented in a robot language tutor, the model enables the robot to trace the learner’s knowledge and to decide which skill to teach next and how to address it in a game-like tutoring interaction. Results of an evaluation study are discussed, demonstrating how participants in the adaptive tutoring condition successfully learned foreign-language words.
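
The Bayesian knowledge tracing core is compact enough to sketch: after each observed answer, the probability that the learner knows a skill is revised by Bayes’ rule plus a learning-opportunity term. Parameter values here are illustrative defaults, not the paper’s fitted ones.

```python
# Classic BKT update: p_know is P(skill known); p_guess and p_slip
# model lucky guesses and careless errors; p_learn is the chance of
# acquiring the skill at each practice opportunity.
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    if correct:
        evidence = p_know * (1 - p_slip)
        marginal = evidence + (1 - p_know) * p_guess
    else:
        evidence = p_know * p_slip
        marginal = evidence + (1 - p_know) * (1 - p_guess)
    posterior = evidence / marginal
    # Learning opportunity: the learner may acquire the skill this step.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability the child knows the word
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(f"P(known) = {p:.2f}")
```

A tutor in this style can then, for instance, pick the skill with the lowest P(known) as the next one to teach.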


Growing Mindset with a Social Robot Peer

Hae Won Park, Rinat Rosenberg-Kima, Maor Rosenberg, Goren Gordon, Cynthia Breazeal
Nominated for Best Paper Award

Mindset has been shown to have a large impact on people’s academic, social, and work achievements. A growth mindset, i.e., the belief that success comes from effort and grit, is a better indicator of higher achievements as compared to a fixed mindset, i.e., the belief that things are set and cannot be changed. Interventions aimed at promoting a growth mindset in children range from teaching about the brain’s ability to learn and change to playing computer games that grant brain-points for effort rather than success. This work explores a novel paradigm to foster a growth mindset in young children where they play a puzzle-solving game with a peer-like social robot. The social robot is fully autonomous and programmed with behaviors suggestive of it having either a growth mindset or a neutral mindset as it plays puzzle games with the child. We measure the mindset of children before and after interacting with the peer-like robot, in addition to measuring their problem-solving behavior when faced with a challenging puzzle. We found that children who played with a growth-mindset robot 1) self-reported having a stronger growth mindset, and 2) tried harder during a challenging task, as compared to children who played with the neutral-mindset robot. These results suggest that interacting with a peer-like social robot with a growth mindset can promote the same mindset in children.


Give Me a Break! Personalized Timing Strategies to Promote Learning in Robot-Child Tutoring

Aditi Ramachandran, Chien-Ming Huang, Brian Scassellati

A common practice in education to accommodate the short attention spans of children during learning is to provide them with non-task breaks for cognitive rest. Holding great promise to promote learning, robots can provide these breaks at times personalized to individual children. In this work, we investigate personalized timing strategies for providing breaks to young learners during a robot tutoring interaction. We build an autonomous robot tutoring system that monitors student performance and provides break activities based on a schedule personalized to performance. We conduct a field study to explore the effects of different strategies for providing breaks during tutoring. By comparing a fixed timing strategy with a reward strategy (break timing personalized to performance gains) and a refocus strategy (break timing personalized to performance drops), we show that the personalized strategies promote learning gains for children more effectively than the fixed strategy. Our results also show immediate benefits in enhancing efficiency and accuracy in completing math questions after personalized breaks, providing evidence for the restorative effects of the breaks when administered at the right time.
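
A minimal sketch of the two personalized strategies described above, with hypothetical window sizes and thresholds:

```python
# "reward" triggers a break after a performance gain; "refocus" after
# a drop. scores: per-question correctness/accuracy over time; window
# and delta are illustrative, not the paper's tuned values.
def should_break(scores, strategy, window=3, delta=0.15):
    if len(scores) < 2 * window:
        return False
    recent = sum(scores[-window:]) / window
    earlier = sum(scores[-2 * window:-window]) / window
    if strategy == "reward":
        return recent - earlier > delta    # reward a performance gain
    if strategy == "refocus":
        return earlier - recent > delta    # intervene on a performance drop
    raise ValueError(strategy)

print(should_break([1, 0, 0, 1, 1, 1], "reward"))   # True
print(should_break([1, 1, 1, 0, 0, 1], "refocus"))  # True
```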


Windfield: Learning Wind Meteorology with Handheld Haptic Robots

Ayberk Özgür, Wafa Johal, Francesco Mondada, Pierre Dillenbourg

This article presents a learning activity and its user study involving the Cellulo platform, a novel versatile robotic tool designed for education. In order to show the potential of Cellulo in the classroom as part of standard curricular activities, we designed a learning activity called Windfield that aims to teach the atmospheric formation mechanism of wind to early middle school children. The activity involves a didactic sequence, introducing the Cellulo robots as hot air balloons and enabling children to feel the wind force through haptic feedback. We present the user study, designed in the form of a real hour-long lesson, conducted with 24 children in 8 groups who had no prior knowledge of the subject. Collaborative metrics within groups and individual performance in learning the key concepts were measured in a completely automated manner, using only the hardware and software integrated into the platform. The results show that almost all participants learned the symmetric aspects of wind formation, while about half learned the more complex asymmetric vectorial aspects.


(Ir)relevance of Gender? On the Influence of Gender Stereotypes on Learning with a Robot

Natalia Reich-Stiebert, Friederike Eyssel

Education research has documented a trend that reflects gender-based differences in the choice of fields of study. This, in turn, contributes to an imbalance in the representation of men and women in particular professions: In the school context, female teachers predominantly teach stereotypically female areas of study like social sciences, whereas male teachers are mainly represented in stereotypically male domains like mathematics. Research further provides evidence that this gender-stereotyped division of labor in education and higher education significantly impacts students’ learning and motivation. Would gender-related stereotypes also bias learning processes with robots? This is plausible given that social robots are becoming steadily more popular in learning settings. Thus, should the next generation of education robots be ‘gendered’, and what impact would robot gender have on task performance, particularly in the context of a gender-stereotypical human-robot interaction (HRI) task? To investigate these issues, we examined the influence of robot gender on learning when completing either stereotypically female or stereotypically male learning tasks. 120 participants (60 females and 60 males) completed either stereotypically female or stereotypically male tasks with the support of an instructor robot whose gender we experimentally manipulated. The manipulation check indicated that participants recognized the robot’s alleged gender correctly. Importantly, our results suggest that prevailing gender stereotypes associated with learning do not apply to robots that perform gender-stereotypical tasks. Interestingly, though, our findings indicate that a mismatch between robot gender and task gender-typicality leads to increased willingness to engage in prospective learning processes with the robot. Our results are discussed with respect to future research on HRI and learning, and with regard to practical implications associated with the introduction of robots into educational settings.

alt.HRI

Session Chairs: Séverin Lemaignan and Heather Knight

Tue, Mar 7

A Robot Forensic Interviewer – The BAD, the GOOD, and the Undiscovered

Zachary Henkel, Cindy Bethel

The goal of this paper is to begin a discussion of the benefits, challenges, and ethical concerns related to the use of robots as intermediaries for obtaining sensitive information from children within the human-robot interaction (HRI), criminology, sociology, legal, and psychological communities. As active HRI researchers, trained by the National Child Advocacy Center in Child Forensic Interview Protocols, we would like to prompt a discussion about when it is appropriate to use a robot to gather sensitive information from children as part of a forensic investigative process. A detailed account of potential negative and positive impacts is presented as it relates to the use of robots as forensic interview partners. Open research questions, proposed research studies, and pathways toward deployment of robots as forensic interviewers are provided. We are not trying to eliminate the use of humans as forensic interviewers in cases of child maltreatment, but rather are exploring whether robots can be used to enhance this process and provide additional methods of investigation.


Development of an Emotion-Competent SLAM Agent

Johannes Feldmaier, Martin Stimpfl, Klaus Diepold

Emotions are a fundamental part of everyday life and an important topic in the development of artificial intelligence. We combine a Simultaneous Localization and Mapping (SLAM) algorithm with a model of emotion. The model of emotion generates a mapping from the quantitative figures of the SLAM process to human-like emotions. This enables the robot to communicate its current state to a human observer using emotional expressions. The paper reports on the design of the model of emotion, the results of the affective evaluation during an autonomous path-finding process, and their comparison to experimental data from a survey.


Wizard of Awwws: Exploring Psychological Impact on the Researchers in Social HRI Experiments

Daniel Rea, Denise Geiskkovitch, James Young

In social human-robot interaction (sHRI), people have studied social interactions with awkward, confrontational, or unsettling robots. In order to create these situations, researchers often secretly control the robot (the “Wizard of Oz”, WoZ, technique), use confederates (researchers pretending to be participants), or themselves create the desired social condition. While these studies may be antagonistic, they are designed to be ethical; when conducting a study, IRB (Institutional Review Board) processes are in place to assess the study design for potential risk to participants, and ultimately to protect the public. However, these processes do not generally involve assessment of the impact on the researchers conducting the study. In our own work, we have noted how researcher “wizards” in social HRI experiments, particularly those which place participants in awkward or confrontational situations, can themselves be negatively impacted by the experience when their experiment protocol has them antagonize, deceive, or argue with participants. In this paper, we explore how experimental design can impact the well-being of researchers, particularly wizards in social HRI experiments. Building on a psychological grounding for the impact on people who perform socially stressful actions, we evaluate the potential for researcher social stress in recent sHRI studies. Our summary and discussion of this survey result in recommendations for future HRI research to reduce the burden on wizards in their own experiments.


Robot-Human Interaction: A human speaker experiment

David St-Onge, Nicolas Reeves, Nataliya Petkova

This paper presents a novel reflection on interaction devices between humans and robots. Robots are most often seen as tools, extensions of the human body meant to serve its needs. However, the development of artificial intelligence forces us to reconsider that paradigm, and to ask ourselves who, in the course of the human-machine dialogue, is really in control. In the overwhelming majority of situations where robots and users are meant to collaborate or interrelate, users are required to fully trust the machine and its reactions. The authors propose a methodology for designing interfaces that question the very core of this trust issue: a reversed interface that allows the machine to physically control human beings as mere peripherals. After laying down the design principles of this approach, the authors present an art performance during which the consequences of this reversal are explored to their very limits. The radical approach of this research is made possible by the unconventional study context provided by the artistic nature of the attempt. The experimental feedback of the artist is discussed, followed by a survey of the audience’s reactions that allows significant conclusions to be drawn from the work.

Session #4: Personal Factors in HRI

Session Chair: Kerstin Dautenhahn

Wed, Mar 8

Do Sensory Preferences of Children with Autism Impact an Imitation Task with a Robot?

Pauline Chevalier, Gennaro Raiola, Brice Isableu, Jean-Claude Martin, Christophe Bazile, Adriana Tapus

In this paper, we sought to assess, through an experimental imitation task protocol using the Nao robot, whether the sensory profiles of children with Autistic Spectrum Disorder (ASD) influence their capabilities to imitate or to initiate gestures. We based our work on the hypothesis that children with an overreliance on proprioceptive cues and hyporeactivity to visual cues would have greater difficulty imitating, and would improve their skills more slowly, than children with an overreliance on visual cues and hyporeactivity to proprioceptive cues. Our subject pool of 12 children and teenagers with ASD participated in seven imitation sessions over eight weeks. As expected, we observed that children with an overreliance on proprioceptive cues and hyporeactivity to visual cues had more difficulty imitating the robot than the other children. Moreover, the children exhibited positive effects in their social behavior (gaze to the partner, imitations) toward a human partner after the sessions with the robot.


How to open a robot guided museum tour? Strategies to establish a focused encounter in HRI.

Raphaela Gehle, Karola Pitsch, Timo Dankert, Sebastian Wrede

On the basis of a Wizard-of-Oz video corpus from a museum guide scenario, we address the challenge of choosing when to open an interaction with a potential interaction partner. Using different sets of sensory data (external HD cameras, internal robot vision, and Kinect data) for analysis, we focus on describing the visitors’ behavior and the implications for opening an interaction with (a) a visitor who is mainly interested in looking at the exhibition and (b) a visitor who is accessible for interaction during his tour. We use a Wizard-of-Oz scenario to gather insights into the decision-making behind the stepwise process of opening. Analysis shows that the wizard’s decisions highly depend on monitoring of the situated visitor activities.


Predicting and Regulating Participation Equality in Human-robot Conversations: Effects of Age and Gender

Gabriel Skantze
Nominated for Best Paper Award

In this paper, we investigate participation equality in multi-party human-robot conversations. We analyse a dataset where pairs of users (540 in total) interact with a conversational robot exhibited at a technical museum. The data encompass a wide range of users in terms of age (adults/children) and gender, in different combinations. The results show that participation equality varies considerably across the different demographic combinations. We also show that it is possible for the robot to regulate the turn-taking in order to reduce the imbalance, but that different user groups react differently to these signals. Finally, we show that it is possible to predict the imbalance at an early stage in the interaction, in order to mitigate it as early as possible, and that knowledge about the users’ age and gender helps in this prediction.
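
A toy illustration of the prediction idea: a classifier over early-interaction features plus the pair’s age/gender. The features, data, and choice of model here are hypothetical stand-ins; the paper’s actual predictors may differ.

```python
# Hypothetical imbalance predictor on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns (all hypothetical): user A's early speech share, early
# turn-count difference, A is child, B is child, A is female, B is female.
rng = np.random.default_rng(0)
X = rng.random((100, 6))
y = (X[:, 0] > 0.6).astype(int)  # toy label: conversation became imbalanced

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:1]))  # P(balanced), P(imbalanced) for a new pair
```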


Threatening Flocks and Mindful Snowflakes: How Group Entitativity Affects Perceptions of Robots

Marlena Fraune, Yusaku Nishiwaki, Selma Sabanovic, Eliot Smith, Michio Okada

Robots are expected to become present in society in increasing numbers, yet few studies in human-robot interaction (HRI) go beyond one-to-one interaction to examine how the characteristics of robot groups affect HRI. In particular, people may show more negative or aggressive behavior toward entitative (i.e., cohesive) robot groups, as they do toward entitative human groups, compared to diverse groups. Furthermore, because people in collectivist (e.g., Japan) and individualistic (e.g., US) cultures respond to groups and cues of entitativity differently, entitative robot groups may affect people differently across such cultures. This study examines how robot Entitativity Condition (Single Robots, Diverse Group, Entitative Group) and Country (USA, Japan) affect emotions toward, mind attributions to, and willingness to interact with robots. Results indicate that Entitative robot groups, compared to Single robots, were viewed more negatively. Entitative robots were also found to be more threatening than Diverse robots. Diverse robot groups, compared to Single robots, were viewed as having more mind, and participants were more willing to interact with them. These findings were similar in the USA and Japan. This indicates that robot group entitativity and diversity are critical to keep in mind when designing robots.


S/he’s too Warm/Agentic! The Influence of Gender on Uncanny Reactions to Robots

Jahna Otterbacher, Michael Talias

Gender-based stereotypes are strong influences on human-human interactions. Given our tendency to anthropomorphize, it is not surprising that incorporating gender cues into a robot’s design can influence its perception and acceptance by humans. However, little is known about the interaction between human and robot gender. We focus on the role of gender in eliciting negative, affective reactions from observers (i.e., the “uncanny effect”). We create a corpus of YouTube videos featuring robots with female, male and no gender cues. Our experiment is grounded in Gray and Wegner’s (2012) model, which holds that uncanny reactions are driven by one’s interpretation of robot agency (i.e., ability to plan and control) and experience (i.e., ability to feel), which in turn, is driven by robot appearance and behavior (i.e., humanlikeness). Participants watched videos and completed questionnaires to gauge perceptions of robots as well as affective reactions. We used Structural Equation Modeling to test whether the model explains reactions of both men and women. For gender-neutral robots, it does; however, we find a salient human-robot gender interaction. Men’s uncanny reactions to robots with female cues are due to the perception of their ability to experience, while women’s negativity toward masculine robots is driven by the perception of agency. The result is interpreted in light of the “Big Two” dimensions of person perception, which underlie expectations for women to be warm and men to be agentic. When a robot meets these expectations too well, it increases the chances of an uncanny reaction in the other-gender observer.


Why Do They Refuse to Use My Robot?: Reasons of Non-Use Derived from a Long-Term Home Study

Maartje de Graaf, Somaya Ben Allouch, Jan van Dijk
Nominated for Best Paper Award

Research on why people refuse or abandon the use of technology in general, and robots specifically, is still scarce. Consequently, the academic understanding of people’s underlying reasons for non-use remains weak. However, refusers and abandoners are just as important as users because they can delay the acceptance and development process of innovative technologies. Investigating user experiences with robots in people’s private spaces over a longer period of time provides vital information about the design of these robots, including their acceptance and their refusal or abandonment by users. The results of our long-term home study show that each group of non-users provided different reasons for refusing or abandoning the use of the robot. Understanding the thoughts and motives behind non-use may identify obstacles to acceptance, and thereby enable designers to better adapt technological designs to the benefit of users.

Session #5: New Methodologies and Techniques

Session Chair: Paul Baxter

Wed, Mar 8

Marionette: Enabling On-Road Wizard-of-Oz Autonomous Driving Studies

Peter Wang, Srinath Sibi, Brian Mok, Wendy Ju

There is a growing need to study the interactions between drivers and their increasingly autonomous vehicles. This paper describes a low-cost, portable, and versatile driver interaction system that can be used in conjunction with commercial passenger vehicles for on-road partial and fully autonomous driving interaction studies. By conducting on-road Wizard-of-Oz studies in naturalistic settings, we can explore a range of driving conditions and scenarios far beyond what can be conducted in a laboratory simulator environment. The Marionette system uses off-the-shelf components to create bidirectional communication between the driving controls of a Wizard-of-Oz vehicle operator and a driving study participant. It signals to the study participant what the car is doing and enables researchers to study participant intervention in driving activity. Marionette is designed to be easily replicated for researchers studying partially autonomous driving interaction. This paper describes the design and evaluation of this system.


Steps Toward Participatory Design of Social Robots: Mutual Learning with Older Adults with Depression

Hee Rin Lee, Selma Sabanovic, Wan-ling Chang, Shinichi Nagata, Jennifer A. Piatt, Casey Bennett, David Hakken

Here we present the results of research aimed at developing a methodology for the participatory design of social robots, which are meant to be incorporated into social contexts (e.g. home, work) and to establish social relations with humans. In contrast to the dominant technologically driven robot development process, we aim to develop a socially meaningful and responsible approach to robot design using Participatory Design (PD), which starts with participants’ issues and concerns and develops robot concepts based on their socially constructed interpretations of the capabilities and applications of robotic technologies. We present the methodological insights from our ongoing PD field study aimed at developing design concepts for socially assistive robots with older adults diagnosed with depression and their therapists, and also identify remaining challenges in this project. In particular, we discuss how to support mutual learning between researchers and participants as well as bringing out more active participation of older adults as “designers” in the process as foundational aspects of the PD process. We conclude with our thoughts regarding how work in this application area can contribute to the further development of social robots and PD methodologies for developing technologies for domestic environments.


The Robotic Social Attributes Scale (RoSAS): Development and Validation

Colleen Carpinella, Alisa Wyman, Michael Perez, Steven Stroessner
Nominated for Best Paper Award

Accurately measuring perceptions of robots has become increasingly important as technological progressions permit more frequent and extensive interaction between people and robots. Across four studies, we develop and validate a scale to measure social perception of robots. Drawing from the Godspeed Scale (Bartneck et al., 2009) and from the psychological literature on social perception, we develop an 18-item scale (the Robotic Social Attributes Scale; RoSAS) to measure people’s judgments of the social attributes of robots. Factor analyses reveal three underlying scale dimensions—warmth, competence, and discomfort. We then validate the RoSAS and show that the discomfort dimension does not reflect a concern with unfamiliarity. Using images of robots that systematically vary in their machineness and gender-typicality, we show that the application of these social attributes to robots varies based on their appearance.
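
For orientation, the kind of analysis that extracts three dimensions from 18 items looks roughly like this in scikit-learn; the random data and the specific estimator/rotation are stand-ins for the authors’ psychometric procedure, which may differ.

```python
# Minimal factor-analysis sketch: 18 item ratings -> 3 latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.uniform(1, 9, size=(200, 18))   # 200 respondents x 18 items
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(ratings)
loadings = fa.components_                      # 3 factors x 18 item loadings
print(np.round(loadings, 2))
```

On real questionnaire data, items loading together on a factor would define a dimension such as warmth, competence, or discomfort.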


Affective Grounding in Human-Robot Interaction

Malte Jung

Participating in interaction requires coordination not only on content and process, as previously proposed, but also on affect. The term affective grounding is introduced to refer to the coordination of affect in interaction, with the purpose of building shared understanding about what behavior can be exhibited and how behavior is interpreted emotionally and responded to. Affective ground is achieved when interactants have reached shared understanding about how behavior should be interpreted emotionally. The paper contributes a review and critique of current perspectives on emotion in HRI. Further, it outlines how research on emotion in HRI can benefit from taking an affective grounding perspective and describes implications for the design of robots capable of participating in the coordination of affect in interaction.


It’s Not What You Do, It’s How You Do It: Grounding Uncertainty for a Simple Robot

Julian Hough, David Schlangen

For effective HRI, robots must not only make their intentions legible through their actions, but also ground the degree of uncertainty they have. We show how, in simple robots with spoken language understanding capacities, uncertainty can be grounded through principles of dialogue interaction, even without natural language generation. We present a model which makes this possible for simple robots with limited communication channels beyond the execution of task actions themselves. We implement our model in a simple pick-and-place robot and experiment with two simple strategies for grounding uncertainty. In an observer study, we show that participants observing interaction with the robot under the two different strategies were able to infer the degree of understanding the robot had internally, and, with the more uncertainty-expressive strategy, the internal uncertainty the robot had.
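
One simple strategy of this kind can be sketched as a confidence-thresholded policy; the thresholds and behaviors below are illustrative, not the paper’s exact strategies.

```python
# Ground uncertainty through action: act outright when confident,
# hedge when moderately confident, ask when too uncertain to act.
def act_on_command(interpretations):
    """interpretations: list of (target_object, confidence) pairs."""
    best, conf = max(interpretations, key=lambda pair: pair[1])
    if conf > 0.8:
        return f"pick up {best}"                  # confident: just act
    if conf > 0.4:
        return f"hover over {best} and wait"      # hedge: display uncertainty
    return "ask the user to repeat the command"   # too uncertain to act

print(act_on_command([("red block", 0.55), ("blue block", 0.45)]))
```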


Implicit Communication in a Joint Action

Ross Knepper, Christoforos Mavrogiannis, Julia Proft, Claire Liang
Nominated for Best Paper Award

Actions performed in the context of a joint activity comprise two aspects: functional and communicative. The functional component achieves the goal of the action, whereas its communicative component, when present, expresses some information to the actor’s partners in the joint activity. The interpretation of such communication requires leveraging information that is public to all participants, known as common ground. Much of human communication is performed through this implicit mechanism, and humans cannot help but infer some meaning — whether or not it was intended by the actor — from most actions. Robots must be cognizant of how their actions will be interpreted in context. We present a framework for robots to utilize this communicative channel on top of normal functional actions to work more effectively with human partners. We consider the role of the actor and the observer, both individually and jointly, in implicit communication, as well as the effects of timing. We also show how the framework maps onto various modes of action, including natural language and motion. We consider these modes of action in various human-robot interaction domains, including social navigation and collaborative assembly.

Session #6: Robots and People Adapting to Each Other

Session Chair: Anca Dragan

Wed, Mar 8

Human-Robot Mutual Adaptation in Shared Autonomy

Stefanos Nikolaidis, Yu Xiang Zhu, David Hsu, Siddhartha Srinivasa

In shared autonomy, user inputs and robot autonomy are combined to control a robot to achieve the user’s intended goal. However, if the operator is unaware of the robot’s capabilities and limitations, they may guide the robot towards a suboptimal goal. On the other hand, the robot may know the optimal way of completing the task. Our objective is to improve team performance by having the robot guide the operator towards a new goal, while retaining their trust. We achieve this through a human-robot mutual adaptation formalism. We integrate a bounded-memory adaptation model of the human into a partially observable stochastic model, which enables robot adaptation to the human: when the human is adaptable, the robot will guide the human towards an optimal goal, unknown to them in advance. Otherwise, it will adapt to the human, retaining their trust. Contrary to the collaborative scenarios examined in previous work, we account for partial observability of human and robot goals, and we explicitly penalize disagreement between the operator and the robot. We show in a human subject experiment that the proposed formalism significantly improved human-robot team performance, compared to the robot following participants’ preference, while retaining a high level of operator trust in the robot.
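
The bounded-memory idea can be caricatured in a few lines: the human recalls only the last k outcomes and, if those favor the robot’s demonstrated strategy, adopts it with some probability alpha (their “adaptability”). k, alpha, and the decision rule are illustrative; the paper embeds such a model in a partially observable planner with explicit trust considerations.

```python
# Toy bounded-memory adaptation model of the human teammate.
import random

def human_next_goal(history, own_goal, robot_goal, k=3, alpha=0.7):
    """history: recent booleans, True = the robot's strategy worked better."""
    recent = history[-k:]
    robot_better = bool(recent) and sum(recent) / len(recent) > 0.5
    if robot_better and random.random() < alpha:
        return robot_goal   # adaptable human adopts the robot's goal
    return own_goal         # otherwise keeps their original goal

print(human_next_goal([True, True, False, True], "left grasp", "top grasp"))
```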


Improving Robot Controller Interpretability and Transparency Through Autonomous Policy Explanation

Bradley Hayes, Julie Shah

Shared expectations and mutual understanding are critical facets of teamwork. Achieving these in human-robot collaborative contexts can be especially challenging, as humans and robots are unlikely to share a common language to convey intentions, plans, or justifications. Even in cases where human co-workers can inspect a robot’s control code, and particularly when statistical methods are used to encode control policies, there is no guarantee that meaningful insights into a robot’s behavior can be derived or that a human will be able to efficiently isolate the behaviors relevant to the interaction. We present a series of algorithms and an accompanying system that enables robots to autonomously synthesize policy descriptions and respond to both general and targeted queries by human collaborators. We demonstrate applicability to a variety of robot controller types including those that utilize conditional logic, tabular reinforcement learning, and deep reinforcement learning, synthesizing informative policy descriptions for collaborators and facilitating fault diagnosis by non-experts.


Is a Robot a Better Walking Partner If It Associates Utterances with Visual Scenes?

Ryusuke Totsuka, Satoru Satake, Takayuki Kanda, Michita Imai

We aim to develop a walking partner robot with the capability to select small-talk topics associated with visual scenes. We first collected video sequences from five different locations and prepared a dataset of small-talk topics associated with visual scenes. We then developed a technique to associate the visual scenes with the small-talk topics. We converted visual scenes into lists of words using an off-the-shelf vision library and formed a topic space with Latent Dirichlet Allocation (LDA), in which a list of words is transformed into a topic vector. Finally, the system selects the utterance most similar in the topic space. We tested our technique on the dataset, where it selected appropriate utterances 72% of the time, and conducted an outdoor user study in which participants took a walk with a small robot on their shoulder and engaged in small talk. We confirmed that participants rated the robot with our technique more highly than a robot that selected utterances randomly, because it selected appropriate utterances, and they also felt that the former was a better walking partner.
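
The matching step is straightforward to sketch with off-the-shelf tools: project both the scene’s word list and each candidate utterance into an LDA topic space and pick the closest utterance. The toy corpus and the use of cosine similarity are assumptions for illustration.

```python
# Scene-to-utterance matching via LDA topic vectors (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["tree park bench grass", "car road traffic light",
          "shop window sign street", "river bridge water boat"]
utterances = ["The park looks lovely today.", "Lots of traffic, isn't there?"]

vec = CountVectorizer()
X = vec.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

scene = lda.transform(vec.transform(["tree grass bench park"]))
topics = lda.transform(vec.transform(utterances))
best = cosine_similarity(scene, topics).argmax()
print(utterances[best])  # utterance closest to the scene in topic space
```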


Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration

Stefanos Nikolaidis, Swaprava Nath, Ariel Procaccia, Siddhartha Srinivasa

In human-robot teams, humans often start with an inaccurate model of the robot capabilities. As they interact with the robot, they infer the robot’s capabilities and partially adapt to the robot: they might change their actions based on the observed outcomes of their own and the robot’s actions, without adopting the robot policy as their own. We present a game-theoretic model of human partial adaptation to the robot, where the human responds to the robot actions by maximizing a reward function that changes stochastically over time, capturing the evolution of their expectations of the robot capabilities. The robot can then use this model to decide optimally between taking actions that reveal its capabilities to the human and taking the best action given the information that the human currently has. We prove that under certain observability assumptions, the optimal policy can be computed efficiently. We demonstrate through a human subject experiment that the proposed model significantly improves human-robot team performance, compared to policies that assume complete adaptation of the human to the robot.


Towards Adaptive Social Behavior Generation for Assistive Robots Using Reinforcement Learning

Jacqueline Hemminghaus, Stefan Kopp

In this paper we explore whether a social robot can learn, in and from a task-oriented interaction with a human user, how to employ different social behaviors to achieve interactional goals in specific situational circumstances. We present a multimodal behavior generation architecture that maps high-level behaviors with interactional functions onto low-level behaviors executable by a robot. While high-level behaviors are selected based on the state of the user as well as the interaction, reinforcement learning (Q-learning) is used within each behavior to adapt its local mapping onto lower-level behaviors. The approach is implemented and applied in a scenario in which a social robot (Furhat) assists a human player in solving a Memory game by guiding the user’s attention to specific objects. Results of an evaluation study demonstrate that participants are able to solve the Memory game faster with the adaptive, assistive robot.
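
The Q-learning component can be sketched in its textbook tabular form; the behavior set, learning rate, and reward are illustrative stand-ins for the architecture’s actual state and action spaces.

```python
# Tabular Q-learning over low-level behaviors realizing a high-level
# "guide attention" behavior.
import random

actions = ["gaze_shift", "head_nod", "point", "verbal_hint"]
Q = {}  # (state, action) -> value

def choose(state, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)  # explore
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.3, gamma=0.9):
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

s = ("user_distracted",)
a = choose(s)
update(s, a, reward=1.0, next_state=("user_attending",))
```

In such a setup the measured user reaction, e.g. whether attention actually shifted, would supply the reward after each executed behavior.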


The When, Where, and How: An Adaptive Robotic Info-Terminal for Care Home Residents – A long-term Study

Marc Hanheide, Denise Hebesberger, Tomas Krajnik
Nominated for Best Paper Award

Adapting to users’ intentions is a key requirement for autonomous robots in general, and in care settings in particular. In this paper, a comprehensive long-term study of a mobile robot providing information services to residents, visitors, and staff of a care home is presented, with a focus on adapting to when and where the robot should offer its services to best accommodate the users’ needs. Rather than following a fixed schedule, the presented system takes the opportunity of long-term deployment to explore the space of interaction possibilities while concurrently exploiting the learned model to provide better services. But in order to provide effective services to users in a care home, not only the when and where are relevant, but also how the information is provided and accessed. Hence, the usability of the deployed system is also studied specifically, in order to provide a comprehensive overall assessment of a robotic info-terminal implementation in a care setting. Our results back our hypotheses, (i) that learning a spatiotemporal model of users’ intentions improves the efficiency and usefulness of the system, and (ii) that the specific information sought is indeed dependent on the location at which the info-terminal is offered.
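
The explore/exploit trade-off over when and where can be caricatured with per-(location, hour) success estimates and an epsilon-greedy choice; the bins and epsilon are illustrative, and the deployed system learns a richer spatiotemporal model.

```python
# Epsilon-greedy scheduling of the info-terminal over (location, hour).
import random
from collections import defaultdict

visits = defaultdict(int)     # (location, hour) -> times offered
successes = defaultdict(int)  # (location, hour) -> interactions observed

def choose_slot(locations, hour, epsilon=0.2):
    slots = [(loc, hour) for loc in locations]
    if random.random() < epsilon:
        return random.choice(slots)  # explore an under-sampled slot
    # Exploit: pick the slot with the best estimated interaction rate.
    return max(slots, key=lambda s: successes[s] / (visits[s] + 1))

def record(slot, interacted):
    visits[slot] += 1
    successes[slot] += int(interacted)

slot = choose_slot(["lounge", "corridor", "dining room"], hour=10)
record(slot, interacted=True)
```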

JHRI Session

Session Chair: tba

Wed, Mar 8

Robots Have Needs Too: How and Why People Adapt Their Proxemic Behavior to Improve Robot Social Signal Understanding

Ross Mead, Maja Mataric

Human preferences of distance (proxemics) to a robot significantly impact the performance of the robot’s automated speech and gesture recognition during face-to-face, social human-robot interactions. This work investigated how people respond to a sociable robot based on its performance at different locations. We performed an experiment in which the robot’s ability to understand social signals was artificially attenuated by distance. Participants (N = 180) instructed the robot using speech and pointing gestures, provided proxemic preferences before and after the interaction, and responded to a questionnaire. Our analysis of questionnaire responses revealed that robot performance factors—rather than human-robot proxemics—are significant predictors of user evaluations of robot competence, anthropomorphism, engagement, likability, and technology adoption. Our behavioral analysis suggests that human proxemic preferences change over time as users interact with and come to understand the needs of the robot, and those changes improve robot performance.


Learning Assistance by Demonstration: Smart Mobility With Shared Control and Paired Haptic Controllers

Harold Soh, Yiannis Demiris

In this paper, we present a framework, probabilistic model, and algorithm for learning shared control policies by observing an assistant. This is a methodology we refer to as Learning Assistance by Demonstration (LAD). As a subset of robot Learning by Demonstration (LbD), LAD focuses on the assistive element by explicitly capturing how and when to help. The latter is especially important in assistive scenarios—such as rehabilitation and training—where there exists multiple and possibly conflicting goals. We formalize these notions in a probabilistic model and develop an efficient online mixture of experts (OME) algorithm, based on sparse Gaussian processes (GPs), for learning the assistive policy. Focusing on smart mobility, we couple the LAD methodology with a novel paired-haptic-controllers setup for helping smart wheelchair users navigate their environment. Experimental results with 15 able-bodied participants demonstrate that our learned shared control policy improved driving performance (as measured in lap seconds) by 43 s (a speedup of 191%). Furthermore, survey results indicate that the participants not only performed better quantitatively, but also qualitatively felt the model assistance helped them complete the task.


Covert Robot-Robot Communication: Human Perceptions and Implications for HRI

Tom Williams, Priscilla Briggs, Matthias Scheutz

As future human-robot teams are envisioned for a variety of application domains, researchers have begun to investigate how humans and robots can communicate effectively and naturally in the context of human-robot team tasks. While a growing body of work is focused on human-robot communication and human perceptions thereof, there is currently little work on human perceptions of robot-robot communication. Understanding how robots should communicate information to each other in the presence of human teammates is an important open question for human-robot teaming. In this paper, we present two human-robot interaction (HRI) experiments investigating the human perception of verbal and silent robot-robot communication as part of a human-robot team task. The results suggest that silent communication of task-dependent, human-understandable information among robots is perceived as creepy by cooperative, co-located human teammates. Hence, we propose that, absent specific evidence to the contrary, robots in cooperative human-robot team settings need to be sensitive to human expectations about overt communication, and we encourage future work to investigate possible ways to modulate such expectations.


“Hands Up, Don’t Shoot!” HRI and the Automation of Police Use of Force

Peter Asaro

This paper considers the ethical challenges facing the development of robotic systems that deploy violent and lethal force against humans. While the use of violent and lethal force is not usually acceptable for humans or robots, police officers are authorized by the state to use violent and lethal force in certain circumstances in order to keep the peace and protect individuals and the community from an immediate threat. With the increased interest in developing and deploying robots for law enforcement tasks, including robots armed with weapons, the question arises as to how to design human-robot interactions (HRIs) in which violent and lethal force might be among the actions taken by the robot, or whether to preclude such actions altogether. This is what I call the “deadly design problem” for HRI. While it might be possible to design a system to recognize various gestures, such as “Hands up, don’t shoot!,” there are many more challenging and subtle aspects to the problem of implementing existing legal guidelines for the use of force in law enforcement robots. After examining the key legal and technical challenges of designing interactions involving violence, this paper concludes with some reflections on the ethics of HRI design raised by automating the use of force in policing. In light of the serious challenges in automating violence, it calls upon HRI researchers to adopt a moratorium on designing any robotic systems that deploy violent and lethal force against humans, and to consider ethical codes and laws to prohibit such systems in the future.


Session #7: New Techniques for Remotely Controlling Robots

 

Session Chair: Aaron Steinfeld

Thu, Mar 9

Probing the design space of a telepresence robot gesture arm with low fidelity prototypes

Patrik Björnfot, Victor Kaptelinin

The general problem addressed in this paper is supporting more efficient communication between remote users controlling telepresence robots and people in the local setting. The design of most telepresence robots does not allow them to perform gestures. Given the key role of pointing in human communication, exploring design solutions for providing telepresence robots with deictic gesturing capabilities is, arguably, a timely research issue for Human-Robot Interaction. To address this issue we conducted an empirical study in which a set of low fidelity prototypes, illustrating various designs of a robot’s gesture arm, were assessed by participants (N=18). The study employed a mixed-method approach: a combination of a controlled experiment, an elicitation study, and design provocation. The evidence collected in the study reveals participants’ assessments of the designs used in the study and provides insights into participants’ attitudes and expectations regarding gestural communication with telepresence robots in general.


A Motion Retargeting Method for Effective Mimicry-based Teleoperation of Robot Arms

Daniel Rakita, Bilge Mutlu, Michael Gleicher

In this paper, we introduce a novel interface for teleoperation that allows novice users to effectively and intuitively control robot manipulators. The premise of our method is that an interface that allows a user to direct a robot using the natural 6-DOF space of his/her hand would afford effective direct control of a robot arm. However, a direct mapping between the user’s hand and the robot’s end effector is impractical because the robot has different kinematic and speed capabilities than the human arm. Our key idea is that by relaxing the constraint of a direct mapping between hand position and orientation and end-effector configuration, a system can provide the user with the feel of direct control, yet be able to achieve the practical requirements for telemanipulation, such as motion smoothness and singularity avoidance. We present methods for implementing a motion retargeting solution that achieves this relaxed control using constrained optimization and describe a system that utilizes it to provide real-time control of a robot arm. We demonstrate the effectiveness of our approach in a user study that shows that novice users can complete a range of tasks more efficiently and enjoyably using our relaxed-mimicry-based interface than with standard interfaces.
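
As a toy illustration of that relaxation (not the paper's actual solver or kinematics), the sketch below trades off end-effector tracking against joint-space smoothness in a single objective for a 2-link planar arm; the link lengths, weights, and BFGS solver are assumptions.

```python
# Sketch of relaxed mimicry: track the operator's hand approximately
# while penalizing large joint motion, instead of enforcing an exact
# hand-to-end-effector mapping. Toy 2-link planar arm; all constants
# here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8  # link lengths (assumed)

def fk(q):
    """Forward kinematics: end-effector position of the planar arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def retarget_step(hand_pos, q_prev, w_match=1.0, w_smooth=0.5):
    """One timestep: find joint angles that roughly track the hand
    while staying close to the previous configuration (smoothness)."""
    def cost(q):
        match = np.sum((fk(q) - hand_pos) ** 2)    # relaxed tracking term
        smooth = np.sum((q - q_prev) ** 2)         # smoothness term
        return w_match * match + w_smooth * smooth
    return minimize(cost, q_prev, method="BFGS").x

# Feed a short stream of hand positions through the retargeter.
q = np.array([0.3, 0.5])
for hand in [np.array([1.2, 0.8]), np.array([1.25, 0.85])]:
    q = retarget_step(hand, q)
    print(q.round(3), fk(q).round(3))
```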


A Comparison of Remote Robot Teleoperation Interfaces for General Object Manipulation

David Kent, Carl Saldanha, Sonia Chernova

Robust remote teleoperation of high-DOF manipulators is of critical importance across a wide range of robotics applications. Contemporary robot manipulation interfaces primarily utilize a free-positioning pose specification approach to independently control each axis of translation and orientation in free space. In this work, we present two novel interfaces, constrained positioning and point-and-click, which incorporate scene information, including points of interest and local surface geometry, into the grasp specification process. We also present results of a user study evaluating the effects of increased use of scene information in grasp pose specification algorithms for general object manipulation. The results of our study show that constrained positioning and point-and-click outperform the widely used free-positioning approach, significantly reducing both the number of grasping errors and the number of user interactions required to specify poses. Furthermore, the point-and-click interface significantly increased the number of tasks users were able to complete.


Haptic Shape-Based Management of Robot Teams in Cordon and Patrol

Samuel McDonald, Mark Colton, Kristopher Alder, Michael Goodrich

There is a growing need to develop effective interaction methods that enable a single operator to manage a team of multiple robots. A novel approach is presented that treats the team as a moldable volume, in which deformations of the volume correspond to changes in team shape. The team possesses a level of autonomy that allows it to travel to and surround buildings of interest in a patrol and cordon scenario. During surround mode, the operator explores or manipulates the team shape to create desired formations around a building. A spacing interaction method also allows the operator to adjust how robots are spaced within the current shape. Separate haptic feedback is developed for each method to allow the operator to “feel” the shape or spacing manipulation. During travel mode, the operator chooses desired travel locations and receives feedback to help identify how and where the team travels. Results from a human-subject experiment suggest that haptic feedback significantly improves operator performance in a reconnaissance task when task demand is higher, but may slightly increase operator workload. In the context of the experimental setup, these results suggest that haptic feedback may contribute to heads-up control of a team of autonomous robots. There were no significant differences in levels of situation awareness due to haptic feedback in this study.
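
As a rough sketch of the shape-and-spacing abstraction (the paper's control and haptic rendering are far richer), the snippet below places a team along a closed parametric curve with an adjustable spacing bias; the ellipse and the warping function are assumptions.

```python
# Illustrative "moldable shape" placement: robots sit on a closed
# curve around a building, with a bias parameter that warps the
# otherwise uniform spacing. Curve and warp are assumptions.
import numpy as np

def team_positions(n_robots, a=10.0, b=6.0, bias=0.0):
    """Positions on an ellipse with semi-axes a and b. bias in (-1, 1)
    warps spacing away from uniform while preserving robot ordering."""
    u = np.linspace(0.0, 1.0, n_robots, endpoint=False)
    theta = 2 * np.pi * u + bias * np.sin(2 * np.pi * u)
    return np.stack([a * np.cos(theta), b * np.sin(theta)], axis=1)

print(team_positions(6, bias=0.3).round(2))
```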


Design and Evaluation of Adverb Palette: A GUI for Selecting Tradeoffs in Multi-objective Optimization Problems

Meher Shaikh, Michael Goodrich

An important part of expressing human intent is identifying acceptable tradeoffs among competing performance objectives. We present and evaluate a set of graphical user interfaces (GUIs) that are designed to allow a human to express intent by expressing desirable tradeoffs. The GUIs require an algorithm that identifies the set of Pareto optimal solutions to the multi-objective decision problem: solutions for which no alternative exists that is better in one objective without being worse in another. Given the Pareto set, the GUIs provide different ways for a human to express intent by exploring tradeoffs between objectives; once a tradeoff is selected, the solution is chosen. The GUI designs are applied to interactive human-robot path selection for a robot in an urban environment, but they can be applied to other tradeoff problems. A user study evaluates the GUI designs by requiring users to select a tradeoff that satisfies a specified mission intent. Results of the user study suggest that GUIs designed around an artist’s-palette metaphor can be used to express intent without incurring unacceptable levels of human workload.
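
For readers unfamiliar with the Pareto-set input these GUIs assume, a minimal worked example (the labels and costs are made up): given candidate solutions scored on two objectives to be minimized, keep only the non-dominated ones.

```python
# Compute the Pareto set of candidate solutions. Each solution has a
# tuple of costs (lower is better); a solution is kept if no other
# solution is at least as good everywhere and strictly better somewhere.
def pareto_set(solutions):
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))
    return [(label, s) for label, s in solutions
            if not any(dominates(t, s) for _, t in solutions)]

# Hypothetical paths scored on (travel time, exposure).
paths = [("A", (10, 5)), ("B", (8, 7)), ("C", (12, 6)), ("D", (9, 4))]
print(pareto_set(paths))   # -> [('B', (8, 7)), ('D', (9, 4))]
```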


Movers, Shakers, and Those Who Stand Still: Visual attention-grabbing techniques in robot tele-operation

Daniel Rea, Stela Seo, Neil Bruce, James Young

We designed and evaluated a series of teleoperation interface techniques that aim to draw operator attention while mitigating the negative effects of interruption. Monitoring live teleoperation video feeds, for example to search for survivors in search and rescue, can be cognitively taxing, particularly for operators simultaneously driving a robot or monitoring multiple cameras. To reduce workload, emerging computer vision techniques can automatically identify and indicate (cue) salient points of potential interest for the operator. However, it is not clear how to cue such points to a preoccupied operator, whether cues would distract and hinder operators, and how the design of the cue may impact operator cognitive load, attention drawn, and primary task performance. In this paper, we detail our iterative design process for creating a range of visual attention-grabbing cues that are grounded in the psychological literature on human attention, and two formal evaluations that measure attention-grabbing capability and impact on operator performance. Our results show that visually cueing on-screen points of interest does not distract operators and that operators perform poorly without the cues; we further detail how particular cue design parameters impact operator cognitive load and task performance. Finally, from this process we provide original, tested, and theoretically grounded cues for drawing attention in teleoperation.


Session #8: Trust and Privacy

 

Session Chair: Alan Wagner

Thu, Mar 9

Evaluating Effects of User Experience and System Transparency on Trust in Automation

Xi Yang, Vaibhav Unhelkar, Julie Shah

Existing research assessing human operators’ trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human’s entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of “area under the trust curve” than by the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the “cry wolf” effect — wherein human operators begin to reject an automated system due to repeated false alarms.
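
A minimal sketch of the two measures being compared, on made-up trust samples: the traditional measure is the final post-experiment rating, while "trust of entirety" is approximated here by the time-normalized area under the trust curve.

```python
# Compare a post-experiment trust rating with the area under the
# trust curve. The sampling times and values are illustrative.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])       # minutes into the task
trust = np.array([4.0, 3.2, 3.5, 4.1, 4.4, 4.5])   # e.g., 7-point scale samples

post_experiment = trust[-1]                         # traditional endpoint measure

# Trapezoidal area under the curve, normalized by total duration.
auc = np.sum((trust[1:] + trust[:-1]) / 2 * np.diff(t)) / (t[-1] - t[0])

print(f"post-experiment trust:              {post_experiment:.2f}")   # 4.50
print(f"trust of entirety (normalized AUC): {auc:.2f}")               # 3.89
```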


Do you want your autonomous car to drive like you?

Chandrayee Basu, Qian Yang, David Hungerman, Anca Dragan, Mukesh Singhal

With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. A common answer is that they should be adopting their users’ driving style, which makes the assumption that users want their cars to drive like they do – aggressive drivers want aggressive cars, defensive drivers want defensive cars for the sake of comfort. In this paper, we put that assumption to the test. We find that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. These results open the door for learning what the user’s preferred style will be, by potentially learning their driving style but then purposefully deviating from it.


Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security

Serena Booth, James Tompkin, Krzysztof Gajos, Jim Waldo, Hanspeter Pfister, Radhika Nagpal

Can overtrust in robots compromise physical security? We conducted a series of experiments in which a robot positioned outside a secure-access student dormitory asked passersby to assist it in gaining access. We found individual participants were more likely to assist the robot in exiting (40% assistance rate) than in entering (19%). When the robot was disguised as a food delivery agent for the fictional start-up Robot Grub, individuals were more likely to assist the robot in entering (76%). Groups of people were more likely than individuals to assist the robot in entering (71%). Lastly, we found participants who identified the robot as a bomb threat were just as likely to open the door (87%) as those who did not. Thus, we demonstrate that overtrust—the unfounded belief that the robot does not intend to deceive or carry risk—can represent a significant threat to physical security.


Framing Effects on Privacy Concerns about a Home Telepresence Robot

Matthew Rueben, Frank J. Bernieri, Cindy M. Grimm, William D. Smart

Privacy-sensitive robotics is an emerging area of HRI research. Judgments about privacy would seem to be context-dependent, but none of the promising work on contextual “frames” has focused on privacy concerns. This work studies the impact of contextual “frames” on local users’ privacy judgments in a home telepresence setting. Our methodology consists of using an online questionnaire to collect responses to animated videos of a telepresence robot after framing people with an introductory paragraph. The results of four studies indicate a large effect of manipulating the robot operator’s identity between a stranger and a close confidante. It also appears that this framing effect persists throughout several videos and even after subjects are re-framed. These findings serve to caution HRI researchers that a change in frame could cause their results to fail to replicate or generalize. We also recommend that robots be designed to encourage or discourage certain frames. Researchers should extend this work to different types of frames and over longer periods of time.


Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI

Thomas Arnold, Matthias Scheutz

HRI research has yielded intriguing empirical results connected to ethics and how we react in social contexts with robots, even though much of this work has focused on short-term, one-on-one interaction. In this paper, we point to the need to investigate the longer-term effects of ongoing interactions with robots — individually and in groups, with a single robot or more. We specifically examine three areas: 1) the primacy and implicit dynamics of bodily perception, 2) the competing interests at work in a single robot-human interaction, and 3) the social intricacy of multiple agents — robots and human beings — communicating and making decisions. While these areas are not exhaustive by any means, we find they yield concrete directions for how HRI can contribute to a widening, intensifying set of ethical debates with critical empirical insight, starting to stake out more of the ethical landscape in HRI.


Session #9: Teaching Robots

 

Session Chair: Séverin Lemaignan

Thu, Mar 9

Code3: A System for End-to-End Programming of Mobile Manipulator Robots for Novices and Experts

Justin Huang, Maya Cakmak

This paper introduces Code3, a system for user-friendly and rapid programming of mobile manipulator robots. The system enables general programmers with little to no robotics experience to program robots. Code3 consists of three integrated components: perception, manipulation, and high-level programming. The perception component helps users define a library of object and scene parts that the robot can later detect. The manipulation component lets users define actions for manipulating objects or scene parts through programming by demonstration techniques. Finally, the high-level programming component offers a drag-and-drop interface with which users can define logic and control flow to accomplish a task using their previously specified perception and manipulation capabilities. We present findings from a two-session user study with non-roboticist programmers (N=10) that demonstrate their ability to quickly learn Code3 and program a PR2 robot to do useful tasks. We also demonstrate how an expert can use the system to program complex tasks in orders of magnitude less time than it would take to code by hand in traditional robot programming frameworks such as ROS.


Simplified Programming of Re-usable Skills on a Safe Industrial Robot – Prototype and Evaluation

Maj Stenmark, Mathias Haage, Elin Anna Topp

The development of robust non-expert programming systems is a long-standing challenge in robotics, now emphasized by recently emerged collaborative industrial robots with a new feature set, such as built-in force-controlled motion, vision, 7-degree-of-freedom arms, and dual arms. These features, and the fact that the operator is able to stay in close proximity during both programming and execution phases, call for a revisit of shop-floor programming tools. This paper presents a tool prototype for iconic robot programming with a hybrid programming and execution mode. The tool was evaluated with 21 non-expert users with varying programming and robotics experience, including one nine-year-old. We also present a comparison of programming times for an expert robot programmer using traditional tools versus the new method: the expert could program the same tasks in one fifth of the time compared to traditional tools, and the non-experts were able to program and debug a LEGO building task using the robot within 30 minutes.


Situated Tangible Robot Programming

Yasaman Sefidgar, Prerna Agarwal, Maya Cakmak
Nominated for Best Paper Award

This paper introduces situated tangible robot programming, in which a robot is programmed by placing specially designed tangible “blocks” in its workspace. These blocks are used for annotating objects, locations, or regions, and for specifying actions and their ordering. The robot compiles a program by detecting blocks and objects in its workspace and grouping them into instructions by solving constraints. We present a proof-of-concept implementation using blocks with unique visual markers in a pick-and-place task domain. Three user studies evaluate the intuitiveness and learnability of situated tangible programming and iterate on the block design. We characterize common challenges and gather feedback on how to further improve the design of the blocks. Our studies demonstrate that people can interpret, generalize, and create many different situated tangible programs with minimal instruction, or even with no instruction at all.
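
To illustrate the compile step in miniature (a toy sketch, not the paper's constraint solver): detected blocks carry an action and an ordering tag, and each is bound to the nearest detected object. All names and coordinates are hypothetical.

```python
# Toy "compilation" of situated tangible blocks into an ordered program:
# each detected block is associated with the nearest detected object.
import math

# (action, order_tag, x, y) as a perception module might report them.
blocks = [("place", 2, 0.80, 0.60),
          ("pick",  1, 0.20, 0.30)]
objects = [("red_cup", 0.25, 0.28), ("tray", 0.82, 0.63)]

def nearest_object(bx, by):
    return min(objects, key=lambda o: math.hypot(o[1] - bx, o[2] - by))[0]

program = [(order, action, nearest_object(x, y))
           for action, order, x, y in sorted(blocks, key=lambda b: b[1])]
print(program)   # [(1, 'pick', 'red_cup'), (2, 'place', 'tray')]
```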


Not your cup of tea? How teaching a robot can increase perceived self-efficacy in HRI and technology acceptance

Astrid Rosenthal-von der Pütten, Nikolai Bock, Katharina Brockmann

The goal of this work is to explore the influence of do-it-yourself customization of a robot on technologically experienced students’ and inexperienced elderly users’ perceived self-efficacy in HRI, uncertainty, and technology acceptance. We introduce the Self-Efficacy in HRI Scale and present two experimental studies. In study 1 (students, n=40) we found that actively teaching a robot objects relevant for a subsequent social interaction significantly increases perceived self-efficacy in HRI in comparison to reading a fact sheet about the robot’s capabilities. Moreover, interacting with the robot itself, regardless of the previous treatment, increased self-efficacy. In a second study with elderly users (n=60) we could replicate the positive effect of the interaction on self-efficacy, but not the effect of do-it-yourself customization by training the robot. We discuss limitations of the setting and implications for questionnaire design for elderly participants.