Last updated on October 12, 2019. This conference program is tentative and subject to change.
Technical Program for Tuesday, October 15, 2019
|
TuAT1 |
Room T8 |
Cognitive Interaction Design |
Regular Session |
Chair: Behera, Laxmidhar | IIT Kanpur |
Co-Chair: Orlandini, Andrea | National Research Council of Italy |
|
10:30-10:45, Paper TuAT1.1 | |
Learning Optimal Parameterized Policy for High Level Strategies in a Game Setting |
Prakash, Ravi (Indian Institute of Technology, Kanpur), Vohra, Mohit (Indian Institute of Technology, Kanpur), Behera, Laxmidhar (IIT Kanpur) |
Keywords: Machine Learning and Adaptation, Social Learning and Skill Acquisition Via Teaching and Imitation, Motivations and Emotions in Robotics
Abstract: Acquiring complex and interactive robot manipulation skills, such as playing a game of table tennis against a human opponent, is a multifaceted challenge and a novel problem. Accurate trajectory generation in such dynamic situations and an appropriate controller to respond to the incoming table tennis ball from the opponent are only prerequisites to winning the game. Decision making is a major part of an intelligent robot, and a policy is needed to choose and execute the action that receives the highest reward. In this paper, we address the important problem of how to learn the higher-level optimal strategies that enable competitive behaviour with humans in such an interactive game setting. This paper presents a novel technique to learn a higher-level strategy for the game of table tennis using P-Q Learning (a mixture of Pavlovian learning and Q-learning) to learn a parameterized policy. The cooperative learning framework of the Kohonen Self-Organizing Map (KSOM) along with Replay Memory is employed for faster strategy learning in this short-horizon problem. The strategy is learnt in simulation, using a simulated human opponent and an ideal robot that can perform hitting motions in its workspace accurately. We show that our method is able to improve the average received reward significantly in comparison to other state-of-the-art methods.
|
|
10:45-11:00, Paper TuAT1.2 | |
Learning Context-Sensitive Strategies in Space Fortress |
Agarwal, Akshat (Carnegie Mellon University), Hope, Ryan (Carnegie Mellon University), Sycara, Katia (Carnegie Mellon University) |
Keywords: Machine Learning and Adaptation, Evaluation Methods and New Methodologies
Abstract: Research in deep reinforcement learning (RL) has coalesced around improving performance on benchmarks like the Arcade Learning Environment. However, these benchmarks do not emphasize two important characteristics that are often present in real-world domains: requirement of changing strategy conditioned on latent contexts, and temporal sensitivity. As a result, research in RL has not given these challenges their due, resulting in algorithms which do not understand critical changes in context, and have little notion of real world time. This paper introduces the game of Space Fortress as a RL benchmark which specifically targets these characteristics. We show that existing state-of-the-art RL algorithms are unable to learn to play the Space Fortress game, and then confirm that this poor performance is due to the RL algorithms' context insensitivity. We also identify independent axes along which to vary context and temporal sensitivity, allowing Space Fortress to be used as a testbed for understanding both characteristics in combination and also in isolation. We release Space Fortress as an open-source Gym environment.
|
|
11:00-11:15, Paper TuAT1.3 | |
Estimating Optimal Placement for a Robot in Social Group Interaction |
Pathi, Sai Krishna (Örebro University), Kristofferson, Annica (Mälardalen University), Kiselev, Andrey (Orebro University), Loutfi, Amy (Örebro University) |
Keywords: Social Intelligence for Robots, Creating Human-Robot Relationships, Cooperation and Collaboration in Human-Robot Teams
Abstract: In this paper, we present a model to propose an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to the F-formation theory. The method automatically calculates a suitable placement for the robot. An evaluation of the method has been performed by conducting an experiment where participants stand in different formations and a robot is teleoperated to join the group. In one condition, the operator positions the robot according to the specified location given by our algorithm. In another condition, operators have the freedom to position the robot according to their personal choice. Follow-up questionnaires were administered to determine which of the placements were preferred by the participants. The results indicate that the proposed method for automatic placement of the robot is supported by the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as the results from user experiments to verify the quality of this method. These results suggest that teleoperated robots such as mobile robot telepresence systems could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
|
|
11:15-11:30, Paper TuAT1.4 | |
ROS-TiPlEx: How to Make Experts in A.I. Planning and Robotics Talk Together and Be Happy |
La Viola, Carlo (ISTC-CNR), Orlandini, Andrea (National Research Council of Italy), Umbrico, Alessandro (National Research Council of Italy), Cesta, Amedeo (CNR -- National Research Council of Italy, ISTC) |
Keywords: Innovative Robot Designs
Abstract: This paper presents a novel comprehensive framework called ROS-TiPlEx (Timeline-based Planning and Execution with ROS) to provide a shared environment in which experts in robotics and planning can easily interact to, respectively, encode information about low-level robot control and define task planning and execution models. ROS-TiPlEx aims at facilitating the interaction between both kinds of experts, thus enhancing and possibly speeding up the process of an integrated control design.
|
|
11:30-11:45, Paper TuAT1.5 | |
Robot with an Olfactory Display: Decorating Its Movements by Smells |
Senbonmatsu, Hikaru (University of Tsukuba), Tanaka, Fumihide (University of Tsukuba) |
Keywords: Non-verbal Cues and Expressiveness, Novel Interfaces and Interaction Modalities, Creating Human-Robot Relationships
Abstract: This study explored olfactory displays for social robots. In particular, we tested decorating robot movements with smells as a form of nonverbal expression. To this end, two prototype devices that enabled a humanoid robot to present smells during its movements were developed based on the following design requirements: (1) the smell presentation had to be synchronized with the robot movements, (2) the devices could be easily mounted on the robot, (3) the devices could present and switch between multiple smells, and (4) the intensity of the smell presentation was controllable. Initial pilot tests were conducted with human participants.
|
|
11:45-12:00, Paper TuAT1.6 | |
Learning Sequential Human-Robot Interaction Tasks from Demonstrations: The Role of Temporal Reasoning |
Carpio Mazariegos, Estuardo Rene (University of New Hampshire), Clark-Turner, Madison (University of New Hampshire), Begum, Momotaz (University of New Hampshire) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Machine Learning and Adaptation, Robots in Education, Therapy and Rehabilitation
Abstract: There are many human-robot interaction (HRI) tasks that are highly structured and follow a certain temporal sequence. Learning such tasks from demonstrations requires understanding the underlying rules governing the interactions. This involves identifying and generalizing the key spatial and temporal features of the task and capturing the high-level relationships among them. Despite its crucial role in sequential task learning, temporal reasoning is often ignored in existing learning from demonstration (LfD) research. This paper proposes a holistic LfD framework that learns the underlying temporal structure of sequential HRI tasks. The proposed Temporal-Reasoning-based LfD (TR-LfD) framework relies on an automated spatial reasoning layer to identify and generalize relevant spatial features, and a temporal reasoning layer to analyze and learn the high-level temporal structure of an HRI task. We evaluate the performance of this framework by learning a well-explored task in HRI research: robot-mediated autism intervention. The source code for this implementation is available at https://github.com/AssistiveRoboticsUNH/TR-LfD.
|
|
TuAT2 |
Room T2 |
Human Robot Interaction |
Regular Session |
Chair: Indurkhya, Bipin | Jagiellonian University |
Co-Chair: Edwards, Autumn | Western Michigan University |
|
10:30-10:45, Paper TuAT2.1 | |
Generation of Expressive Motions for a Tabletop Robot Interpolating from Hand-Made Animations |
Mier, Gonzalo (Pablo De Olavide University), Caballero, Fernando (Universidad Pablo De Olavide), Nakamura, Keisuke (Honda Research Institute Japan Co., Ltd), Merino, Luis (Universidad Pablo De Olavide), Gomez, Randy (Honda Research Institute Japan Co., Ltd) |
Keywords: Interaction Kinesics, Programming by Demonstration, Non-verbal Cues and Expressiveness
Abstract: Motion is an important modality for human-robot interaction. Besides being a fundamental component for carrying out tasks, motion also allows a robot to express intentions and expressions. In this paper, we focus on a tabletop robot in which motion, among other modalities, is used to convey expressions. The robot incorporates a set of pre-programmed motion animations that show different expressions with different intensities. These have been created by designers with expertise in animation. However, these animations are discrete open-loop macro actions. The objective of the paper is to analyze whether these examples can be used as demonstrations and combined by the robot to express additional intensities/expressions, or to shape the motion while performing additional tasks. The challenges are the representation space used and the scarce number of examples. The paper compares three different learning from demonstration approaches for the task at hand. A user study is presented to evaluate the resulting motions learnt from the demonstrations.
|
|
10:45-11:00, Paper TuAT2.2 | |
A Common Social Distance Scale for Robots and Humans |
Banks, Jaime (Texas Tech University), Edwards, Autumn (Western Michigan University) |
Keywords: Robot Companions and Social Robots, Evaluation Methods and New Methodologies, Embodiment, Empathy and Intersubjectivity
Abstract: From keeping robots as in-home helpers to banning their presence or functions, a person’s willingness to engage in variably intimate interactions is a signal of social distance: the degree of felt understanding of and intimacy with an individual or group that characterizes pre-social and social connections. To date, social distance has been examined through surrogate metrics not actually representing the construct (e.g., self-disclosure or physical proximity). To address this gap between operations and measurement, this project details a four-stage social distance scale development project, inclusive of systematic item-pool generation, candidate item ratings for laypersons thinking about social distance, testing of candidate items via scalogram and initial validity analyses, and final testing for cumulative structure and predictive validity. The final metric yields a 15-item (18, counting applications with a ‘none’ option), three-dimensional scale for physical distance, relational distance, and conversational distance.
|
|
11:00-11:15, Paper TuAT2.3 | |
Transparent Robot Behavior Using Augmented Reality in Close Human-Robot Interaction |
Bolano, Gabriele (FZI Forschungszentrum Informatik), Juelg, Christian (FZI Forschungszentrum Informatik), Roennau, Arne (FZI Forschungszentrum Informatik, Karlsruhe), Dillmann, Rüdiger (FZI - Forschungszentrum Informatik - Karlsruhe) |
Keywords: HRI and Collaboration in Manufacturing Environments, Novel Interfaces and Interaction Modalities, Motion Planning and Navigation in Human-Centered Environments
Abstract: Most robots repeat their motions precisely and consistently, without changes. Nowadays, however, there are also robots able to dynamically change their motion and plan according to the people and environment that surround them. Furthermore, they are able to interact with humans and cooperate with them. With no information about the robot's targets and intentions, the user feels uncomfortable even with a safe robot. In close human-robot collaboration, it is very important to enable the user to understand the robot's intentions in a quick and intuitive way. In this work we have developed a system that uses augmented reality to project useful information directly into the workspace. The robot intuitively shows its planned motion and task state. The AR module interacts with a vision system in order to display the changes in the workspace in a dynamic way. The representation of information about possible collisions and changes of plan allows the human to have a more comfortable and efficient interaction with the robot. The system is evaluated in different setups.
|
|
11:15-11:30, Paper TuAT2.4 | |
Your Robot Is Watching: Using Surface Cues to Evaluate the Trustworthiness of Human Actions |
Surendran, Vidullan (Pennsylvania State University), Wagner, Alan Richard (Penn State University) |
Keywords: Social Intelligence for Robots, Detecting and Understanding Human Activity, Non-verbal Cues and Expressiveness
Abstract: A number of important human-robot applications demand trust. Although a great deal of research has examined how and why people trust robots, less work has explored how robots might decide whether to trust humans. Surface cues are perceptual clues that provide hints as to a person's intent and are predictive of behavior. This paper proposes and evaluates a model for recognizing trust surface cues by a robot and predicting whether a person's behavior is deceitful in the context of a trust game. The model was tested in simulation and on a physical robot that plays an interactive card game. A human study was conducted in which subjects played the game against a simulation, the robot, and a human opponent. Video data was hand-coded by two coders with an inter-rater reliability of 0.41 based on Levenshtein distance. It was found that the model matched or outperformed the human coders on 50% of the subjects. Overall, this paper contributes a method that may begin to allow robots to evaluate the surface cues generated by a person to determine whether or not to trust them.
|
|
11:30-11:45, Paper TuAT2.5 | |
Spatially Situated End-User Robot Programming in Augmented Reality |
Kapinus, Michal (Brno University of Technology, Faculty of Information Technology), Beran, Vitezslav (Brno University of Technology), Materna, Zdenek (Faculty of Information Technology, Brno University of Technology), Bambusek, Daniel (Brno University of Technology, Faculty of Information Technology) |
Keywords: Novel Interfaces and Interaction Modalities, HRI and Collaboration in Manufacturing Environments, Human Factors and Ergonomics
Abstract: Nowadays, industrial robots are programmed using proprietary tools developed by the robot manufacturer. A skilled robot programmer is needed to create even a task as simple as picking up a well-known object and putting it somewhere else. In contrast, in everyday life people use end-user programming to make different electronic devices work in the expected manner, without even noticing that they are actually programming. We propose an augmented reality-enabled end-user programming system that allows regular shop-floor workers to program industrial robotic tasks. The user interface prototype for this system was evaluated in a user study with 7 participants with respect to usability, mental workload and user experience.
|
|
11:45-12:00, Paper TuAT2.6 | |
Human-Robot Interaction through Fingertip Haptic Devices for Cooperative Manipulation Tasks |
Musić, Selma (Technische Universität München), Prattichizzo, Domenico (University of Siena), Hirche, Sandra (Technische Universität München) |
Keywords: Novel Interfaces and Interaction Modalities, Degrees of Autonomy and Teleoperation, Cooperation and Collaboration in Human-Robot Teams
Abstract: Teleoperation of multi-robot systems, e.g. dual manipulators, in cooperative manipulation tasks requires haptic feedback of multi-contact interaction forces. Classical haptic devices restrict the workspace of the human operator and provide only one contact point. An alternative solution is to enable the operator to command the robot system via free-hand motions, which extends the workspace of the human. In such a setting, multi-contact haptic feedback may be provided to the human through multiple wearable haptic devices, e.g. fingertip devices that display forces on the human fingertips. In this paper we evaluate the benefit of using wearable haptic fingertip devices to interact with a bimanual robot setup in a pick-and-place manipulation task. We show that haptic feedback through wearable devices improves task performance compared to the base condition of no haptic feedback. Therefore, wearable haptic devices are a promising interface for guidance of multi-robot manipulation systems.
|
|
TuAT3 |
Room T3 |
Social Robots I |
Regular Session |
Chair: Cabibihan, John-John | Qatar University |
Co-Chair: Deshmukh, Amol | University of Glasgow |
|
10:30-10:45, Paper TuAT3.1 | |
Social and Entertainment Gratifications of Videogame Play Comparing Robot, AI, and Human Partners |
Bowman, Nick (Texas Tech University), Banks, Jaime (Texas Tech University) |
Keywords: Robots in art and entertainment, Robot Companions and Social Robots, Non-verbal Cues and Expressiveness
Abstract: As social robots’ and AI agents’ roles are becoming more diverse, those machines increasingly function as sociable partners. This trend raises questions about whether social gaming gratifications known to emerge in human-human co-play may (not) also manifest in human-machine co-play. In the present study, we examined social outcomes of playing a videogame with a human partner as compared to an ostensible social robot or AI (i.e., computer-controlled player) partner. Participants (N = 103) were randomly assigned to three experimental conditions in which they played a cooperative video game with either a human, an embodied robot, or a non-embodied AI. Results indicated that few statistically significant or meaningful differences existed between any of the partner types on perceived closeness with the partner, relatedness need satisfaction, or entertainment outcomes. However, qualitative data suggested that human and robot partners were both seen as more sociable, while AI partners were seen as more functional.
|
|
10:45-11:00, Paper TuAT3.2 | |
The Influence of Emotions on Time Perception in a Cognitive System for Social Robotics |
Cominelli, Lorenzo (E. Piaggio Research Center), Garofalo, Roberto (E. Piaggio Research Center), De Rossi, Danilo (University of Pisa) |
Keywords: Computational Architectures, Cognitive Skills and Mental Models, Social Intelligence for Robots
Abstract: In this paper, we discuss some evidence provided by neuroscience and psychology studies on human time perception, in terms of its representation and its psychological distortion due to emotional state variations. We then propose a novel model inspired by these recent findings to be applied in social robotics control architectures, with specific reference to an existing and already tested bio-inspired cognitive architecture called SEAI (Social Emotional Artificial Intelligence). A hypothesis on how to represent the influence of emotional state on time perception in SEAI is presented, and the consequent potential of the system with this integrated feature is discussed.
|
|
11:00-11:15, Paper TuAT3.3 | |
Shakespeare and Robots: Participatory Performance Art for Older Adults |
Greer, Julienne (University of Texas at Arlington), Doelling, Kris (Previously University of Texas at Arlington), Xu, Ling (University of Texas at Arlington, School of Social Work), Fields, Noelle (University of Texas at Arlington) |
Keywords: Creating Human-Robot Relationships, Robots in Education, Therapy and Rehabilitation, Embodiment, Empathy and Intersubjectivity
Abstract: Theatre arts, social work, and engineering researchers investigated the therapeutic benefits of an interdisciplinary multi-modal intervention with older adults using a social robotic platform in an independent living facility. This pilot study incorporated Shakespearean text and the social robot, NAO, performing and concurrently encouraging the older adult to perform as a function of a participatory performance arts model. The findings of this human-robot study include a reduction in depression and an increase in social engagement with the robot in the older adults who participated in the intervention.
|
|
11:15-11:30, Paper TuAT3.4 | |
Recognition of Aggressive Interactions of Children Toward Robotic Toys |
Alhaddad, Ahmad Yaser (Qatar University), Cabibihan, John-John (Qatar University), Bonarini, Andrea (Politecnico Di Milano) |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Innovative Robot Designs
Abstract: Social robots are now being considered to be a part of the therapy of children with autism. During the interactions, some aggressive behaviors could lead to harmful scenarios. The ability of a social robot to detect such behaviors and react to intervene or to notify the therapist would improve the outcomes of therapy and prevent any potential harm toward another person or to the robot. In this study, we investigate the feasibility of an artificial neural network in classifying 6 interaction behaviors between a child and a small robotic toy. The behaviors were: hit, shake, throw, pickup, drop, and no interaction or idle. Due to the ease of acquiring data from adult participants, a model was developed based on adults’ data and was evaluated with children’s data. The developed model was able to achieve promising results based on the accuracy (i.e. 80%), classification report (i.e. overall F1-score = 80%), and confusion matrix. The findings highlight the possibility of characterizing children’s negative interactions with robotic toys to improve safety.
|
|
11:30-11:45, Paper TuAT3.5 | |
The Power to Persuade: A Study of Social Power in Human-Robot Interaction |
Hashemian, Mojgan (INESC-ID), Paiva, Ana (INESC-ID and Instituto Superior Técnico, Technical University of Lisbon), Mascarenhas, Samuel (INESC-ID / Instituto Superior Técnico, University of Lisbon), Santos, Pedro A. (Instituto Superior Técnico), Prada, Rui (INESC-ID, Instituto Superior Técnico, University of Lisbon) |
Keywords: Applications of Social Robots, Robot Companions and Social Robots, Social Intelligence for Robots
Abstract: Recent advances in Social Robotics raise the question of whether a social robot can be used as a persuasive agent. To date, a body of work has addressed this research question using various approaches, ranging from the use of non-verbal behavior to the exploration of different embodiment characteristics. In this paper, we investigate the role of social power in making social robots more persuasive. Social power is defined as one's ability to influence another to do something which s/he would not do without the presence of such power. Different theories classify alternative ways to achieve social power, such as providing a reward, using coercion, or acting as an expert. In this work, we explored two types of persuasive strategies that are based on social power (specifically Reward and Expertise) and created two social robots that would employ such strategies. To examine the effectiveness of these strategies we performed a user study with 51 participants using two social robots in an adversarial setting in which both robots try to persuade the user on a concrete choice. The results show that even though each of the strategies caused the robots to be perceived differently in terms of their competence and warmth, both were similarly persuasive.
|
|
11:45-12:00, Paper TuAT3.6 | |
Eyes on You: Field Study of Robot Vendor Using Human-Like Eye Component “Akagachi” |
Hayashi, Kotaro (Toyohashi University of Technology), Toshimitsu, Yasunori (MIT) |
Keywords: Innovative Robot Designs, Robot Companions and Social Robots, User-centered Design of Robots
Abstract: Eye gaze is an important non-verbal behavior for communication robots as it serves as the onset of communication. Existing communication robots have various kinds of eyes because design choices for an appropriate eye have yet to be determined, so many robots are designed on the basis of individual designers’ ideas. Thus, this study focuses on human-like eye gaze in a real environment. We developed a human-like eye gaze component called Akagachi for various robots and conducted an observational field study by implementing it on a vendor robot called Reika. The field study took place in a theme park, where Reika sells soft-serve ice cream in a food stall, and we analyzed the behaviors of 978 visitors. Our results indicate that Reika elicits significantly more interaction from people with eye gaze than without it.
|
|
TuAT4 |
Room T4 |
Tele-Operation and Autonomous Robots |
Regular Session |
Chair: Fitter, Naomi T. | University of Southern California |
Co-Chair: Ikeda, Tetsushi | Hiroshima City University |
|
10:30-10:45, Paper TuAT4.1 | |
Haptic Directional Information for Spatial Exploration |
Ghosh, Ayan (The University of Sheffield), Penders, Jacques (Sheffield Hallam University), Soranzo, Alessandro (Sheffield Hallam University) |
Keywords: Cognitive and Sensorimotor Development, Novel Interfaces and Interaction Modalities, Non-verbal Cues and Expressiveness
Abstract: This paper investigates the efficacy of a tactile and haptic human-robot interface developed and trialled to aid navigation in poor visibility and audibility conditions, which occur, for example, in search and rescue. The newly developed interface generates haptic directional information to support human navigation when other senses are not, or only partially, accessible. The central question of this paper was whether humans are able to interpret haptic signals as denoting different spatial directions. The effectiveness of the haptic signals was measured in a novel experimental set-up. Participants were given a stick (replicating the robot interface) and asked to reproduce the specific spatial information denoted by each of the haptic signals. Task performance was examined quantitatively, and the results show that the haptic signals can denote distinguishable spatial directions, supporting the hypothesis that tactile and haptic information can be effectively used to aid human navigation. Implications for robotics applications of the newly developed interface are discussed.
|
|
10:45-11:00, Paper TuAT4.2 | |
User Interface Tradeoffs for Remote Deictic Gesturing |
Fitter, Naomi T. (University of Southern California), Joung, Youngseok (University of Southern California), Hu, Zijian (University of Southern California), Demeter, Marton (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Non-verbal Cues and Expressiveness, Social Presence for Robots and Virtual Humans, Assistive Robotics
Abstract: Telepresence robots can help to connect people by providing videoconferencing and navigation abilities in far-away environments. Despite this potential, current commercial telepresence robots lack certain nonverbal expressive abilities that are important for permitting the operator to communicate effectively in the remote environment. To help improve the utility of telepresence robots, we added an expressive, non-manipulating arm to our custom telepresence robot system and developed three user interfaces to control deictic gesturing by the arm: onscreen, dial-based, and skeleton tracking methods. A usability study helped us evaluate user presence feelings, task load, preferences, and opinions while performing deictic gestures with the robot arm during a mock order packing task. The majority of participants preferred the dial-based method of controlling the robot, and survey responses revealed differences in physical demand and effort level across user interfaces. These results can guide robotics researchers interested in extending the nonverbal communication abilities of telepresence robots.
|
|
11:00-11:15, Paper TuAT4.3 | |
Improving Robot Transparency: An Investigation with Mobile Augmented Reality |
Rotsidis, Alexandros (University of Bath), Theodorou, Andreas (University of Bath), Bryson, Joanna (University of Bath), Wortham, Robert Hale (University of Bath) |
Keywords: Novel Interfaces and Interaction Modalities, Creating Human-Robot Relationships, Monitoring of Behaviour and Internal States of Humans
Abstract: Autonomous robots can be difficult to understand by their developers, let alone by end users. Yet, as they become increasingly integral parts of our societies, the need for affordable, easy-to-use tools to provide transparency grows. The rise of the smartphone and the improvements in mobile computing performance have gradually allowed Augmented Reality (AR) to become more mobile and affordable. In this paper we review relevant robot systems architectures and propose a new software tool to provide robot transparency through the use of AR technology. Our new tool, ABOD3-AR, provides real-time graphical visualisation and debugging of a robot’s goals and priorities as a means for both designers and end users to gain a better mental model of the internal state and decision-making processes taking place within a robot. We also report on our on-going research programme and planned studies to further understand the effects of transparency on naive users and experts.
|
|
11:15-11:30, Paper TuAT4.4 | |
Investigation of the Driver's Seat That Displays Future Vehicle Motion |
Ishii, Yuki (Hiroshima City University), Ikeda, Tetsushi (Hiroshima City University), Kobayashi, Toru (Hiroshima City University), Kato, Yumiko (St. Marianna University School of Medicine), Utsumi, Akira (ATR Intelligent Robotics and Communication Labs), Nagasawa, Isamu (SUBARU Co., LTD), Iwaki, Satoshi (Hiroshima City University) |
Keywords: Novel Interfaces and Interaction Modalities, Virtual and Augmented Tele-presence Environments, Multi-modal Situation Awareness and Spatial Cognition
Abstract: Automated driving reduces the burden on the driver; however, it also makes it difficult for the driver to understand the current situation and predict the future movement of the vehicle. When the acceleration due to automated driving occurs without future prediction, the driver's anxiety and discomfort are increased compared to manual driving. To facilitate the driver's prediction of the future behavior of the vehicle, this paper aims to design and evaluate a haptic interface that actuates the vehicle seat. Our system displays to the driver the movement of the vehicle a few seconds into the future, which allows the driver to make predictions and preparations. Using a driving simulator, we compared conditions in which the movement of the car was displayed in advance for different lengths of time. The drivers' subjective evaluations showed that the predictability of the behavior of the vehicle was significantly increased compared to the case without the display. The experiment also showed that comfort significantly decreased if the preceding display was too early.
|
|
11:30-11:45, Paper TuAT4.5 | |
Combining Electromyography and Fiducial Marker Based Tracking for Intuitive Telemanipulation with a Robot Arm Hand System |
Dwivedi, Anany (University of Auckland), Gorjup, Gal (The University of Auckland), Kwon, Yongje (The University of Auckland), Liarokapis, Minas (The University of Auckland) |
Keywords: Degrees of Autonomy and Teleoperation, Machine Learning and Adaptation
Abstract: Teleoperation and telemanipulation have, since the early years of robotics, found use in a wide range of applications, including exploration, maintenance, and response in remote or hazardous environments, healthcare, and education settings. As the capabilities of robot manipulators grow, so does the control complexity, and the remote execution of intricate manipulation tasks remains challenging for the user. This paper proposes an intuitive telemanipulation framework based on electromyography (EMG) and fiducial marker based tracking that can be used with a dexterous robot arm hand system. The EMG subsystem captures the myoelectric activations of the user during the execution of specific hand postures and gestures and translates them into the desired grasp type for the robot hand. The pose of the tracked fiducial marker is used as a task-space goal for the robot end-effector. The system performance is experimentally validated in a remote operation setting, where the system successfully performs a telemanipulation task.
|
|
11:45-12:00, Paper TuAT4.6 | |
Humanoid Co-Workers: How Is It Like to Work with a Robot? |
Vishwanath, Ajay (Nanyang Technological University), Singh, Aalind (Institute for Media Innovation), Chua, Yi Han Victoria (Nanyang Technological University), Dauwels, Justin (Nanyang Technological University), Thalmann, Nadia Magnenat (Nanyang Technological University) |
Keywords: Applications of Social Robots, Robot Companions and Social Robots, Philosophical Issues in Human-Robot Coexistence
Abstract: Human-robot interaction in corporate workplaces is a research area that remains largely unexplored. In this paper, we present the results and analysis of a social experiment we conducted by introducing a humanoid robot (Nadine) into a collaborative social workplace. The humanoid's primary task was to function as a receptionist and provide general assistance to the customers. Moreover, the employees who interacted with Nadine were given over a month to get used to her capabilities, after which feedback was collected from the staff regarding the influence on productivity, the affect experienced during interaction, and their views on social robots assisting with regular tasks. Our results show that the use of social robots for assisting with normal day-to-day tasks is received quite positively by co-workers and that, in the near future, more capable humanoid social robots could be used in workplaces to assist with menial tasks. Finally, we posit that surveys such as ours could result in constructive opinions based on technological awareness, rather than opinions stemming from media-driven fears about the threats of technology.
|
|
TuAS1 |
Room T5 |
Transparency and Trust in Human Robot Interaction |
Special Session |
Chair: Rossi, Silvia | Universita' Di Napoli Federico II |
Co-Chair: Rossi, Alessandra | University of Hertfordshire |
|
10:30-10:45, Paper TuAS1.1 | |
Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features (I) |
Wang, Xinzhi (Tsinghua University), Yuan, Shengcheng (LazyComposer Inc., Beijing), Zhang, Hui (Tsinghua University), Sycara, Katia (Carnegie Mellon University), Lewis, Mike (Univ of Pittsburgh) |
Keywords: Linguistic Communication and Dialogue, Machine Learning and Adaptation, Interaction with Believable Characters
Abstract: In recent years, there has been increasing interest in transparency in Deep Neural Networks. Most of the work on transparency has been done for image classification. In this paper, we report on work on transparency in Deep Reinforcement Learning Networks (DRLNs). Such networks have been extremely successful in learning action control in Atari games. In this paper, we focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies. Successful generation of verbal explanations would allow better understanding by people (e.g., users, debuggers) of the inner workings of DRLNs, which could ultimately increase trust in these systems. We present a generation model which consists of three parts: an encoder for feature extraction, an attention structure for selecting features from the output of the encoder, and a decoder for generating the explanation in natural language. Four variants of the attention structure - full attention, global attention, adaptive attention and object attention - are designed and compared. The adaptive attention structure performs the best among all the variants, even though the object attention structure is given additional information on object locations. Additionally, our experimental results showed that the proposed encoder outperforms two baseline encoders (ResNet and VGG) in its capability of distinguishing game state images.
|
|
10:45-11:00, Paper TuAS1.2 | |
Coherent and Incoherent Robot Emotional Behavior for Humorous and Engaging Recommendations (I) |
Rossi, Silvia (Universita' Di Napoli Federico II), Cimmino, Teresa (University of Naples Federico II), Matarese, Marco (University of Naples Federico II), Raiano, Mario (University of Naples Federico II) |
Keywords: Non-verbal Cues and Expressiveness, Applications of Social Robots, Motivations and Emotions in Robotics
Abstract: Social robots are effective in influencing and motivating human behavior. To gain a deeper understanding of how a robot's emotional non-verbal behaviors might shape the human perception of the interaction while it provides recommendations, we conducted a between-subjects experimental study using a humanoid robot in a movie recommendation scenario. This experiment aims at evaluating whether an incoherent use of emotional behavior, with respect to the presented content, may produce a sort of humorous effect that positively affects the user's perception of the recommendation. We evaluated, using an off-the-shelf solution, the engagement and emotions shown by the users. Our results showed that a robot's incoherent behavior does not distract the user, but increases his/her engagement, producing a positive emotional response. This difference is significant in the case of female subjects and depends on the considered emotions.
|
|
11:00-11:15, Paper TuAS1.3 | |
Getting to Know Kaspar: Effects of People's Awareness of a Robot's Capabilities on Their Trust in the Robot (I) |
Rossi, Alessandra (University of Hertfordshire), Moros, Sílvia (University of Hertfordshire), Dautenhahn, Kerstin (University of Waterloo), Koay, Kheng Lee (University of Hertfordshire), Walters, Michael Leonard (University of Hertfordshire) |
Keywords: Robot Companions and Social Robots, Applications of Social Robots, Creating Human-Robot Relationships
Abstract: In this work we investigate how humans' awareness of a social robot's capabilities affects their trust in the robot. We present a user study that relates knowledge at different quality levels to participants' ratings of trust. Primary school pupils were asked to rate their trust in the robot after three types of interactions: a video demonstration, a live interaction, and a programming task. The study revealed that the pupils' trust is not significantly affected across the different domains after each session. There did not appear to be significant differences in trust tendencies for the different experiences either; however, our results suggest that human users trust a robot more the more awareness of the robot they have.
|
|
11:15-11:30, Paper TuAS1.4 | |
Privacy First: Designing Responsible and Inclusive Social Robot Applications for in the Wild Studies (I) |
Tonkin, Meg (University of Technology Sydney), Vitale, Jonathan (University of Technology Sydney), Herse, Sarita (University of Technology Sydney), Raza, Syed Ali (University of Technology, Sydney), Madhisetty, Srinivas (University of Technology Sydney), Kang, Le (University of Technology Sydney), Vu, The Duc (University of Technology Sydney), Johnston, Benjamin (University of Technology, Sydney), Williams, Mary-Anne (University of Technology Sydney) |
Keywords: User-centered Design of Robots, Applications of Social Robots, Novel Interfaces and Interaction Modalities
Abstract: Deploying social robot applications in public spaces for conducting in-the-wild studies is a significant challenge but critical to the advancement of social robotics. Real-world environments are complex, dynamic, and uncertain. Human-robot interactions can be unstructured and unanticipated. In addition, when the robot is intended to be a shared public resource, management issues such as user access and user privacy arise, leading to design choices that can impact users’ trust and the adoption of the designed system. In this paper we propose a user registration and login system for a social robot and report on people's preferences when registering their personal details with the robot to access services. This study is the first iteration of a larger body of work investigating potential use cases for the Pepper social robot at a government-managed centre for startups and innovation. We prototyped and deployed a system for user registration with the robot, which gives users control over registering for and accessing services with either face recognition technology or a QR code. The QR code played a critical role in increasing the number of users adopting the technology. We discuss the need to develop social robot applications that responsibly adhere to privacy principles, are inclusive, and cater for a broad spectrum of people.
|
|
11:30-11:45, Paper TuAS1.5 | |
Trust Repair in Human-Swarm Teams (I) |
Liu, Rui (Kent State University), Cai, Zekun (University of Pittsburgh), Lewis, Mike (Univ of Pittsburgh), Lyons, Joseph (AFRL), Sycara, Katia (Carnegie Mellon University) |
Keywords: Cognitive Skills and Mental Models, Creating Human-Robot Relationships, Cognitive and Sensorimotor Development
Abstract: Swarm robots are coordinated via simple control laws to generate emergent behaviors such as flocking, rendezvous, and deployment. Human-swarm teaming has been widely proposed for scenarios such as human-supervised teams of unmanned aerial vehicles (UAVs) for disaster rescue, UAV and ground vehicle cooperation for building security, and soldier-UAV teaming in combat. Effective cooperation requires an appropriate level of trust between a human and a swarm. When a UAV swarm is deployed in a real-world environment, its performance is subject to real-world factors, such as system reliability and wind disturbances. Degraded performance of a robot can cause undesired swarm behaviors, decreasing human trust. This loss of trust, in turn, can trigger human intervention in UAVs' task executions, decreasing cooperation effectiveness if inappropriate. Therefore, to promote effective cooperation we propose and test a trust-repairing method (Trust-repair) that restores performance and human trust in the swarm to an appropriate level by correcting undesired swarm behaviors. Faulty swarms caused by both external and internal factors were simulated to evaluate the performance of the Trust-repair algorithm in repairing swarm performance and restoring human trust. Results show that Trust-repair is effective in restoring trust to a level intermediate between normal and faulty conditions.
|
|
11:45-12:00, Paper TuAS1.6 | |
“You Are Doing so Great!” – the Effect of a Robot’s Interaction Style on Self-Efficacy in HRI (I) |
Zafari, Setareh (Vienna University of Technology), Schwaninger, Isabel (TU Wien), Hirschmanner, Matthias (TU Wien), Schmidbauer, Christina (Vienna University of Technology), Weiss, Astrid (Vienna Univerity of Technology), Koeszegi, Sabine Theresia (Vienna University of Technology) |
Keywords: Cognitive Skills and Mental Models, Creating Human-Robot Relationships, Linguistic Communication and Dialogue
Abstract: People form mental models about robots' behavior and intentions as they interact with them. The aim of this paper is to evaluate the effect of two different interaction styles on self-efficacy in human-robot interaction (HRI), people's perception of the robot, and task engagement. We conducted a user study in which a social robot assists people while building a house of cards. Data from our experimental study revealed that people engaged longer in the task while interacting with a robot that provides person-related feedback than with a robot that gives no person- or task-related feedback. Moreover, people interacting with a robot with a person-oriented interaction style reported higher self-efficacy in HRI, perceived higher Agreeableness of the robot, and found the interaction less frustrating, as compared to a robot with a task-oriented interaction style. This suggests that a robot's interaction style can be considered a key factor for increasing people's perceived self-efficacy in HRI, which is essential for establishing trust and enabling Human-Robot Collaboration.
|
|
TuBT1 |
Room T8 |
Robots in Education |
Regular Session |
Chair: Robins, Ben | University of Hertfordshire |
Co-Chair: Johal, Wafa | École Polytechnique Fédérale De Lausanne |
|
13:00-13:15, Paper TuBT1.1 | |
A Participatory Design Process of a Robotic Tutor of Assistive Sign Language for Children with Autism |
Axelsson, Minja (Aalto University), Racca, Mattia (Aalto University), Weir, Daryl (Futurice Oy), Kyrki, Ville (Aalto University) |
Keywords: Assistive Robotics, User-centered Design of Robots
Abstract: We present the participatory design process of a robotic tutor of assistive sign language for children with autism spectrum disorder (ASD). Robots have been used in autism therapy, and to teach sign language to neurotypical children. The application of teaching assistive sign language, the most common form of assistive and augmentative communication used by people with ASD, is novel. The robot's function is to prompt children to imitate the assistive signs that it performs. The robot was therefore co-designed to appeal to children with ASD, taking into account the characteristics of ASD during the design process: impaired language and communication, impaired social behavior, and narrow flexibility in daily activities. To accommodate these characteristics, a multidisciplinary team defined design guidelines specific to robots for children with ASD, which were followed in the participatory design process. With a pilot study where the robot prompted children to imitate nine assistive signs, we found support for the effectiveness of the design. The children successfully imitated the robot and kept their focus, as measured by their eye gaze, on it. Children and their companions reported positive experiences with the robot, and companions evaluated it as potentially useful, suggesting that robotic devices could be used to teach assistive sign language to children with ASD.
|
|
13:15-13:30, Paper TuBT1.2 | |
Robot Analytics: What Do Human-Robot Interaction Traces Tell Us about Learning? |
Nasir, Jauwairia (EPFL), Norman, Utku (Swiss Federal Institute of Technology in Lausanne (EPFL)), Johal, Wafa (École Polytechnique Fédérale De Lausanne), Olsen, Jennifer (EPFL), Shahmoradi, Sina (EPFL), Dillenbourg, Pierre (EPFL) |
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: In this paper, we propose that the data generated by educational robots can be better used by applying learning analytics methods and techniques, which can lead to a deeper understanding of the learners' apprehension and behavior as well as refined guidelines for roboticists and improved interventions by the teachers. As a step towards this, we put forward analyzing behavior and task performance at team and/or individual levels by coupling robot data with data from conventional methods of assessment through quizzes. Classifying learners/teams in the behavioral feature space with respect to task performance gives insight into the behavior patterns relevant for high performance, which can be backed by feature ranking. As a use case, we present an open-ended learning activity using tangible robots in a classroom-level setting. The pilot study, spanning approximately an hour, was conducted with 25 children aged 11-12, in teams of two. A linear separation is observed between the high- and low-performing teams, where two of the behavioral features, namely the number of distinct attempts and the number of visits to the destination, are found to be important. Although the pilot study in its current form has limitations, it contributes to highlighting the potential of the use of learning analytics in educational robotics.
|
|
13:30-13:45, Paper TuBT1.3 | |
Improv with Robots: Creativity, Inspiration, Co-Performance |
Rond, Jesse (Oregon State University), Sanchez, Alan (Oregon State University), Berger, Jaden (Oregon State University), Knight, Heather (Oregon State University) |
Keywords: Art pieces supported by robotics, Non-verbal Cues and Expressiveness, Storytelling in HRI
Abstract: Improvisational actors are adept at creative exploration within a set of boundaries. These boundaries come from each scene having “games” that establish the rules of play. In this paper, we introduce a game that allows an expressive motion robot to collaboratively develop a narrative with an improviser. When testing this game on eight improv performers, our team explored two research questions: (1) Can a simple robot be a creative partner to a human improviser, and (2) Can improvisers expand our understanding of robot expressive motion? After conducting 16 scenes and 40 motion demonstrations, we found that performers viewed our robot as a supportive teammate who positively inspired the scene's direction. The improvisers also provided insightful perspectives on robot motion, which led us to create a movement categorization scheme based on their various interpretations. We discuss our lessons learned, show the benefits of merging social robotics with improvisational theater, and hope this will encourage further exploration of this cross-disciplinary intersection.
|
|
13:45-14:00, Paper TuBT1.4 | |
CoWriting Kazakh: Transitioning to a New Latin Script Using Social Robots |
Kim, Anton (Nazarbayev University), Omarova, Meruyert (Nazarbayev University), Zhaksylyk, Adil (Nazarbayev University), Asselborn, Thibault (EPFL), Johal, Wafa (École Polytechnique Fédérale De Lausanne), Dillenbourg, Pierre (EPFL), Sandygulova, Anara (Nazarbayev University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots, Applications of Social Robots
Abstract: In the Republic of Kazakhstan, the transition from the Cyrillic to the Latin alphabet raises the challenge of teaching the whole population to write the new script. This paper presents a CoWriting Kazakh system that aims to implement an autonomous behavior of a social robot that would assist children in learning the new script. Considering the fact that the current generation of primary school children have to be fluent in both Kazakh scripts, this exploratory study aims to investigate which learning approach provides the better effect. Participants were asked to teach the humanoid robot NAO how to write Kazakh words using one of the scripts, Latin vs Cyrillic. We hypothesize that it is more effective when a child mentally converts the word to Latin, in comparison to having the robot perform the conversion itself. The findings reject this hypothesis, but further research is needed, as it is suggested that the way the pre-test was performed might have influenced the obtained results.
|
|
14:00-14:15, Paper TuBT1.5 | |
Design and Perception of a Social Robot to Promote Hand Washing among Children in a Rural Indian School |
Radhakrishnan, Unnikrishnan (Amrita University), Deshmukh, Amol (University of Glasgow), Ramesh, Shanker (AMMACHI Labs, Amrita Vishwa Vidyapeetham, Amritapuri, India), K Babu, Sooraj (AMMACHI Labs, Amrita Vishwa Vidyapeetham, Amritapuri, India), A, Parameswari (Ammachilabs, Amrita Vishwa Vidyapeetham, Amritapuri, India), Rao R, Bhavani (Amrita Vishwa Vidyapeetham University) |
Keywords: Innovative Robot Designs, Applications of Social Robots, User-centered Design of Robots
Abstract: We introduce “Pepe”, a social robot for encouraging proper handwashing behaviour among children. We discuss the motivation, the robot design and a pilot study conducted at a primary school located in the Western Ghats mountain ranges of Southern India with a significant presence of indigenous tribes. The study included individual & group interviews with a randomly selected sample of 45 children to gauge their perception of the Pepe robot across various dimensions including gender, animacy & technology acceptance. We also discuss some HRI implications for running user studies with rural children.
|
|
14:15-14:30, Paper TuBT1.6 | |
The Effect of Interaction and Design Participation on Teenagers' Attitudes towards Social Robots |
Björling, Elin (University of Washington), Xu, Wendy M. (University of Washington), Cabrera, Maria Eugenia (University of Washington), Cakmak, Maya (University of Washington) |
Keywords: User-centered Design of Robots, Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots
Abstract: Understanding people's attitudes towards robots and how those attitudes are affected by exposure to robots is essential to the effective design and development of social robots. Although researchers have been studying attitudes towards robots among adults and even children for more than a decade, little work has explored attitudes among teens, a highly vulnerable population that presents unique opportunities and challenges for social robots. Our work aims to close this gap. In this paper we present findings from several participatory robot interaction and design sessions with 136 teenagers who completed a modified version of the Negative Attitudes Towards Robots Scale (NARS) before participation in a robot interaction. Our data reveal that most teens are 1) highly optimistic about the helpfulness of robots, 2) do not feel nervous talking with a robot, but also 3) do not trust a robot with their data. Ninety teens also completed a post-interaction survey and reported a significant change in the emotional attitudes subscale of the NARS. We discuss the implications of our findings for the design of social robots for teens.
|
|
TuBT2 |
Room T2 |
Human Centred Robot Design |
Regular Session |
Chair: Kato, Shohei | Nagoya Institute of Technology |
Co-Chair: Kim, Joonhwan | Korea Advanced Institute of Science and Technology (KAIST) |
|
13:00-13:15, Paper TuBT2.1 | |
Unconventional Uses of Structural Compliance in Adaptive Hands |
Chang, Che-Ming (University of Auckland), Gerez, Lucas (The University of Auckland), Elangovan, Nathan (University of Auckland), Zisimatos, Agisilaos (National Technical University of Athens), Liarokapis, Minas (The University of Auckland) |
Keywords: Evaluation Methods and New Methodologies, Innovative Robot Designs
Abstract: Adaptive robot hands are typically created by introducing structural compliance either in their joints (e.g., implementation of flexure joints) or in their finger-pads. In this paper, we present a series of alternative uses of structural compliance for the development of simple, adaptive, compliant and/or under-actuated robot grippers and hands that can efficiently and robustly execute a variety of grasping and dexterous, in-hand manipulation tasks. The proposed designs utilize only one actuator per finger to control multiple degrees of freedom, and they retain the superior grasping capabilities of adaptive grasping mechanisms even under significant object pose or other environmental uncertainties. More specifically, in this work, we introduce, discuss, and evaluate: a) the concept of compliance-adjustable motions that can be predetermined by tuning the in-series compliance of the tendon routing system and by appropriately selecting the imposed tendon loads, b) a design paradigm of pre-shaped, compliant robot fingers that adapt/conform to the object geometry, and c) a hyper-adaptive finger-pad design that maximizes the area of the contact patches between the hand and the object, also maximizing grasp stability. The proposed hands use mechanical adaptability to facilitate and simplify the efficient execution of robust grasping and dexterous, in-hand manipulation tasks by design.
|
|
13:15-13:30, Paper TuBT2.2 | |
Design and Analysis of a Soft Bidirectional Bending Actuator for Human-Robot Interaction Applications |
Singh, Kumar Surjdeo (Indian Institute of Technology Madras), Thondiyath, Asokan (IIT Madras) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Innovative Robot Designs
Abstract: This paper proposes the design of a novel, soft bidirectional actuator that can improve human-robot interaction in collaborative applications. The actuator is advantageous over existing designs because it provides an additional degree of freedom for the same number of pressure inputs found in conventional designs. This significantly enlarges the workspace of the bidirectional actuator and enables higher bidirectional bending angles at much lower input pressures. It is achieved by eliminating the passive impedance offered by one side of the bending chamber in compression when the other side of the chamber is inflated. A simple kinematic model of the actuator is presented, and theoretical and finite element analyses are carried out to predict the fundamental behavior of the actuator. The results are validated through experiments using a fabricated model of the soft bidirectional bending actuator.
|
|
13:30-13:45, Paper TuBT2.3 | |
Instrumented Shoe Based Foot Clearance and Foot-To-Ground Angle Measurement System for the Gait Analysis |
Tiwari, Ashutosh (Indian Institute of Technology), Saxena, somya (PGI Chandigarh), Joshi, Deepak (Indian Institute of Technology) |
Keywords: Assistive Robotics, Cognitive and Sensorimotor Development, Cognitive Skills and Mental Models
Abstract: This paper presents a wireless gait analysis system that incorporates anatomically located infrared (IR) distance sensors on the shoe for the measurement of gait parameters such as foot-to-ground angle (FGA) and foot clearance (FC). The system has been validated against the BTS Bioengineering 3D motion capture gold standard in a gait analysis laboratory, with an FC RMSE of 6.31% of the full range and an FGA RMSE of 5.53% of the full range. The squared correlation coefficient r2 for FC and FGA is 0.970 and 0.935, respectively. The system has a sensor position adjustment mechanism with two degrees of freedom, which allows it to adapt to any foot size. It is inexpensive, simple to use, and provides accuracy on par with existing systems. It finds application in a variety of clinical domains, for example, diagnosis of neurological diseases affecting ambulation such as Parkinson’s and cerebral palsy, gait rehabilitation, and sports. The future scope of this work includes validation of the shoe with different foot sizes and walking speeds.
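As an illustration of how validation metrics like those reported above can be computed, the following sketch pairs sensor estimates with motion-capture references and derives the RMSE as a percentage of the full range together with the squared correlation coefficient. The data and variable names are synthetic stand-ins, not the authors' dataset.
```python
import numpy as np

def percent_range_rmse(measured, reference):
    """RMSE expressed as a percentage of the reference full range."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    rmse = np.sqrt(np.mean((measured - reference) ** 2))
    full_range = reference.max() - reference.min()
    return 100.0 * rmse / full_range

def squared_correlation(measured, reference):
    """Squared Pearson correlation coefficient r^2."""
    r = np.corrcoef(measured, reference)[0, 1]
    return r ** 2

# Synthetic foot-clearance example (metres); real use would pair the
# IR-sensor estimates with the motion-capture ground truth per stride.
rng = np.random.default_rng(0)
reference = rng.uniform(0.02, 0.15, size=200)          # mocap foot clearance
measured = reference + rng.normal(0, 0.004, size=200)  # IR-sensor estimate
print(percent_range_rmse(measured, reference), squared_correlation(measured, reference))
```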
|
|
13:45-14:00, Paper TuBT2.4 | |
Energy Conscious Over-Actuated Multi-Agent Payload Transport Robot |
Tallamraju, Rahul (International Institute of Information Technology, Hyderabad), Verma, Pulkit (International Institute of Information Technology), Sripada, Venkatesh (Oregon State University, Corvallis, USA), Agrawal, Shrey (International Institute of Information Technology, Hyderabad), Karlapalem, Kamalakar (IIIT-Hyderabad) |
Keywords: Innovative Robot Designs, Computational Architectures, Motion Planning and Navigation in Human-Centered Environments
Abstract: In this work, we consider a multi-wheeled payload transport system. Each of the wheels can be selectively actuated. When they are not actuated, wheels are free moving and do not consume battery power. The payload transport system is modeled as an actuated multi-agent system, with each wheel-motor pair as an agent. Kinematic and dynamic models are developed to ensure that the payload transport system moves as desired. We design optimization formulations to decide how many wheels should be active and which ones, so that the battery is conserved and the wear on the motors is reduced. The proposed multi-level control framework over the agents ensures that a near-optimal number of agents is active for the payload transport system to function. Through simulation studies, we show that our solution ensures energy-efficient operation and increases the distance traveled by the payload transport system for the same battery power. We have built the payload transport system and provide results for preliminary experimental validation.
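The abstract does not give the authors' optimization formulation; the sketch below is only a simplified, hypothetical stand-in showing the flavor of the wheel-selection problem: greedily activating the fewest wheels whose combined traction meets the payload's demand while preferring the least-worn motors.
```python
def select_active_wheels(required_force, wheel_capacity, wear):
    """Greedily pick the fewest wheels whose combined capacity meets the
    required traction force, preferring the least-worn wheels.
    A simplified stand-in for the paper's optimization formulation."""
    order = sorted(range(len(wheel_capacity)), key=lambda i: (wear[i], -wheel_capacity[i]))
    active, total = [], 0.0
    for i in order:
        if total >= required_force:
            break
        active.append(i)
        total += wheel_capacity[i]
    if total < required_force:
        raise ValueError("payload demand exceeds combined wheel capacity")
    return active

# 6-wheel example: demand of 40 N, 15 N capacity per wheel, varying wear.
print(select_active_wheels(40.0, [15.0] * 6, wear=[3, 1, 4, 0, 2, 5]))  # -> [3, 1, 4]
```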
|
|
14:00-14:15, Paper TuBT2.5 | |
Effect of Human Hand Dynamics on Haptic Rendering of Stiff Springs Using Virtual Mass Feedback |
Desai, Indrajit (Indian Institute of Technology Bombay), Gupta, Abhishek (Indian Institute of Technology, Bombay), Chakraborty, Debraj (Indian Institute of Technology Bombay) |
Keywords: Evaluation Methods and New Methodologies, Robots in Education, Therapy and Rehabilitation
Abstract: Hard surfaces are typically simulated in a haptic interface as stiff springs. Stable interaction with these surfaces using force feedback is challenging due to the discrete nature of the controller. Previous research has shown that adding virtual damping or virtual mass to the rendered surface helps to increase the stiffness of the surface that can be rendered for stable interaction. In this paper, we analyze the effect of adding virtual mass on the range of stiffness that can be stably rendered. The analysis is performed in the discrete time domain. Specifically, we study the stability of the haptic interface coupled with human hand dynamics. Stability, when the human interacts with the robot, is investigated by considering different human hand models. Our analysis shows that, when the human operator is coupled to an uncoupled stable system, an increase in the mass of a human hand decreases the maximum renderable stiffness. Moreover, an increase in human hand damping increases the stably renderable stiffness.
|
|
14:15-14:30, Paper TuBT2.6 | |
DronePick: Object Picking and Delivery Teleoperation with the Drone Controlled by a Wearable Tactile Display |
Ibrahimov, Roman (Skolkovo Institute of Technology and Science), Tsykunov, Evgeny (Skolkovo Institute of Science and Technology), Shirokun, Vladimir (Skolkovo Institute of Science and Technology), Somov, Andrey (Skolkovo Institute of Technology and Science), Tsetserukou, Dzmitry (Skolkovo Institute of Science and Technology) |
Keywords: Creating Human-Robot Relationships, Virtual and Augmented Tele-presence Environments, User-centered Design of Robots
Abstract: We report on DronePick, a teleoperation system that provides remote object picking and delivery by a human-controlled quadcopter. The main novelty of the proposed system is that the human user continuously receives visual and haptic feedback for accurate teleoperation. DronePick consists of a quadcopter equipped with a magnetic grabber, a tactile glove with a finger motion tracking sensor, a hand tracking system, and a Virtual Reality (VR) application. The human operator teleoperates the quadcopter by changing the position of the hand. Vibrotactile patterns representing the location of the remote object relative to the quadcopter are delivered to the glove, helping the operator determine when the quadcopter is right above the object. When the “pick” command is sent by clasping the hand in the glove, the quadcopter decreases its altitude and the magnetic grabber attaches to the target object. The whole scenario is simulated in VR in parallel. The air flow from the quadcopter and the relative positions of VR objects help the operator determine the exact position of the object to be picked and delivered. The experiments showed that the vibrotactile patterns were recognized by the users at high rates: an average 99% recognition rate and an average recognition time of 2.36 s. A real-life implementation of DronePick featuring object picking and delivery to a human was developed and tested.
|
|
TuBT3 |
Room T3 |
Social Robots II |
Regular Session |
Chair: Sandygulova, Anara | Nazarbayev University |
Co-Chair: Cabibihan, John-John | Qatar University |
|
13:00-13:15, Paper TuBT3.1 | |
Design of a Robotic Crib Mobile to Support Studies in the Early Detection of Cerebral Palsy: A Pilot Study |
Jamshad, Rabeya (Georgia Institute of Technology), Fry, Katelyn (Georgia Institute of Technology), Chen, Yu-ping (Georgia State University), Howard, Ayanna (Georgia Institute of Technology) |
Keywords: Robots in Education, Therapy and Rehabilitation, Detecting and Understanding Human Activity
Abstract: According to data from the Centers for Disease Control and Prevention, developmental disorders such as Autism Spectrum Disorder (ASD) and Cerebral Palsy (CP) affect nearly one in six children between the ages of 3 and 17 in the United States alone. In order to improve the quality of life for these individuals, there is increased emphasis on providing early intervention at infancy, when key developmental milestones are being achieved. This, however, requires accurate early detection of motor development delays in at-risk infants. Our research focuses on enabling early detection through the design of a robotic crib mobile that affects infant behavior. Stimuli integrated into the robotic mobile can be used to encourage certain motions, such as kicking, among infants in order to study infant motor development and identify at-risk populations. In this paper, we propose the design of such a robotic crib mobile and discuss preliminary results from deploying the mobile in the infants’ home environment during a pilot study.
|
|
13:15-13:30, Paper TuBT3.2 | |
AppGAN: Generative Adversarial Networks for Generating Robot Approach Behaviors into Small Groups of People |
Yang, Fangkai (KTH Royal Institute of Technology), Peters, Christopher (Royal Institute of Technology) |
Keywords: Social Intelligence for Robots, Machine Learning and Adaptation, Motion Planning and Navigation in Human-Centered Environments
Abstract: Robots that navigate to approach free-standing conversational groups should do so in a safe and socially acceptable manner. This is challenging since it not only requires the robot to plot trajectories that avoid collisions with members of the group, but also to do so without making those in the group feel uncomfortable, for example, by moving too close to them or approaching them from behind. Previous trajectory prediction models focus primarily on formations of walking pedestrians, and those models that do consider approach behaviours into free-standing conversational groups typically have handcrafted features and are only evaluated via simulation methods, limiting their effectiveness. In this paper, we propose AppGAN, a novel trajectory prediction model capable of generating trajectories into free-standing conversational groups trained on a dataset of safe and socially acceptable paths. We evaluate the performance of our model with state-of-the-art trajectory prediction methods on a semi-synthetic dataset. We show that our model outperforms baselines by taking advantage of the GAN framework and our novel group interaction module.
|
|
13:30-13:45, Paper TuBT3.3 | |
Effective Robot Evacuation Strategies in Emergencies |
Nayyar, Mollik (The Pennsylvania State University), Wagner, Alan Richard (Penn State University) |
Keywords: Applications of Social Robots, Assistive Robotics, Robot Companions and Social Robots
Abstract: Recent efforts in human-robot interaction research have shed some light on the impact of human-robot interactions on human decisions during emergencies. It has been shown that the presence of crowds during emergencies can influence evacuees to follow the crowd to find an exit. Research has also shown that robots can be effective in guiding humans during emergencies and can reduce this 'follow the crowd' behavior, potentially providing life-saving benefit. These findings make robot-guided evacuation methodologies an important area to explore further. In this paper we propose techniques that can be used to design effective evacuation methods. We explore the different strategies that can be employed to help evacuees find an exit sooner and avoid over-crowding to increase their chances of survival. We study two primary strategies: 1) a shepherding method and 2) a handoff method. Simulated experiments are performed to study the effectiveness of each strategy. The results show that the shepherding method is more effective in directing people to the exit.
|
|
13:45-14:00, Paper TuBT3.4 | |
Surprise! Predicting Infant Visual Attention in a Socially Assistive Robot Contingent Learning Paradigm |
Klein, Lauren (University of Southern California), Itti, Laurent (University of Southern California), Smith, Beth (University of Southern California), Rosales, Marcelo R. (University of Southern California), Nikolaidis, Stefanos (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Cognitive Skills and Mental Models, Cognitive and Sensorimotor Development, Detecting and Understanding Human Activity
Abstract: Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with and enhance human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents the analysis of eye gaze tracking data from five 6-8 month old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants’ head-mounted camera and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated to be more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
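For readers unfamiliar with Bayesian surprise, the following toy sketch illustrates the general idea of measuring surprise as the KL divergence between posterior and prior beliefs, here with a simple Gaussian belief over a single scalar feature. The actual model used in the paper operates over video features and robot-behavior timing and is not reproduced here; the numbers below are illustrative.
```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) )."""
    return 0.5 * (var_p / var_q + (mu_p - mu_q) ** 2 / var_q - 1.0 + np.log(var_q / var_p))

def surprise_update(mu0, var0, x, obs_var):
    """Bayesian update of a Gaussian belief over a feature value and the
    resulting surprise, measured as KL(posterior || prior)."""
    post_prec = 1.0 / var0 + 1.0 / obs_var
    var1 = 1.0 / post_prec
    mu1 = var1 * (mu0 / var0 + x / obs_var)
    surprise = gaussian_kl(mu1, var1, mu0, var0) / np.log(2.0)  # in bits
    return mu1, var1, surprise

# A steady feature yields low surprise; a sudden change (e.g., the robot
# starting to kick) yields a spike.
mu, var = 0.0, 1.0
for x in [0.1, 0.0, -0.1, 3.0]:
    mu, var, s = surprise_update(mu, var, x, obs_var=0.25)
    print(f"obs={x:+.1f}  surprise={s:.3f} bits")
```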
|
|
14:00-14:15, Paper TuBT3.5 | |
Learning Socially Appropriate Robot Approaching Behavior Toward Groups Using Deep Reinforcement Learning |
Gao, Yuan (Uppsala University), Yang, Fangkai (KTH Royal Institute of Technology), Frisk, Martin (Uppsala University), Hernandez, Daniel (University of York), Peters, Christopher (Royal Institute of Technology), Castellano, Ginevra (Uppsala University) |
Keywords: Social Intelligence for Robots, Machine Learning and Adaptation, Robot Companions and Social Robots
Abstract: Deep reinforcement learning has recently been widely applied in robotics to study tasks such as locomotion and grasping, but its application to social human-robot interaction (HRI) remains a challenge. In this paper, we present a deep learning scheme that acquires a prior model of robot approaching behavior in simulation and applies it to real-world interaction with a physical robot approaching groups of humans. The scheme, which we refer to as Staged Social Behavior Learning (SSBL), considers different stages of learning in social scenarios. We learn robot approaching behaviors towards small groups in simulation and evaluate the performance of the model using objective and subjective measures in a perceptual study and an HRI user study with human participants. Results show that our model generates more socially appropriate behavior compared to a state-of-the-art model.
|
|
14:15-14:30, Paper TuBT3.6 | |
What Do Children Want from a Social Robot? Toward Gratifications Measures for Child-Robot Interaction |
De Jong, Chiara (University of Amsterdam), Kühne, Rinaldo (University of Amsterdam), Peter, Jochen (University of Amsterdam), van Straten, Caroline Lianne (University of Amsterdam), Barco, Alex (ASCoR, University of Amsterdam) |
Keywords: Robot Companions and Social Robots, Evaluation Methods and New Methodologies, Robots in Education, Therapy and Rehabilitation
Abstract: Social robots have, in the case of children, rarely been studied from a uses-and-gratifications perspective. As social robots differ from more traditional media, the first aim of this study was to explore the gratifications that children seek and obtain from social robots. This was investigated in a study among 87 children. The second aim was to develop and initially validate measures for those gratifications. We studied this among a sample of 24 children. The measures for hedonic and social gratifications-obtained worked reasonably well. The measures for hedonic and informative gratifications-sought seemed problematic, whereas the others were acceptable. Our measures present a first step toward enabling future research on children’s gratifications of social robots.
|
|
TuBT4 |
Room T4 |
Situation Awareness and Spatial Cognition |
Regular Session |
Chair: Pandey, Amit Kumar | Hanson Robotics |
Co-Chair: Louie, Wing-Yue Geoffrey | Oakland University |
|
13:00-13:15, Paper TuBT4.1 | |
Desk Organization: Effect of Multimodal Inputs on Spatial Relational Learning |
Rowe, Ryan (University of Washington), Singhal, Shivam (University of Washington), Yi, Daqing (University of Washington), Bhattacharjee, Tapomayukh (University of Washington), Srinivasa, Siddhartha (University of Washington) |
Keywords: Multi-modal Situation Awareness and Spatial Cognition
Abstract: For robots to operate in a three-dimensional world and interact with humans, learning the spatial relationships among objects in the surroundings is necessary. Reasoning about the state of the world requires inputs from many different sensory modalities, including vision and haptics. We examine the problem of desk organization: learning how humans spatially position different objects on a planar surface according to organizational "preference". We model this problem by examining how humans position objects given multiple features received from the vision and haptic modalities. However, organizational habits vary greatly between people both in structure and adherence. To deal with user organizational preferences, we add an additional modality, "utility", which captures a particular human's perceived usefulness of a given object. Models were trained as generalized (over many different people) or tailored (per person). We use two types of models: random forests, which focus on precise multi-task classification, and Markov logic networks, which provide easily interpretable insight into organizational habits. The models were applied to both synthetic data, which proved to be learnable when using fixed organizational constraints, and human-study data, on which the random forest achieved over 90% accuracy. Over all combinations of modalities, utility + vision and all modalities combined were the most informative for organization. In a follow-up study, we gauged participants' preference for desk organizations produced by a generalized random forest versus a random model. On average, participants rated the random forest organizations 4.15 on a 5-point Likert scale compared to 1.84 for the random model.
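A minimal sketch of a generalized random-forest organization model using scikit-learn, assuming per-object feature vectors that concatenate vision, haptic, and utility features; the feature names, labels, and data below are illustrative, not the study's.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row holds multimodal features for one object: vision (size, colour),
# haptics (weight, rigidity) and the per-user "utility" rating; the label is
# the desk region the user placed the object in. All values are synthetic.
rng = np.random.default_rng(0)
X = rng.random((300, 6))  # [size, hue, sat, weight, rigidity, utility]
y = (X[:, 5] > 0.5).astype(int) + (X[:, 0] > 0.7)  # synthetic "preferred region"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())      # generalized-model accuracy
clf.fit(X, y)
print(dict(zip(["size", "hue", "sat", "weight", "rigidity", "utility"],
               clf.feature_importances_.round(3))))  # which modalities matter
```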
|
|
13:15-13:30, Paper TuBT4.2 | |
Audio-Visual SLAM towards Human Tracking and Human-Robot Interaction in Indoor Environments |
Chau, Aaron (University of Calgary), Sekiguchi, Kouhei (Kyoto University), Nugraha, Aditya Arie (RIKEN AIP), Yoshii, Kazuyoshi (Kyoto University), Funakoshi, Kotaro (Honda Research Inst. Japan Co., Ltd) |
Keywords: Computational Architectures, Multi-modal Situation Awareness and Spatial Cognition, Novel Interfaces and Interaction Modalities
Abstract: We propose a novel audio-visual simultaneous localization and mapping (SLAM) framework that exploits human pose and acoustic speech of human sound sources to allow a robot equipped with a microphone array and a monocular camera to track, map, and interact with human partners in an indoor environment. Since human interaction is characterized by features perceived in not only the visual modality but the acoustic modality as well, SLAM systems must utilize information from both modalities. Using a state-of-the-art beamforming technique, we obtain sound components corresponding to speech and noise, and compute Direction-of-Arrival (DoA) estimates of active sound sources as useful representations of observed features in the acoustic modality. Through human pose estimated by a monocular camera, we obtain the relative positions of humans as representations of observed features in the visual modality. Using these techniques, we attempt to eliminate restrictions imposed by intermittent speech, noisy periods, reverberant periods, triangulation of sound-source range, and limited visual fields of view, and subsequently perform early fusion on these representations. We develop a system that allows for complementary action between the audio and visual sensor modalities in the simultaneous mapping of multiple human sound sources and the localization of the observer position.
|
|
13:30-13:45, Paper TuBT4.3 | |
Teaching a Robot How to Spatially Arrange Objects: Representation and Recognition Issues |
Buoncompagni, Luca (University of Genoa), Mastrogiovanni, Fulvio (University of Genoa) |
Keywords: Cognitive Skills and Mental Models, Programming by Demonstration, Multi-modal Situation Awareness and Spatial Cognition
Abstract: This paper introduces a technique to teach robots how to represent and qualitatively interpret perceived scenes in tabletop scenarios. To this aim, we envisage a 3-step human-robot interaction process, in which (i) a human shows a scene to a robot, (ii) the robot memorises a symbolic scene representation (in terms of objects and their spatial arrangement), and (iii) the human can revise such a representation, if necessary, by further interacting with the robot; here, we focus on steps i and ii. Scene classification occurs at a symbolic level, using ontology-based instance checking and subsumption algorithms. Experiments showcase the main properties of the approach, i.e., detecting whether a new scene belongs to a scene class already represented by the robot, or otherwise creating a new representation with a one-shot learning approach, and correlating scenes from a qualitative standpoint to detect similarities and differences in order to build a scene hierarchy.
|
|
13:45-14:00, Paper TuBT4.4 | |
Simple, Inexpensive, Accurate Calibration of 9 Axis Inertial Motion Unit |
Das, Shome S (Indian Institute of Science, Bangalore) |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Degrees of Autonomy and Teleoperation, Virtual and Augmented Tele-presence Environments
Abstract: Absolute orientation estimation is crucial for the navigation of robots, drones, and unmanned vehicles. It is also needed in biometrics, virtual reality, human-robot interaction systems, and devices like cell phones and smart watches. A nine-axis MEMS inertial motion unit (hereafter called IMU), consisting of an accelerometer, gyroscope, and magnetometer, is widely used to obtain absolute orientation. Sensor fusion combines the readings of the individual sensors to obtain a correct estimate of the absolute orientation. However, MEMS devices are noisy, and correct calibration is needed for the sensor fusion to work. Existing nine-axis sensor fusion libraries fail to give an accurate, drift-free orientation estimate because they either do not use accurate calibration algorithms or do not prescribe a way to collect good calibration data. It is also difficult and time-consuming for a hobbyist or a researcher working on biometrics or SLAM to implement complex calibration algorithms or to design good calibration rigs. We propose a new calibration setup consisting of easy-to-implement calibration algorithms along with a new calibration rig. Our calibration framework attains better accuracy and a drift-free estimate of absolute orientation compared to the current state-of-the-art libraries.
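As a flavor of what an easy-to-implement accelerometer calibration can look like, here is a standard six-position bias-and-scale estimate. This is a generic textbook procedure offered for illustration, not the calibration algorithms or rig proposed in the paper; the readings below are made up.
```python
import numpy as np

G = 9.80665  # m/s^2

def six_position_calibration(readings):
    """Per-axis bias and scale from a six-position static test: the IMU is held
    with each axis pointing up and then down, and raw accelerometer readings
    are averaged in each pose. readings[axis] = (reading_up, reading_down)."""
    bias, scale = np.zeros(3), np.zeros(3)
    for axis, (up, down) in enumerate(readings):
        bias[axis] = (up + down) / 2.0
        scale[axis] = (up - down) / (2.0 * G)
    return bias, scale

def correct(raw, bias, scale):
    """Apply the estimated bias and scale to a raw reading."""
    return (np.asarray(raw) - bias) / scale

# Example: a sensor with a small bias and scale error on each axis.
raw = [(9.95, -9.55), (10.02, -9.60), (9.90, -9.68)]
bias, scale = six_position_calibration(raw)
print(bias, scale, correct([9.95, 0.21, -0.11], bias, scale))
```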
|
|
14:00-14:15, Paper TuBT4.5 | |
Towards a Driver Monitoring System for Estimating Driver Situational Awareness |
Hijaz, Alaaldin (Oakland University), Louie, Wing-Yue Geoffrey (Oakland University), Mansour, Iyad (Dura Automotive) |
Keywords: Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans, Machine Learning and Adaptation
Abstract: Autonomous vehicle technology is rapidly developing, but the current state-of-the-art still has limitations and requires frequent human intervention. However, handovers from an autonomous vehicle to a human driver are challenging because a human operator may be unaware of the vehicle surroundings during a handover, which can lead to dangerous driving outcomes. There is presently an urgent need to develop advanced driver-assistance systems capable of monitoring driver situational awareness within an autonomous vehicle and intelligently handing over control to a human driver in emergency situations. Towards this goal, in this paper we present the development and evaluation of a vision-based system that identifies visual cues of a driver’s situational awareness, including their head pose, eye pupil position, average head movement rate, and visual focus of attention.
|
|
14:15-14:30, Paper TuBT4.6 | |
Automatic Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction |
Ghosh, Bishal (Indian Institute of Technology Ropar), Dhall, Abhinav (Indian Institute of Technology Ropar), Singla, Ekta (Indian Institute of Technology Ropar) |
Keywords: Non-verbal Cues and Expressiveness, Narrative and Story-telling in Interaction, Robots in Education, Therapy and Rehabilitation
Abstract: In this paper, we present an end-to-end system for enhancing the effectiveness of non-verbal gestures in human-robot interaction. We identify prominently used gestures in performances by TED talk speakers and map them to their corresponding speech context and modulated speech based on the attention of the listener. Gestures are localised with a convolutional neural network-based approach. Dominant gestures of TED speakers are used for learning the gesture-to-speech mapping. We evaluated the engagement of the robot with people by conducting a social survey. The effectiveness of the performance was monitored by the robot, and it self-improvised its speech pattern on the basis of the attention level of the audience, which was calculated using visual feedback from the camera. The effectiveness of the interaction as well as the decisions made during improvisation were further evaluated based on head-pose detection and an interaction survey.
|
|
TuBS1 |
Room T5 |
Social and Affective Robots |
Special Session |
Chair: Sgorbissa, Antonio | University of Genova |
Co-Chair: Cominelli, Lorenzo | E. Piaggio Research Center |
|
13:00-13:15, Paper TuBS1.1 | |
Designing an Experimental and a Reference Robot to Test and Evaluate the Impact of Cultural Competence in Socially Assistive Robotics (I) |
Recchiuto, Carmine Tommaso (University of Genova), Papadopoulos, Chris (University of Bedfordshire), Hill, Tetiana (University of Bedfordshire, Vicarage St, Luton LU13JU, UK), Castro, Nina (Advinia Healthcare, 314 Regents Park Rd, London N32JX, UK), Bruno, Barbara (University of Genova), Papadopoulos, Irena (Middlesex University Higher Education Corporation), Sgorbissa, Antonio (University of Genova) |
Keywords: Assistive Robotics, Robot Companions and Social Robots, Evaluation Methods and New Methodologies
Abstract: The article focusses on the work performed in preparation for an experimental trial aimed at evaluating the impact of a culturally competent robot for care home assistance. Indeed, it has been established that the user's cultural identity plays an important role during the interaction with a robotic system, and cultural competence may be one of the key elements for increasing the capabilities of socially assistive robots. Specifically, the paper describes part of the work carried out for the definition and implementation of two different robotic systems for the care of older adults: a culturally competent robot, which shows its awareness of the user's cultural identity, and a reference robot, not culturally competent but with the same functionalities as the former. The design of both robots is described in detail, together with the key elements that make a socially assistive robot culturally competent, which should be absent in the non-culturally competent counterpart. Examples from the experimental phase of the CARESSES project with a fictional user are reported, giving an indication of the validity of the proposed approach.
|
|
13:15-13:30, Paper TuBS1.2 | |
Using Socially Expressive Mixed Reality Arms for Enhancing Low-Expressivity Robots (I) |
Groechel, Thomas (University of Southern California), Shi, Zhonghao (University of Southern California), Pakkar, Roxanna (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Social Presence for Robots and Virtual Humans, Non-verbal Cues and Expressiveness, Innovative Robot Designs
Abstract: Expressivity, the use of multiple modalities to convey the internal state and intent of a robot, is critical for interaction. Yet, due to cost, safety, and other constraints, many robots lack high degrees of physical expressivity. This paper explores using mixed reality to enhance a robot with limited expressivity by adding virtual arms that extend the robot's expressiveness. The arms, capable of a range of non-physically-constrained gestures, were evaluated in a between-subjects study (n=34) where participants engaged in a mixed reality mathematics task with a socially assistive robot. The study results indicate that the virtual arms added a higher degree of perceived emotion, helpfulness, and physical presence to the robot. Users who reported a higher perceived physical presence also found the robot to have a higher degree of social presence, ease of use, and usefulness, and had a positive attitude toward using the robot with mixed reality. The results also demonstrate the users' ability to distinguish the virtual gestures' valence and intent.
|
|
13:30-13:45, Paper TuBS1.3 | |
Wearable Affective Robot That Detects Human Emotions from Brain Signals by Using Deep Multi-Spectrogram Convolutional Neural Networks (Deep MS-CNN) (I) |
Wang, Ker-Jiun (University of Pittsburgh), ZHENG, Caroline Yan (Royal College of Art) |
Keywords: Motivations and Emotions in Robotics, Social Intelligence for Robots, Cognitive Skills and Mental Models
Abstract: A wearable robot that constantly monitors, adapts, and reacts to human needs is a promising way for technology to facilitate stress alleviation and contribute to mental health. Current means to help with mental health include counseling, drug medications, and relaxation techniques such as meditation or breathing exercises to improve mental status. The theory that human touch causes the body to release the hormone oxytocin, effectively alleviating anxiety, sheds light on a potential alternative to assist existing methods. Wearable robots that generate affective touch have the potential to improve social bonds and regulate emotional and cognitive functions. In this study, we used a wearable robotic tactile stimulation device, AffectNodes2, to mimic human affective touch. The touch-stimulated brain waves were captured from 4 EEG electrodes placed on the parietal, prefrontal, and left and right temporal lobe regions of the brain. The novel Deep MS-CNN with an emotion polling structure was developed to distinguish affective touch, non-affective touch, and relaxation stimuli with over 95% accuracy, which allows the robot to grasp the current human affective status. This sensing and decoding structure is our first step towards developing a self-adaptive robot that adjusts its touch stimulation patterns to help regulate affective status.
|
|
13:45-14:00, Paper TuBS1.4 | |
Real-Time Gazed Object Identification with a Variable Point of View Using a Mobile Service Robot (I) |
Yuguchi, Akishige (Nara Institute of Science and Technology), Inoue, Tomoaki (Nara Institute of Science and Technology), Garcia Ricardez, Gustavo Alfonso (Nara Institute of Science and Technology (NAIST)), Ding, Ming (Nara Institute of Science and Technology), Takamatsu, Jun (Nara Institute of Science and Technology), Ogasawara, Tsukasa (Nara Institute of Science and Technology) |
Keywords: Curiosity, Intentionality and Initiative in Interaction, Detecting and Understanding Human Activity, Applications of Social Robots
Abstract: As sensing and image recognition technologies advance, the environments where service robots operate expand into human-centered environments. Since the roles of service robots depend on the user situations, it is important for the robots to understand human intentions. Gaze information, such as gazed objects (i.e., the objects humans are looking at) can help to understand the users’ intentions. In this paper, we propose a real-time gazed object identification method from RGB-D images captured by a camera mounted on a mobile service robot. First, we search for the candidate gazed objects using state-of-the-art, real-time object detection. Second, we estimate the human face direction using facial landmarks extracted by a real-time face detection tool. Then, by searching for an object along the estimated face direction, we identify the gazed object. If the gazed object identification fails even though a user is looking at an object, i.e., has a fixed gaze direction, the robot can determine whether the object is inside or outside the robot’s view based on the face direction, and, then, change its point of view to improve the identification. Finally, through multiple evaluation experiments with the mobile service robot Pepper, we verified the effectiveness of the proposed identification and the improvement of the identification accuracy by changing the robot’s point of view.
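A rough sketch of the core identification step, assuming object detections with 3D centroids and an estimated face-direction ray expressed in a common frame: the gazed object is taken to be the detection closest in angle to the ray. The names, threshold, and frame convention are assumptions for illustration, not the authors' implementation.
```python
import numpy as np

def identify_gazed_object(face_origin, face_dir, detections, max_angle_deg=10.0):
    """Return the label of the detected object whose 3D centroid lies closest
    (in angle) to the ray cast along the estimated face direction, or None if
    nothing falls within the angular tolerance."""
    face_dir = np.asarray(face_dir, dtype=float)
    face_dir /= np.linalg.norm(face_dir)
    best, best_angle = None, np.radians(max_angle_deg)
    for label, centroid in detections:
        v = np.asarray(centroid, dtype=float) - face_origin
        angle = np.arccos(np.clip(np.dot(v / np.linalg.norm(v), face_dir), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = label, angle
    return best

# Face at the origin looking roughly toward a cup on the table.
detections = [("cup", (0.4, 0.05, 1.0)), ("book", (-0.5, 0.0, 1.2))]
print(identify_gazed_object(np.zeros(3), (0.38, 0.04, 1.0), detections))  # -> "cup"
```
If no detection lies near the ray even though the face direction is fixed, the robot can use that same direction to decide whether the object is likely outside its field of view and change its viewpoint, as described above.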
|
|
14:00-14:15, Paper TuBS1.5 | |
A Reinforcement-Learning Approach for Adaptive and Comfortable Assistive Robot Monitoring Behaviors (I) |
Raggioli, Luca (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Machine Learning and Adaptation, Assistive Robotics, Detecting and Understanding Human Activity
Abstract: Companion robots used in elderly assistive care can be of great value in monitoring everyday activities and well-being. However, in order to be accepted by the user, their behavior while monitoring should not cause discomfort: robots must take into account the activity the user is performing and not be a distraction for them. In this paper, we propose a Reinforcement Learning approach to adaptively decide a monitoring distance and an approach direction starting from an estimate of the current activity obtained with a wearable device. Our goal is to improve user activity recognition performance without making the robot’s presence uncomfortable for the monitored person. Results show that the proposed approach is promising for real-scenario deployment, accomplishing the task in more than 80% of the episodes run.
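A toy, contextual-bandit-style sketch of learning a comfortable monitoring distance and approach direction per estimated activity; the states, actions, and reward shaping below are invented for illustration and are much simpler than the approach evaluated in the paper.
```python
import numpy as np

rng = np.random.default_rng(0)

activities = ["resting", "reading", "walking"]             # estimated from the wearable
actions = [(d, side) for d in (1.0, 2.0, 3.0)              # monitoring distance (m)
                      for side in ("front", "side", "behind")]

Q = np.zeros((len(activities), len(actions)))
alpha, eps = 0.1, 0.2

def reward(activity, action):
    """Invented reward: closer viewpoints help activity recognition, but being
    too close while the user reads, or approaching from behind, is penalized."""
    dist, side = action
    r = 1.0 / dist
    if activity == "reading" and dist < 2.0:
        r -= 1.0
    if side == "behind":
        r -= 0.5
    return r

for _ in range(5000):
    s = rng.integers(len(activities))                        # current estimated activity
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    Q[s, a] += alpha * (reward(activities[s], actions[a]) - Q[s, a])  # one-step update

for s, activity in enumerate(activities):
    print(activity, "->", actions[int(Q[s].argmax())])
```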
|
|
14:15-14:30, Paper TuBS1.6 | |
Proposing Human-Robot Trust Assessment through Tracking Physical Apprehension Signals in Close-Proximity Human-Robot Collaboration (I) |
Hald, Kasper (Aalborg University), Rehm, Matthias (Aalborg University), Moeslund, Thomas B. (Aalborg University) |
Keywords: Creating Human-Robot Relationships, Evaluation Methods and New Methodologies, Detecting and Understanding Human Activity
Abstract: We propose a method of human-robot trust assessment in close-proximity human-robot collaboration that involves body tracking for recognition of physical signs of apprehension. We tested this by performing skeleton tracking on 30 participants as they repeated a shared task with a Sawyer robot while reporting trust between tasks. We tested different robot velocity and environment conditions, with an unannounced increase in velocity midway through to provoke a dip in trust. Initial analyses show significant effects of the test conditions on participant movements and reported trust, as well as linear correlations between tracked signs of apprehension and reported trust.
|
|
TuCT1 |
Room T8 |
Cognitive Skills and Mental Models |
Regular Session |
Chair: Lewis, Michael | University of Pittsburgh |
Co-Chair: Schulz, Trenton | University of Oslo |
|
15:00-15:15, Paper TuCT1.1 | |
Ontologenius: A Long-Term Semantic Memory for Robotic Agents |
sarthou, guillaume (LAAS-CNRS), Clodic, Aurélie (Laas - Cnrs), Alami, Rachid (CNRS) |
Keywords: Cognitive Skills and Mental Models
Abstract: In this paper we present Ontologenius, a semantic knowledge storage and reasoning framework for autonomous robots. More than classic ontology software for querying a knowledge base with first-order internal logic, as is done for the semantic web, Ontologenius offers features adapted to robotic use, including human-robot interaction settings. Designed to be integrated into a complete robotic architecture, it provides the ability to modify the knowledge base during execution, whether through dialogue or geometric reasoning, and to keep these changes even after the robot is powered off. Because Ontologenius was inspired by human behaviors and developed to be used in applications interacting with humans, it also makes it possible to estimate the semantic memory of a partner, thus allowing the use of theory-of-mind principles in addition to the ability to infer new knowledge through a generalization process. This article presents the architecture of this software and its features, as well as examples of use in robotics applications.
|
|
15:15-15:30, Paper TuCT1.2 | |
Mind Perception and Causal Attribution for Failure in a Game with a Robot |
Miyake, Tomohito (Osaka University), Kawai, Yuji (Osaka University), Park, Jihoon (Osaka University), Shimaya, Jiro (Osaka University), Takahashi, Hideyuki (Osaka University), Asada, Minoru (Osaka University) |
Keywords: Anthropomorphic Robots and Virtual Humans, Cooperation and Collaboration in Human-Robot Teams, Ethical Issues in Human-robot Interaction Research
Abstract: It is unclear how a human attributes the cause of failure to a robot in a human-robot interaction. We aim to identify the relationship between causal attribution and mind perception in a repeated game with an agent. We investigated the participant's causal attribution to the agent: whether the participant's decision or the partner agent's decision was seen as causing the unexpectedly small reward. We conducted experiments with three agent conditions: a human, a robot, and a computer. The results showed that the agency score was negatively correlated with the degree of causal attribution to the partner agent. In particular, the correlations for the "thought," "memory," "planning," and "self-control" sub-items of agency were significant. This implies that the impression that "the agent acted to succeed" might reduce causal attribution. In addition, we found that a decrease in mind perception scores correlated with the degree of causal attribution to the partner agent. This suggests that a sense of betrayal of prior expectations by the partner agent during the game might lead to causal attribution to the partner agent.
|
|
15:30-15:45, Paper TuCT1.3 | |
Designing Child-Robot Interaction with Robotito |
Ewelina, Bakała (Facultad De Ingeniería, Universidad De La República, Montevideo), Visca, Jorge (Facultad De Ingeniería, Universidad De La República, Montevideo), Tejera López, Gonzalo Daniel (Universidad De La Republica, Facultad De Ingeniería, Instituto D), Seré, Andrés (Facultad De Ingeniería, Universidad De La República, Montevideo), Amorin, Guillermo (Facultad De Ingeniería, Universidad De La República, Montevideo), Gómez-Sena, Leonel (Laboratorio De Neurociencias, Facultad De Ciencias, Universidad) |
Keywords: Robots in Education, Therapy and Rehabilitation, Novel Interfaces and Interaction Modalities, User-centered Design of Robots
Abstract: Computational thinking is a skill considered essential for future generations. Because of this, it should be incorporated into curricula as soon as possible. An interesting option for working on computational thinking with children is by means of robots. Here, we present Robotito, a robot that can be programmed by arranging its environment, intended to help the development of computational thinking in preschool children. We describe its hardware and software environment, and the hierarchical state machines used to implement two modes of interaction with the environment: the first based on color detection and the second sensitive to surrounding objects. We also present activities that we developed to work on abstraction, generalization, decomposition, algorithmic thinking, and debugging, skills related to computational thinking.
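A toy sketch of a two-mode controller in the spirit of the hierarchical state machines mentioned above, with one mode driven by color detection and the other by proximity readings; the mode names, commands, and sensor interfaces are invented for illustration and are not Robotito's actual software.
```python
from enum import Enum, auto

class Mode(Enum):
    COLOR = auto()       # drive according to the colour detected under the robot
    PROXIMITY = auto()   # react to objects sensed around the robot

class TwoModeController:
    """Toy two-level state machine: the top level selects the interaction mode,
    each mode maps its own sensor reading to a motion command."""
    def __init__(self):
        self.mode = Mode.COLOR

    def on_button(self):
        """Switch interaction mode, e.g., on a button press."""
        self.mode = Mode.PROXIMITY if self.mode is Mode.COLOR else Mode.COLOR

    def step(self, color=None, distances=None):
        if self.mode is Mode.COLOR:
            return {"red": "forward", "green": "left", "blue": "right"}.get(color, "stop")
        nearest = min(range(len(distances)), key=lambda i: distances[i])
        return f"turn_away_from_sector_{nearest}"

ctrl = TwoModeController()
print(ctrl.step(color="red"))                 # -> forward
ctrl.on_button()
print(ctrl.step(distances=[0.8, 0.2, 1.0]))   # -> turn_away_from_sector_1
```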
|
|
15:45-16:00, Paper TuCT1.4 | |
Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing |
Haring, Kerstin Sophie (University of Denver), Tobias, Jessica (United States Air Force Academy), Waligora, Justin (United States Air Force Academy), Phillips, Elizabeth (Brown University), Tenhundfeld, Nathan (University of Alabama in Huntsville), Gale, Lucas (University of Southern California), De Visser, Ewart (George Mason University), Jonathan, Gratch (University of Southern California), Tossell, Chad (USAF Academy) |
Keywords: Anthropomorphic Robots and Virtual Humans, Interaction with Believable Characters, Human Factors and Ergonomics
Abstract: Socially intelligent artificial agents and robots are anticipated to become ubiquitous in home, work, and military environments. With the addition of such agents to human teams it is crucial to evaluate their role in the planning, decision making, and conflict mediation processes. We conducted a study to evaluate the utility of a virtual agent that provided mission planning support in a three-person human team during a military strategic mission planning scenario. The team consisted of a human team lead who made the final decisions and three supporting roles, two humans and the artificial agent. The mission outcome was experimentally designed to fail and introduced a conflict between the human team members and the leader. This conflict was mediated by the artificial agent during the debriefing process through discuss or debate and open communication strategies of conflict resolution [1]. Our results showed that our teams experienced conflict. The teams also responded socially to the virtual agent, although they did not find the agent beneficial to the mediation process. Finally, teams collaborated well together and perceived task proficiency increased for team leaders. Socially intelligent agents show potential for conflict mediation, but need careful design and implementation to improve team processes and collaboration.
|
|
16:00-16:15, Paper TuCT1.5 | |
Towards Automatic Visual Fault Detection in Highly Expressive Human-Like Animatronic Faces with Soft Skin |
Mayet, Ralf (Hanson Robotics), Diprose, James (Hanson Robotics), Pandey, Amit Kumar (Hanson Robotics) |
Keywords: Androids, Anthropomorphic Robots and Virtual Humans
Abstract: Designing reliable, humanoid social robots with highly expressive human-like faces is a challenging problem. Their construction requires complex mechanical assemblies, large numbers of actuators, and soft materials. When deployed in the field, these robots face problems of wear and tear and mechanical abuse. Mechanical defects of such faces can be hard to analyze automatically or by manual visual inspection. We propose a method of automatic visual calibration and actuator fault detection for complex animatronic faces. We use our approach to scan three expressive animatronic faces and analyze the data. Our findings indicate that our approach is able to detect faulty actuators even when they contribute only marginally to the overall expression of the face and are hard to spot visually.
|
|
16:15-16:30, Paper TuCT1.6 | |
Differences of Human Perceptions of a Robot Moving Using Linear or Slow In, Slow Out Velocity Profiles When Performing a Cleaning Task |
Schulz, Trenton (University of Oslo), Holthaus, Patrick (University of Hertfordshire), Amirabdollahian, Farshid (The University of Hertfordshire), Koay, Kheng Lee (University of Hertfordshire), Torresen, Jim (University of Oslo), Herstad, Jo (University of Oslo) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Non-verbal Cues and Expressiveness, Embodiment, Empathy and Intersubjectivity
Abstract: We investigated how a robot moving with different velocity profiles affects a person's perception of it when working together on a task. The two profiles are the standard linear profile and a profile based on the animation principles of slow in, slow out. The investigation was accomplished by running an experiment in a home context where people and the robot cooperated on a clean-up task. We used the Godspeed series of questionnaires to gather people's perception of the robot. Average scores for each series appear not to be different enough to reject the null hypotheses, but looking at the component items provides paths to future areas of research. We also discuss the scenario for the experiment and how it may be used for future research into using animation techniques for moving robots and improving the legibility of a robot's locomotion.
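For concreteness, the sketch below generates a constant-velocity profile and one common realization of "slow in, slow out" based on the smoothstep easing curve; the paper does not specify this exact parameterization, so treat it as one plausible instance rather than the profiles used in the experiment.
```python
import numpy as np

def linear_profile(distance, duration, n=101):
    """Constant velocity: position grows linearly with time."""
    t = np.linspace(0.0, duration, n)
    return t, np.full(n, distance / duration)

def slow_in_slow_out_profile(distance, duration, n=101):
    """'Slow in, slow out' using the smoothstep easing curve s(u) = 3u^2 - 2u^3,
    so velocity ramps up and down smoothly."""
    t = np.linspace(0.0, duration, n)
    u = t / duration
    velocity = distance * (6.0 * u - 6.0 * u ** 2) / duration  # d/dt of distance*s(u)
    return t, velocity

# Both profiles cover the same 2 m in 4 s; only the velocity shape differs.
for name, (t, v) in {"linear": linear_profile(2.0, 4.0),
                     "slow in, slow out": slow_in_slow_out_profile(2.0, 4.0)}.items():
    travelled = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))  # trapezoidal integral
    print(f"{name}: peak {v.max():.2f} m/s, travelled {travelled:.2f} m")
```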
|
|
TuCT2 |
Room T2 |
HRI and Collaboration in Manufacturing Environment |
Regular Session |
Chair: Penders, Jacques | Sheffield Hallam University |
Co-Chair: Beran, Vitezslav | Brno University of Technology |
|
15:00-15:15, Paper TuCT2.1 | |
Combining Interactive Spatial Augmented Reality with Head-Mounted Display for End-User Collaborative Robot Programming |
Bambusek, Daniel (Brno University of Technology, Faculty of Information Technology), Materna, Zdenek (Faculty of Information Technology, Brno University of Technology), Kapinus, Michal (Brno University of Technology, Faculty of Information Technology), Beran, Vitezslav (Brno University of Technology), Smrz, Pavel (Brno University of Technology) |
Keywords: HRI and Collaboration in Manufacturing Environments, Novel Interfaces and Interaction Modalities, Human Factors and Ergonomics
Abstract: This paper proposes an intuitive approach for collaborative robot end-user programming using a combination of interactive spatial augmented reality (ISAR) and a head-mounted display (HMD). It aims to reduce the user's workload and to let the user program the robot faster than with classical approaches (e.g., kinesthetic teaching). The proposed approach, in which the user employs a mixed-reality HMD (Microsoft HoloLens) and a touch-enabled table with a SAR-projected interface as input devices, is compared to a baseline approach, in which the robot's arms and a touch-enabled table are used as input devices. The main advantages of the proposed approach are the possibility to program the collaborative workspace without the presence of the robot, its speed compared to kinesthetic teaching, and the ability to quickly visualize learned program instructions in the form of virtual objects to enhance the users' orientation within those programs. The approach was evaluated with 20 users in a within-subject experiment design. The evaluation consisted of two pick-and-place tasks, where users had to start from scratch as well as update an existing program. Based on the experimental results, the proposed approach outperforms the baseline by 33.84% in qualitative measures and by 28.46% in quantitative measures across both tasks.
|
|
15:15-15:30, Paper TuCT2.2 | |
Modulating Human Input for Shared Autonomy in Dynamic Environments |
Mower, Christopher Edwin (University of Edinburgh), Moura, Joao (Heriot-Watt University), Davies, Aled (Costain Group PLC), Vijayakumar, Sethu (University of Edinburgh) |
Keywords: Degrees of Autonomy and Teleoperation, HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams
Abstract: Many robotic tasks require human interaction through teleoperation to achieve high performance. However, in industrial applications these methods often require high levels of concentration and manual dexterity, leading to high cognitive loads and dangerous working conditions. Shared autonomy attempts to address these issues by blending human and autonomous reasoning, relieving the burden of precise motor control, tracking, and localization. In this paper we propose an optimization-based representation for shared autonomy in dynamic environments. We ensure real-time tractability by modulating the human input with information about the changing environment in the same task space, instead of adding it to the optimization cost or constraints. We illustrate the method with two real-world applications: grasping objects in a cluttered environment, and a spraying task requiring sprayed linings with greater homogeneity. Finally, we use a 7-degree-of-freedom KUKA LWR arm to simulate the grasping and spraying experiments.
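A loose illustration of modulating the human's task-space input with environment information before it reaches the optimizer: the commanded velocity component toward a nearby obstacle is attenuated as the end-effector gets closer. The distances, threshold, and blending rule are assumptions for illustration, not the paper's formulation.
```python
import numpy as np

def modulate_command(human_vel, obstacle_pos, ee_pos, influence=0.3):
    """Attenuate the component of the human's task-space velocity command that
    points toward the nearest obstacle, scaled by proximity (illustrative only)."""
    to_obstacle = np.asarray(obstacle_pos, dtype=float) - np.asarray(ee_pos, dtype=float)
    d = np.linalg.norm(to_obstacle)
    v = np.asarray(human_vel, dtype=float)
    if d >= influence:
        return v                               # obstacle too far to matter
    n = to_obstacle / d
    approach = max(np.dot(v, n), 0.0)          # component moving toward the obstacle
    return v - (1.0 - d / influence) * approach * n

# Command of 0.1 m/s straight toward an obstacle 0.15 m away is halved.
print(modulate_command([0.1, 0.0, 0.0], obstacle_pos=[0.15, 0.0, 0.0], ee_pos=[0.0, 0.0, 0.0]))
```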
|
|
15:30-15:45, Paper TuCT2.3 | |
Seamless Manual-To-Autopilot Transition: An Intuitive Programming Approach to Robotic Welding |
Eto, Haruhiko (Massachusetts Institute of Technology), Asada, Harry (MIT) |
Keywords: Programming by Demonstration, Degrees of Autonomy and Teleoperation, HRI and Collaboration in Manufacturing Environments
Abstract: An intuitive on-site robot programming method for small-lot robotic welding is presented. In current robotic welding, a human operator has to input numerous parameters, including feedrate, swing width, and frequency, by using a teach pendant or a control panel before executing the task. This traditional approach is suitable for mass production, but requires tedious, time-consuming programming, which does not fit low-volume manufacturing, such as shipbuilding. In this paper, a method is developed for acquiring those parameters directly from an on-site human demonstration and seamlessly transitioning from manual operation to automatic control. With this method, a welding worker can directly execute a welding task, and the motion of a welding torch is observed, from which key parameters are identified and the machine performs the rest of the task autonomously. No tedious parameter input is required, but the worker can jump-start the task. The motion of a welding torch is represented as a combination of sinusoidal and linear functions. Discrete Fourier Transform (DFT) and Recursive Least Squares (RLS) estimates are used for identifying the parametric model in real time. Furthermore, an algorithm is developed for determining whether an appropriate estimation result has been obtained and when to switch from manual operation to autonomous control. The method is implemented on a virtual teleoperation system and seamless control transition is demonstrated.
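A minimal sketch of identifying weave parameters from a recorded torch trace modeled as linear travel plus a sinusoid, using a line fit and the DFT; the RLS estimator and the manual-to-autonomous switching test described in the paper are omitted, and the signal model and numbers are illustrative.
```python
import numpy as np

def identify_weave(t, y):
    """Estimate feedrate, weave frequency and weave amplitude from a torch
    position trace modelled as y(t) = v*t + c + A*sin(2*pi*f*t + phi).
    A line fit gives the feedrate v; the DFT of the detrended residual
    gives f and A."""
    v, c = np.polyfit(t, y, 1)                  # linear travel component
    residual = y - (v * t + c)                  # sinusoidal weave component
    dt = t[1] - t[0]
    spectrum = np.fft.rfft(residual)
    freqs = np.fft.rfftfreq(len(t), dt)
    k = np.argmax(np.abs(spectrum[1:])) + 1     # dominant non-DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(t)
    return v, freqs[k], amplitude

# Synthetic demonstration: 5 mm/s feedrate, 2 Hz weave, 3 mm amplitude.
t = np.arange(0.0, 10.0, 0.01)
y = 5.0 * t + 3.0 * np.sin(2 * np.pi * 2.0 * t + 0.4)
print(identify_weave(t, y))   # approx. (5.0, 2.0, 3.0)
```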
|
|
15:45-16:00, Paper TuCT2.4 | |
Teaching Method for Robot’s Gripper Posture with a Laser Sensor on a Pan-Tilt Actuator: A Method for Specifying Posture Feature Curves and Posture Feature Point |
Ishihata, kenji (Hiroshima City University), Sato, Kenjiro (Hiroshima City University), Fukui, Yuta (Hiroshima City University), Iwaki, Satoshi (Hiroshima City University), Ikeda, Tetsushi (Hiroshima City University) |
Keywords: Assistive Robotics, Novel Interfaces and Interaction Modalities, Cooperation and Collaboration in Human-Robot Teams
Abstract: Recently, many robots have been developed to support daily life and patient care. To instruct such a support robot working in cluttered environments, we previously developed an intuitive robot teaching interface using a TOF laser sensor on a pan-tilt actuator driven by the user. This interface enables us to control the direction of the laser spot to “click” a real object and instruct a robot to manipulate it by a drag-and-drop operation spanning the PC world and the real world. In our conventional system, however, the success rate of grasping objects was very low because only the position of the object could be taught, not its orientation. To cope with this problem, in this paper we propose a system to easily grasp an object of arbitrary posture by measuring the locus of the laser spot using our real-world click system. Grasping experiments on various daily objects showed the effectiveness of the proposed method.
|
|
16:00-16:15, Paper TuCT2.5 | |
Model Checking Human-Agent Collectives for Responsible AI |
Abeywickrama, Dhaminda (University of Southampton), Cirstea, Corina (Electronics and Computer Science, University of Southampton), Ramchurn, Sarvapali (University of Southampton) |
Keywords: Ethical Issues in Human-robot Interaction Research, Cooperation and Collaboration in Human-Robot Teams
Abstract: Humans and agents often need to work together and agree on collective decisions. Ensuring that autonomous systems work responsibly is complex, especially when encountering dilemmas. This paper proposes a novel, systematic model checking approach to responsible decision making by a human-agent collective to ensure it is safe, controllable, and ethical. Our approach, which is based on the MCMAS model checker, verifies the permissibility of an agent's actions by checking the decision-making behaviour against logical formulae specified for safety, controllability, and ethical behaviour. The verification results, through counterexamples and simulations, can provide a judgement and an explanation to the AI engineer of why actions are refused or allowed.
|
|
16:15-16:30, Paper TuCT2.6 | |
IoT Based Submersible ROV for Pisciculture |
Rohit, Mehboob Hasan (North South University), Barua, Sailanjan (North South University), Akter, Irin (North South University), Karim, S M Mujibul (North South University), Akter, Sharmin (North South University), Elahi, M. M. Lutfe (North South University) |
Keywords: Evaluation Methods and New Methodologies
Abstract: Pisciculture refers to the controlled commercial breeding and raising of fish in tanks or enclosures such as fish ponds. We have developed an automated IoT-based system for fish farming that reduces the human effort required and maximizes fish production. Our system can be placed at the center of a submersible ROV and explore underwater for real-time monitoring of fish and water quality parameters. Multiple sensors are integrated into the system to record essential data, which are sent to the user through the Message Queuing Telemetry Transport (MQTT) protocol. A single-board computer is used for image processing tasks such as fish counting and fish size measurement. The user can also control different aqua tools and calibrate the sensors via IoT. The system closely monitors any changes in the fish environment, notifies the user, and takes the necessary actions to re-establish a suitable environment.
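A minimal sketch of the telemetry path such a system might use, publishing sensor readings over MQTT. It assumes the paho-mqtt Python client (1.x API); the broker address, topic, and sensor values are placeholders, not the deployed system's configuration.
```python
import json
import time
import paho.mqtt.client as mqtt   # assumes the paho-mqtt package (1.x API)

BROKER, TOPIC = "broker.example.com", "pond/rov1/telemetry"  # illustrative names

def read_sensors():
    """Placeholder for the real sensor drivers (temperature, pH, dissolved O2, ...)."""
    return {"temperature_c": 27.4, "ph": 7.1, "dissolved_o2_mg_l": 6.8,
            "timestamp": time.time()}

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

try:
    while True:
        client.publish(TOPIC, json.dumps(read_sensors()), qos=1)
        time.sleep(10)            # publish a reading every 10 s
finally:
    client.loop_stop()
    client.disconnect()
```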
|
|
TuCT3 |
Room T3 |
Social Robots III |
Regular Session |
Chair: Suri, Venkata Ratnadeep | Indraprasta Institute of Information Technology, Delhi, (IIIT-Delhi) |
Co-Chair: Zibetti, Elisabetta | CHART-LUTIN |
|
15:00-15:15, Paper TuCT3.1 | |
Teaching Pepper Robot to Recognize Emotions of Traumatic Brain Injured Patients Using Deep Neural Networks |
Ilyas, Chaudhary Muhammad (Aalborg University), Schmuck, Viktor (Aalborg University, Denmark), Haque, Mohammad Ahsanul (Aalborg University), Nasrollahi, Kamal (Aalborg University), Rehm, Matthias (Aalborg University), Moeslund, Thomas B. (Aalborg University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Non-verbal Cues and Expressiveness, Applications of Social Robots
Abstract: Social signal extraction from facial analysis is a popular research area in human-robot interaction. However, recognition of emotional signals from Traumatic Brain Injured (TBI) patients with the help of robots and non-intrusive sensors is yet to be explored. Existing robots have limited abilities to automatically identify human emotions and respond accordingly. Their interaction with TBI patients could be even more challenging and complex due to the patients' unique, unusual, and diverse ways of expressing emotion. To tackle the disparity in TBI patients' Facial Expressions (FEs), a specialized deep-trained model for automatic detection of TBI patients' emotions and FEs (the TBI-FER model) is designed for robot-assisted rehabilitation activities. In addition, the Pepper robot's built-in FE model is investigated on TBI patients as well as on healthy people. Variance in their emotional expressions is determined through comparative studies. It is observed that the customized trained system is essential for the deployment of the Pepper robot as a Socially Assistive Robot (SAR).
|
|
15:15-15:30, Paper TuCT3.2 | |
Mood Estimation As a Social Profile Predictor in an Autonomous, Multi-Session, Emotional Support Robot for Children |
Gamborino, Edwinn (National Taiwan University), Yueh, Hsiu-Ping (National Taiwan University), Lin, Weijane (National Taiwan University), Yeh, Su-Ling (National Taiwan University), Fu, Li-Chen (National Taiwan University) |
Keywords: Applications of Social Robots, Assistive Robotics, Creating Human-Robot Relationships
Abstract: In this work, we created an end-to-end autonomous robotic platform to give emotional support to children in long-term, multi-session interactions. Using a mood estimation algorithm based on visual cues of the user’s behaviour, namely facial expressions and body posture, a multi-dimensional model predicts a qualitative measure of the subject’s affective state. Using a novel Interactive Reinforcement Learning algorithm, the robot learns the social profile of the user over several sessions, adjusting its behaviour to match their preferences. Although the robot is completely autonomous, a third party can optionally provide feedback to the robot through an additional UI to accelerate its learning of the user’s preferences. To validate the proposed methodology, we evaluated the impact of the robot on elementary-school-aged children in a long-term, multi-session interaction setting. Our findings show that, using this methodology, the robot is able to learn the social profile of the users over a number of sessions, with or without external feedback, and to keep the user in a positive mood, as shown by the consistently positive rewards received by the robot under our proposed learning algorithm.
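A highly simplified, tabular stand-in for the idea of interactive reinforcement learning with optional third-party feedback is sketched below; the state/action spaces, learning rates and the way the mood reward and human feedback are blended are all assumptions, not the paper's algorithm.

```python
# Tabular stand-in: the robot's mood-based reward can optionally be augmented
# by a caregiver's feedback provided through a separate UI.
import random
from collections import defaultdict

ACTIONS = ["tell_story", "play_game", "encourage", "rest"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state, action, mood_reward, next_state, human_feedback=None):
    # mood_reward: scalar from the vision-based mood estimator.
    # human_feedback: optional scalar from the UI; blended when present.
    reward = mood_reward if human_feedback is None else 0.5 * (mood_reward + human_feedback)
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
```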
|
|
15:30-15:45, Paper TuCT3.3 | |
Mapping Robotic Affordances with Pre-Requisite Learning Interventions for Children with Autism Spectrum Disorder |
Shukla, Jainendra (Indraprastha Institute of Information Technology, Delhi), Suri, Venkata Ratnadeep (Indraprasta Institute of Information Technology, Delhi (IIIT-Delhi)), Garg, Jatin (Indraprastha Institute of Information Technology Delhi), Verma, Krit (Indraprastha Institute of Information Technology Delhi), Kansal, Prarthana (IIIT Delhi) |
Keywords: Applications of Social Robots, Assistive Robotics, Robots in Education, Therapy and Rehabilitation
Abstract: For children with Autism Spectrum Disorder (ASD), pre-requisite learning (PRL) skills are particularly important because they form the basis for acquiring other advanced cognitive skills. Globally, researchers have shown that robot-assisted therapy (RAT) can have several positive effects on children with ASD. However, previous research has failed to clearly map PRL skill training tasks and strategies to robot affordances. In this research, we foster a better understanding of the objectives of the PRL skills required for children with ASD and provide a mapping to robot affordances for executing PRL training activities. In-depth interviews and focus group discussions (N=25) with paediatricians, ASD therapists, and educators from three nonprofit organisations were conducted to understand the clinical practices for teaching PRL skills to children with ASD. Naturalistic observations were used to understand the exercise and training protocols implemented for improving PRL skills in children with ASD. Finally, clinical literature on robotic therapy and technical documents provided by the manufacturers were analysed to identify commercially available robots and evaluate their features and affordances. Our analysis revealed that the affordances offered by several commercially available robots could be effectively leveraged to develop Robot-Assisted Therapies (RATs) to improve PRL skills in children with ASD. Strategies and implications for developing RATs to improve PRL skills among children with ASD are discussed.
|
|
15:45-16:00, Paper TuCT3.4 | |
Health Counseling by Robots: Modalities for Breastfeeding Promotion |
Murali, Prasanth (Khoury College of Computer Science), O'Leary, Teresa (Khoury College of Computer and Information Science), Shamekhi, Ameneh (Northeastern University), Bickmore, Timothy (Northeastern University) |
Keywords: Novel Interfaces and Interaction Modalities, Robots in Education, Therapy and Rehabilitation, Social Presence for Robots and Virtual Humans
Abstract: Conversational humanoid robots are being increasingly used for health education and counseling. Prior research provides mixed indications regarding the best modalities to use for these systems, including user inputs spanning completely constrained multiple choice options vs. unconstrained speech, and embodiments of humanoid robots vs. virtual agents, especially for potentially sensitive health topics such as breastfeeding. We report results from an experiment comparing five different interface modalities, finding that all result in significant increases in user knowledge and intent to adhere to recommendations, with few differences among them. Users are equally satisfied with constrained (multiple choice) touch screen input and unconstrained speech input, but are relatively unsatisfied with constrained speech input. Women find conversational robots are an effective, safe, and non-judgmental medium for obtaining information about breastfeeding.
|
|
16:00-16:15, Paper TuCT3.5 | |
Persuasive ChairBots: A (Mostly) Robot-Recruited Experiment |
Agnihotri, Abhijeet (Oregon State University), Knight, Heather (Oregon State University) |
Keywords: Curiosity, Intentionality and Initiative in Interaction, Non-verbal Cues and Expressiveness, Applications of Social Robots
Abstract: Robot furniture is a growing area of robotics research, as people readily anthropomorphize these simple robots and they fit easily into many human environments. Could they also be of service in recruiting people to play chess? Prior work has found motion gestures to aid in persuasion, but this work has mostly occurred in in-lab studies and has not yet been applied to robot furniture. This paper assessed the efficacy of four motion strategies in persuading passersby to participate in a ChairBot Chess Tournament, which consisted of a table with a chessboard and two ChairBots -- one for the white team, and another for the black team. The study occurred over a six-week period, seeking passersby to play chess in the atrium of our Computer Science building for an hour each Friday. Forward-Back motion was the most effective strategy in getting people to come to the table and play chess, while Spinning was the worst. Overall, people found the ChairBots to be friendly and somewhat dog-like. In-the-wild studies are challenging, but produce data that is highly likely to be replicable in future versions of the system. The results also support the potential of future robots to recruit participants to activities that they might already enjoy.
|
|
16:15-16:30, Paper TuCT3.6 | |
Robot-Assisted Therapy for Children with Delayed Speech Development: A Pilot Study |
Zhanatkyzy, Aida (Nazarbayev University), Turarova, Aizada (Nazarbayev University), Telisheva, Zhansaule (Nazarbayev University), Abylkasymova, Galiya (Republican Children's Rehabilitation Center), Sandygulova, Anara (Nazarbayev University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots, Applications of Social Robots
Abstract: This paper presents a study that investigates the effects of Robot-Assisted Therapy (RAT) on children with a form of delayed verbal and mental development. To this end, we developed a number of applications for the humanoid robot NAO with the aim of engaging children during RAT sessions. We evaluated these applications with children with Delayed Speech Development (DSD) who interacted with the robot on a few occasions. Our findings demonstrate the utility of such applications for the therapy of children with DSD, who found the sessions both engaging and entertaining. A similar approach could be utilized for the therapy of children with Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder.
|
|
TuCT4 |
Room T4 |
Visual Perception and Autonomous Robots |
Regular Session |
Chair: Hayashi, Kotaro | Toyohashi University of Technology |
Co-Chair: Chemori, Ahmed | Lirmm - Cnrs |
|
15:00-15:15, Paper TuCT4.1 | |
Grasping of Novel Objects for Robotic Pick and Place Applications |
Vohra, Mohit (Indian Institute of Technology, Kanpur), Prakash, Ravi (Indian Institute of Technology, Kanpur), Behera, Laxmidhar (IIT Kanpur) |
Keywords: Assistive Robotics, Degrees of Autonomy and Teleoperation, Motion Planning and Navigation in Human-Centered Environments
Abstract: Grasping of novel objects in pick-and-place applications is a fundamental and challenging problem in robotics, particularly for complex-shaped objects. It is observed that well-known strategies such as i) grasping at the centroid of the object and ii) grasping along the major axis of the object often fail for complex-shaped objects. In this paper, a real-time grasp pose estimation strategy for novel objects in robotic pick-and-place applications is proposed. The proposed technique estimates the object contour in the point cloud and predicts the grasp pose along with the object skeleton in the image plane. The technique is tested on objects such as a ball container, a hand weight, and a tennis ball, and even on complex-shaped objects such as a blower (non-convex shape). It is observed that the proposed strategy performs very well for complex-shaped objects and predicts valid grasp configurations where the above strategies fail. The proposed grasping technique is validated experimentally in two scenarios, with objects placed distinctly and with objects placed in dense clutter, where grasp accuracies of 88.16% and 77.03% respectively are reported. All experiments are performed with a real UR10 robot manipulator along with a WSG-50 two-finger gripper.
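For reference, the two baseline strategies the abstract compares against (grasping at the object centroid and along its major axis) can be approximated from a segmented point cloud with a simple PCA; the sketch below illustrates only those baselines, not the authors' proposed contour/skeleton method.

```python
# Sketch of the baseline strategies: centroid grasp point and major-axis
# grasp direction, computed by PCA over a segmented object point cloud.
import numpy as np

def centroid_and_major_axis(points):
    """points: (N, 3) array of object points from the depth sensor."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The eigenvector of the covariance matrix with the largest eigenvalue
    # gives the object's major axis.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    major_axis = eigvecs[:, np.argmax(eigvals)]
    return centroid, major_axis

# Example: a synthetic elongated object.
pts = np.random.randn(500, 3) * np.array([0.10, 0.02, 0.02])
c, axis = centroid_and_major_axis(pts)
```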
|
|
15:15-15:30, Paper TuCT4.2 | |
A Novel Image-Based Path Planning Algorithm for Eye-In-Hand Visual Servoing of a Redundant Manipulator in a Human Centered Environment |
Raina, Deepak (TCS Robotics Innovation Lab), P, Mithun (International Institute of Information Technology Hyderabad), Shah, Suril Vijaykumar (Indian Institute of Technology Jodhpur), Swagat, Kumar (Tata Consultancy Services) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper presents a novel image-based path-planning and execution framework for vision-based control of a robot in a human-centered environment. The proposed method applies Rapidly-exploring Random Tree (RRT) exploration to perform Image-Based Visual Servoing (IBVS) while satisfying multiple task constraints by exploiting robot redundancy. The methodology incorporates a dataset of images of the robot's workspace for path planning and designs a controller based on the visual servoing framework. The method is generic enough to include constraints such as Field-of-View (FoV) limits, joint limits, obstacles, various singularities, and occlusions in the planning stage itself using the task-function approach, thereby avoiding them during execution. The use of path planning eliminates many of the inherent limitations of IBVS with an eye-in-hand configuration and makes visual servoing practical for dynamic and complex environments. Several experiments have been performed on a UR5 robotic manipulator to demonstrate that this is an effective and robust way to guide a robot in such environments.
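The combination of sampling-based exploration with servoing constraints might look roughly like the skeleton below, where the constraint check (FoV, joint limits, occlusion, singularities) is a placeholder; this is an assumption about the overall structure, not the paper's planner.

```python
# Skeleton of RRT-style planning over joint configurations with task-constraint
# checks folded into node validity (the checks themselves are placeholders).
import numpy as np

class Node:
    def __init__(self, q, parent=None):
        self.q = np.asarray(q, dtype=float)   # joint configuration
        self.parent = parent

def satisfies_constraints(q):
    # Placeholder for FoV limits, joint limits, occlusion and singularity tests.
    return bool(np.all(np.abs(q) < np.pi))

def rrt(q_start, q_goal, steer=0.1, iters=2000, goal_tol=0.2):
    tree = [Node(q_start)]
    for _ in range(iters):
        q_rand = np.random.uniform(-np.pi, np.pi, size=len(q_start))
        nearest = min(tree, key=lambda n: np.linalg.norm(n.q - q_rand))
        direction = q_rand - nearest.q
        q_new = nearest.q + steer * direction / (np.linalg.norm(direction) + 1e-9)
        if satisfies_constraints(q_new):
            node = Node(q_new, parent=nearest)
            tree.append(node)
            if np.linalg.norm(q_new - np.asarray(q_goal)) < goal_tol:
                return node  # walk .parent links to recover the path
    return None
```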
|
|
15:30-15:45, Paper TuCT4.3 | |
A Novel Geometry-Based Algorithm for Robust Grasping in Extreme Clutter Environment |
Kundu, Olyvia (TCS Innovation Labs), Swagat, Kumar (Tata Consultancy Services) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, HRI and Collaboration in Manufacturing Environments, Degrees of Autonomy and Teleoperation
Abstract: This paper looks into the problem of grasping unknown objects in a cluttered environment using 3D point cloud data obtained from a range sensor or an RGBD sensor. The objective is to identify graspable regions and detect suitable grasp poses from a single-view, possibly partial, 3D point cloud without any a priori knowledge of the object geometry. The problem is solved in two steps: first, identifying and segmenting the various object surfaces and, second, searching for suitable grasping handles on these surfaces by applying the geometric constraints of the physical gripper. The first step uses a modified version of the region growing algorithm with a pair of thresholds for the smoothness constraint on local surface normals to find natural boundaries of object surfaces. In this process, a novel concept of edge points is introduced that allows us to segment between different surfaces of the same object. The second step converts a 6D pose detection problem into a 1D linear search problem by projecting the 3D cloud points onto the principal axes of the surface segment obtained in the first step. The graspable handles are then localized by applying the physical constraints of the gripper. The resulting method allows us to grasp all kinds of objects, including rectangular or box-type objects with flat surfaces, which are otherwise considered difficult in the grasping literature. The proposed method is simple, can be implemented in real time, and does not require any off-line training phase for computing these affordances. The improvements achieved are demonstrated through comparison with another state-of-the-art grasping algorithm on various publicly available datasets. We also contribute a new grasping dataset for extreme clutter situations.
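A toy sketch of region growing with a pair of smoothness thresholds on local surface normals is given below; the threshold values, neighbourhood definition and the exact edge-point test are assumptions for illustration, not the paper's.

```python
# Toy region-growing sketch: grow a surface segment while the angle between
# neighbouring normals stays small; points with an intermediate angle are
# treated as candidate "edge points" separating surfaces of the same object.
from collections import deque

import numpy as np

def region_grow(normals, neighbors, seed, t_low_deg=8.0, t_high_deg=25.0):
    """normals: (N,3) unit normals; neighbors: list of neighbour-index lists."""
    cos_low = np.cos(np.deg2rad(t_low_deg))    # tight threshold: same surface
    cos_high = np.cos(np.deg2rad(t_high_deg))  # loose threshold: edge point
    region, edge_points = {seed}, set()
    queue = deque([seed])
    while queue:
        i = queue.popleft()
        for j in neighbors[i]:
            if j in region or j in edge_points:
                continue
            c = float(np.dot(normals[i], normals[j]))  # cosine of normal angle
            if c >= cos_low:          # smooth enough: same surface
                region.add(j)
                queue.append(j)
            elif c >= cos_high:       # intermediate: mark as edge point
                edge_points.add(j)
    return region, edge_points
```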
|
|
15:45-16:00, Paper TuCT4.4 | |
Fatigue Estimation Using Facial Expression Features and Remote-PPG Signal |
Hasegawa, Masaki (Toyohashi University of Technology), Hayashi, Kotaro (Toyohashi University of Technology), Miura, Jun (Toyohashi University of Technology) |
Keywords: Detecting and Understanding Human Activity, Assistive Robotics, Applications of Social Robots
Abstract: Research and development of lifestyle support robots for daily life is currently being actively conducted, and healthcare is one such robot function. In this research, we develop a fatigue estimation system using a camera that can easily be mounted on robots. Measurements taken in a real environment must account for noise caused by changes in lighting and the subject's movement, so the fatigue estimation system is based on a robust feature-extraction method. As an indicator of fatigue, the LF/HF ratio is calculated from the power spectrum of the RR interval in the electrocardiogram or the blood volume pulse (BVP). The BVP can be detected at the fingertip using photoplethysmography (PPG); in this study, we use a contactless variant, remote PPG (rPPG), detected from luminance changes in the face image. Some studies show that facial expression features extracted from facial video are also useful for fatigue estimation, but the dimension reduction used in past methods (LLE) discarded information contained in the high-dimensional features. We therefore also developed a fatigue estimation method using such features from a camera for healthcare robots. It uses facial landmark points, the line-of-sight vector, and the size of ellipses fitted to the eye and mouth landmark points; in other words, the proposed method simply uses time-varying shape information of the face, such as eye size or gaze direction. We verified the performance of the proposed features by fatigue-state classification using a Support Vector Machine (SVM).
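The LF/HF indicator mentioned above is a standard heart-rate-variability measure; a generic way to compute it from an inter-beat-interval series (for example, derived from rPPG peaks) is sketched below. This is not the authors' exact pipeline; the resampling rate and band limits follow common HRV practice (LF 0.04-0.15 Hz, HF 0.15-0.4 Hz).

```python
# Generic LF/HF computation from inter-beat intervals (in seconds).
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(ibi_s, fs_resample=4.0):
    t = np.cumsum(ibi_s)                              # beat times
    # Resample the irregular IBI series to an evenly spaced signal.
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)
    ibi_even = interp1d(t, ibi_s, kind="cubic")(t_even)
    f, pxx = welch(ibi_even - ibi_even.mean(), fs=fs_resample, nperseg=256)
    df = f[1] - f[0]
    lf = float(np.sum(pxx[(f >= 0.04) & (f < 0.15)]) * df)  # low-frequency power
    hf = float(np.sum(pxx[(f >= 0.15) & (f < 0.40)]) * df)  # high-frequency power
    return lf / hf
```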
|
|
16:00-16:15, Paper TuCT4.5 | |
Model & Feature Agnostic Eye-In-Hand Visual Servoing Using Deep Reinforcement Learning with Prioritized Experience Replay |
Singh, Prerna (Tata Consultancy Services), Singh, Virender (TCS), Dutta, Samrat (TCS Research and Innovation), Swagat, Kumar (Tata Consultancy Services) |
Keywords: Machine Learning and Adaptation, Computational Architectures, Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper presents a feature-agnostic and model-free visual servoing (VS) technique using deep reinforcement learning (DRL) which exploits two new architectures of experience replay buffer in deep deterministic policy gradient (DDPG). The proposed architectures are significantly faster and converge in a small number of steps. We use the proposed method to learn end-to-end VS with an eye-in-hand configuration. In traditional DDPG, the experience replay memory is randomly sampled for training the actor-critic network. This results in a loss of useful experiences when the buffer contains very few successful examples. We solve this problem by proposing two new replay buffer architectures: (a) min-heap DDPG (mH-DDPG) and (b) dual replay buffer DDPG (dR-DDPG). The former uses a min-heap data structure to implement the replay buffer, whereas the latter uses two buffers to separate “good” examples from the “bad” examples. The training data for the actor-critic network is created as a weighted combination of the two buffers. The proposed algorithms are validated in simulation with the UR5 robotic manipulator model. It is observed that as the number of good experiences in the training data increases, the convergence time decreases. We find 27.25% and 43.25% improvements in the rate of convergence by mH-DDPG and dR-DDPG respectively over state-of-the-art DDPG.
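A minimal sketch of the dual-buffer idea, separating successful from unsuccessful transitions and sampling a weighted mix, is shown below; the buffer capacity, the success criterion and the mixing weight are assumptions, not the paper's values, and the DDPG actor-critic update itself is omitted.

```python
# Sketch of a dual replay buffer: "good" (e.g. goal-reaching) transitions and
# "bad" transitions are stored separately; minibatches mix both pools.
import random
from collections import deque

class DualReplayBuffer:
    def __init__(self, capacity=100_000, good_fraction=0.5):
        self.good = deque(maxlen=capacity)
        self.bad = deque(maxlen=capacity)
        self.good_fraction = good_fraction  # assumed mixing weight

    def add(self, transition, success):
        # transition = (state, action, reward, next_state, done)
        (self.good if success else self.bad).append(transition)

    def sample(self, batch_size):
        n_good = min(int(batch_size * self.good_fraction), len(self.good))
        n_bad = min(batch_size - n_good, len(self.bad))
        return (random.sample(list(self.good), n_good)
                + random.sample(list(self.bad), n_bad))
```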
|
|
16:15-16:30, Paper TuCT4.6 | |
Reasoning on Shared Visual Perspective to Improve Route Directions |
Waldhart, Jules (LAAS-CNRS), Clodic, Aurélie (Laas - Cnrs), Alami, Rachid (CNRS) |
Keywords: Applications of Social Robots, Cooperation and Collaboration in Human-Robot Teams, Robot Companions and Social Robots
Abstract: We claim that providing route directions is best dealt with as a joint task in which not only the robot, as direction provider, but also the human, as listener, must be modeled and taken into account for planning. Moreover, we claim that in some cases the robot should go with the human to reach a different perspective of the environment that allows the explanations to be more efficient. As a first step toward implementing such a system, we propose the SVP (Shared Visual Perspective) planner, which searches for the right placements of both the robot and the human to enable the efficient visual perspective sharing needed for providing route directions, and which chooses the best landmark when several are available. The shared perspective is chosen taking into account not only the visibility of the landmarks but, most importantly, the whole guiding task. We use the SVP planner to produce solutions to the guiding problem, showing the influence of the choice of perspective on the quality of the route-direction-providing task.
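One way to read the placement search is as scoring candidate robot and human positions by shared landmark visibility against the cost of getting there; the sketch below is an invented simplification of that idea, not the SVP planner itself, and the world model and cost function are placeholders.

```python
# Invented simplification: exhaustively score candidate human/robot placements
# by how many landmarks both can see, minus the cost of walking there.
import itertools

def visible(position, landmark, world):
    # Placeholder for a line-of-sight / occlusion test against the world model.
    return world.line_of_sight(position, landmark)

def best_shared_perspective(candidates, landmarks, world, start, walk_cost):
    best, best_score = None, float("-inf")
    for human_pos, robot_pos in itertools.product(candidates, repeat=2):
        shared = sum(
            1 for lm in landmarks
            if visible(human_pos, lm, world) and visible(robot_pos, lm, world))
        score = shared - walk_cost(start, human_pos)
        if score > best_score:
            best, best_score = (human_pos, robot_pos), score
    return best
```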
|
|
TuCS1 |
Room T5 |
Social Human Robot Interaction of Service Robots |
Special Session |
Chair: Ahn, Ho Seok | The University of Auckland, Auckland |
Co-Chair: Jang, Minsu | Electronics & Telecommunications Research Institute |
|
15:00-15:15, Paper TuCS1.1 | |
Human Interaction and Improving Knowledge through Collaborative Tour Guide Robots (I) |
Velentza, Anna Maria (University of Birmingham, University of Macedonia), Heinke, Dietmar (University of Birmingham), Wyatt, Jeremy (University of Birmingham) |
Keywords: Storytelling in HRI, Personalities for Robotic or Virtual Characters, Creating Human-Robot Relationships
Abstract: In the coming years, tour guide robots will be widely used in museums and exhibitions. Therefore, it is important to identify how these new museum guides can optimally interact with visitors. In this paper, we introduce the idea of two collaborative tour guide robots, inspired by evidence from cognitive studies that people remember more when they receive information from two different human speakers. Our collaborative tour guides were benchmarked against single robot guides. Our study first confirmed, through real-world experiments, previous proposals stating that the personality of the robot affects the human learning process; our results demonstrate that people remember significantly more information when they are guided by a cheerful robot than when their guide is a serious one. Another important outcome of our study is that visitors tend to like our collaborative robots more than any single robot we referenced, as demonstrated by the higher scores on the aesthetics-related questions. Hence our results suggest that a cheerful robot is more suitable for learning purposes, while two robots are more suitable for entertainment purposes.
|
|
15:15-15:30, Paper TuCS1.2 | |
Identity, Gender, and Age Recognition Convergence System for Robot Environments (I) |
Jang, Jaeyoon (ETRI) |
Keywords: Social Intelligence for Robots, Applications of Social Robots
Abstract: This paper proposes a new identity, gender, and age recognition convergence system for robot environments. In a robot environment, it is difficult to apply deep-learning-based methods because of various limitations. To overcome these limitations, we propose a shallow deep-learning fusion model that can compute identity, gender, and age at once, together with a technique for improving recognition performance. Using the convergence network, we can obtain three pieces of information from a single input through a single operation. In addition, we propose a 2D/3D augmentation method to generate virtual additional datasets from the training data. The proposed method has a smaller model size and faster computation time than existing methods and uses a very small number of parameters. Through the proposed method, we achieved 99.35%, 90.0%, and 60.9%/94.5% accuracy in identity recognition, gender recognition, and age recognition respectively. We did not exceed the state-of-the-art results in all experiments, but, compared to other studies, we obtained performance similar to previous work using fewer than 10% of the parameters, and in some experiments we also achieved state-of-the-art results.
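A shallow multi-task network of the kind described, with one shared trunk and three output heads so that a single forward pass yields all three predictions, could be sketched roughly as follows; the layer sizes, class counts and input resolution are assumptions, not the paper's architecture.

```python
# Hypothetical shallow multi-task CNN: one shared trunk, three heads that
# predict identity, gender and age bin from a single forward pass.
import torch
import torch.nn as nn

class ConvergenceNet(nn.Module):
    def __init__(self, n_identities=100, n_age_bins=8):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.identity_head = nn.Linear(32, n_identities)
        self.gender_head = nn.Linear(32, 2)
        self.age_head = nn.Linear(32, n_age_bins)

    def forward(self, x):
        feats = self.trunk(x)
        return self.identity_head(feats), self.gender_head(feats), self.age_head(feats)

# One input face crop yields all three predictions in a single operation.
logits_id, logits_gender, logits_age = ConvergenceNet()(torch.randn(1, 3, 112, 112))
```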
|
|
15:30-15:45, Paper TuCS1.3 | |
Hospital Receptionist Robot V2: Design for Enhancing Verbal Interaction with Social Skills (I) |
Ahn, Ho Seok (The University of Auckland, Auckland), Lim, Jong Yoon (University of Auckland), Ahn, Byeong-Kyu (Sungkyunkwan University), Johanson, Deborah (The University of Auckland), Hwang, Eui Jun (The University of Auckland), Lee, Min Ho (University of Auckland), Broadbent, Elizabeth (University of Auckland), MacDonald, Bruce (University of Auckland) |
Keywords: Applications of Social Robots, Assistive Robotics, Creating Human-Robot Relationships
Abstract: This paper presents a new version of a robot receptionist system for healthcare facility environments. Our HealthBots system consists of three subsystems: a receptionist robot, a nurse assistant robot, and a medical server. Our first version of the receptionist robot interacts with people at the hospital reception and gives instructions verbally, but it cannot understand what people say, so it uses a touch screen to obtain their responses. In this paper, we design a receptionist robot that recognizes human faces as well as speech, which enhances the robot's verbal interaction skills. In addition, we design a reaction generation engine to produce appropriate reactive motions and speech. Moreover, we study which social skills, such as friendliness and attention, are important for a hospital receptionist robot to enhance social interaction. We implemented perception, decision-making, and reaction modules in our HealthBots architecture and conducted two case studies to find the essential social skills for hospital receptionist robots.
|
|
15:45-16:00, Paper TuCS1.4 | |
Developing a Questionnaire to Evaluate Customers’ Perception in the Smart City Robotic Challenge (I) |
Wang, Lun (Sapienza University of Rome), Iocchi, Luca (Sapienza University of Roma), Marrella, Andrea (Sapienza University of Rome, Italy), Nardi, Daniele (Sapienza University of Rome) |
Keywords: Evaluation Methods and New Methodologies, Creating Human-Robot Relationships, Detecting and Understanding Human Activity
Abstract: In this paper, we present an approach to developing a new type of questionnaire for evaluating customers’ perceptions in the upcoming Smart CIty RObotic Challenge (SciRoc). The approach consists of two steps. First, it relies on interviewing experts on Human-Robot Interaction (HRI) to understand which robot behaviours can potentially affect users’ perceptions during an HRI task. Then, it leverages a user survey to filter out those robot behaviours that are not significantly relevant from the end-user perspective. We concretely enacted our approach on a specific scenario developed in the context of SciRoc, which instructs a robot to take an elevator in a shopping mall by asking the customers of the mall for support. The results of the survey allowed us to derive a final list of 17 behaviours to be captured in the questionnaire, which was then developed using a 5-point Likert scale.
|
|
16:00-16:15, Paper TuCS1.5 | |
TeachMe: Three-Phase Learning Framework for Robotic Motion Imitation Based on Interactive Teaching and Reinforcement Learning (I) |
Kim, Taewoo (University of Science and Technology), Lee, Joo-Haeng (ETRI) |
Keywords: Social Learning and Skill Acquisition Via Teaching and Imitation, Applications of Social Robots
Abstract: Motion imitation is a fundamental communication skill for a robot, especially as a form of nonverbal interaction with humans. Due to differences in kinematic configuration between human and robot, however, it is still challenging to find a proper mapping between the two pose domains. Moreover, technical limitations in extracting 3D motion details, such as the wrist joint, from human motion videos make motion retargeting more difficult. Explicit mapping between different motion domains can be very inefficient. To solve these problems, we propose a three-phase reinforcement learning scheme that enables a NAO robot to learn motions from human pose skeletons extracted from video inputs. Our learning scheme consists of three phases: (i) phase one for learning preparation, (ii) phase two for simulation-based reinforcement learning, and (iii) phase three for human-in-the-loop reinforcement learning. In phase one, embeddings of human skeletons and robot motions are learnt by an AutoEncoder. In phase two, the NAO robot learns a rough imitation skill using reinforcement learning that translates the learned embeddings. In the last phase, it learns motion details that were not considered in the previous phases by interactively setting rewards through direct teaching over the policy of the previous phase. Notably, a relatively small number of interactive inputs is required for the motion details in phase three, compared with the large volume of training data needed for overall imitation in phase two. Experimental results show that the proposed method efficiently improves imitation skills for hand-waving and salute motions from NTU-DB.
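A very rough sketch of the phase-two idea, rewarding robot poses whose embedding lies close to the human pose embedding, is given below under the assumption that the phase-one autoencoders are already trained; the encoder interfaces and the distance-based reward are invented for illustration.

```python
# Invented sketch of a phase-two style imitation reward: encode the human
# skeleton and the robot pose with the phase-one encoders and reward small
# distances in the shared embedding space.
import numpy as np

def imitation_reward(human_skeleton, robot_pose, human_encoder, robot_encoder):
    """Both encoders map their input to the shared embedding space (phase one)."""
    z_human = np.asarray(human_encoder(human_skeleton))
    z_robot = np.asarray(robot_encoder(robot_pose))
    return -float(np.linalg.norm(z_human - z_robot))
```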
|
|
16:15-16:30, Paper TuCS1.6 | |
Lindsey the Tour Guide Robot - Usage Patterns in a Museum Long-Term Deployment (I) |
Del Duchetto, Francesco (University of Lincoln), Baxter, Paul Edward (University of Lincoln), Hanheide, Marc (University of Lincoln) |
Keywords: Long-term Experience and Longitudinal HRI Studies, Applications of Social Robots, Robots in art and entertainment
Abstract: The long-term deployment of autonomous robots co-located with humans in real-world scenarios remains a challenging problem. In this paper, we present the "Lindsey" tour guide robot system, in which we attempt to increase the social capability of current state-of-the-art robotic technologies. The robot is currently deployed at a museum displaying local archaeology, where it provides guided tours and information to visitors. The robot operates autonomously daily, navigating around the museum and engaging with the public, with on-site assistance from roboticists only in cases of hardware/software malfunctions. In a deployment lasting seven months so far, it has travelled nearly 300km and has delivered more than 2300 guided tours. First, we describe the robot framework and the management interfaces implemented. We then analyse the data collected to date with the goal of understanding and modelling the visitors' behaviour in terms of their engagement with the technology. These data suggest that while short-term engagement is readily gained, continued engagement with the robot tour guide is likely to require more refined and robust socially interactive behaviours. The deployed system presents us with an opportunity to empirically address these issues.
|
| |