Last updated on October 12, 2019. This conference program is tentative and subject to change.
Technical Program for Wednesday October 16, 2019

WeAT1
Room T8
Machine Learning and Adaptation
Regular Session
Chair: Gupta, Kamal | Simon Fraser University
Co-Chair: Busch, Baptiste | EPFL

10:30-10:45, Paper WeAT1.1
HiFI: A Hierarchical Framework for Incremental Learning Using Deep Feature Representation
Raj, Ankita (IIT Delhi), Majumder, Anima (Tata Consultancy Services), Swagat, Kumar (Tata Consultancy Services)
Keywords: Machine Learning and Adaptation
Abstract: The presented work focuses on automatic recognition of object classes while ensuring near real-time training required for recognizing a new object not seen previously. This is achieved by proposing a two-stage hierarchical deep learning framework which first learns object categories using a Nearest Class Mean (NCM) classifier applied directly to CNN features and then, uses a two-layer artificial neural network to learn the object labels within each category. In order to recognize a new object not seen earlier, the category is identified first and then the second stage neural network is incrementally trained with the features of the new object without forgetting previously learnt labels. The proposed hierarchical framework is shown to provide comparable recognition accuracy with significant reduction in overall computational time in recognizing new objects compared to methods that use end-to-end re-training. The efficacy of the approach is demonstrated through comparison with existing state-of-the-art methods on the publicly available CORe50 dataset.
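The first stage of the framework described above can be sketched in a few lines. This is a minimal illustration of a Nearest Class Mean classifier over feature vectors, with toy 2-D features standing in for the paper's CNN embeddings; the class name `NCMClassifier` and the running-sum update shown here are illustrative assumptions, not the authors' code.

```python
class NCMClassifier:
    """Minimal Nearest Class Mean classifier with incremental updates."""

    def __init__(self):
        self.sums = {}    # per-class running sum of feature vectors
        self.counts = {}  # per-class sample counts

    def add_sample(self, label, features):
        """Incrementally fold one feature vector into the class statistics."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
        self.counts[label] += 1

    def predict(self, features):
        """Return the label whose class mean is nearest (squared Euclidean)."""
        best, best_d = None, float("inf")
        for label, s in self.sums.items():
            mean = [v / self.counts[label] for v in s]
            d = sum((m - f) ** 2 for m, f in zip(mean, features))
            if d < best_d:
                best, best_d = label, d
        return best
```

Because each class mean is maintained as a running sum, adding a new object touches only one class, which is what makes near real-time incremental training plausible.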

10:45-11:00, Paper WeAT1.2
Reinforcement Learning Motion Planning for an EOG-Centered Robot Assisted Navigation in a Virtual Environment
Garrote, Luís Carlos (Institute of Systems and Robotics), Perdiz, João (University of Coimbra), Pires, Gabriel (University of Coimbra), Nunes, Urbano J. (Instituto De Sistemas E Robotica)
Keywords: Machine Learning and Adaptation, Motion Planning and Navigation in Human-Centered Environments, Assistive Robotics
Abstract: This paper presents a new collaborative approach to robot motion planning for an assistive robotic platform that takes into account the intentions of the user, provided through electrooculographic (EOG) signals, as well as obstacles surrounding the platform. In order to increase human confidence in the operation of robotic platforms with some degree of navigational autonomy, the intent of the user must be included in the decision process. In our system, the human-robot interface works through ocular movements (saccades and blinks), which are acquired as EOG signals and classified using a Convolutional Neural Network. A model-free Reinforcement Learning (RL) layer then provides commands to a virtual robotic platform; this layer is constantly updated with the user's intent, environment perception, and previous machine-based decisions. In order to prevent collisions, machine-based perception using the proposed RL motion planning approach assists the user by selecting suitable actions while learning from prior driving behaviors. The approach was validated by a set of tests that consisted of driving a robotic platform in an in-house 3D virtual model of our research center (ISR-UC). The experimental results show better performance of the proposed approach with RL than of the version without the RL-based motion planning component. The results are a promising step in the concept put forward for collaborative Human-Robot Interaction (HRI) and open a path for future research.

11:00-11:15, Paper WeAT1.3
Identifying Multiple Interaction Events from Tactile Data During Robot-Human Object Transfer
Davari, Mohammad-Javad (Simon Fraser University), Hegedus, Michael James (Simon Fraser University), Gupta, Kamal (Simon Fraser University), Mehrandezh, Mehran (University of Regina)
Keywords: Machine Learning and Adaptation, Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: During a robot-to-human object handover task, several intended or unintended events may occur with the object: it may be pulled, pushed, bumped, or simply held by the human receiver. We show that it is possible to differentiate between these events solely via tactile sensors. Training data from tactile sensors were recorded during interaction of human subjects with the object held by a 3-finger robotic hand. A Bag of Words approach was used to automatically extract effective features from the tactile data. A Support Vector Machine was used to distinguish between the four events with over 95 percent average accuracy.
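The feature step named in the abstract, Bag of Words over tactile frames ahead of an SVM, can be sketched as follows. The codebook, the toy 2-D frames, and the function name `bow_histogram` are illustrative assumptions; this is a generic BoW construction, not the authors' exact implementation.

```python
def bow_histogram(frames, codebook):
    """Quantize each frame to its nearest codeword and return a
    normalized histogram of codeword counts (the BoW feature vector)."""
    hist = [0] * len(codebook)
    for frame in frames:
        # nearest codeword by squared Euclidean distance
        best = min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(codebook[i], frame)))
        hist[best] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

The resulting fixed-length histogram is what a classifier such as an SVM would consume, regardless of how many frames the original tactile recording contained.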

11:15-11:30, Paper WeAT1.4
Accuracy Improvement of Facial Expression Recognition in Speech Acts: Confirmation of Effectiveness of Information Around a Mouth and GAN-Based Data Augmentation
Song, KyuSeob (KAIST (Korea Advanced Institute of Science and Technology)), Kwon, Dong-Soo (KAIST)
Keywords: Social Intelligence for Robots, Motivations and Emotions in Robotics, Applications of Social Robots
Abstract: With the growth of the social robot market, much research has been undertaken on facial expression recognition, an important function of a social robot. Facial expression recognition models have shown good performance on facial expression image datasets that express emotion without considering the effect of speaking. In reality, however, humans often express emotions while speaking and moving the muscles around the mouth, so the lack of consideration of speech leads to unsatisfactory emotion recognition results. In this paper, we investigated two points to be considered in training a facial expression recognition model. First, we examined whether the information around the mouth misleads the recognition model during speech acts, as it can in facial expression recognition for non-speech acts, or whether it carries valid information for facial expression recognition. Second, Generative Adversarial Network (GAN)-based data augmentation was performed to address the problem that the accuracy of the recognition model in speech acts is low because of the relatively small variance across subjects in the RML dataset. The results showed that the information around the mouth gave facial expression recognition in speech acts higher performance, unlike the case of facial expression recognition in non-speech acts. In addition, the GAN-based data augmentation alleviated the accuracy degradation caused by the low variance of the dataset.

11:30-11:45, Paper WeAT1.5
An Empirical Study of Person Re-Identification with Attributes
Shree, Vikram (Cornell University), Chao, Wei-Lun (Cornell University), Campbell, Mark (Cornell University)
Keywords: Machine Learning and Adaptation, Multimodal Interaction and Conversational Skills
Abstract: Person re-identification aims to identify a person from an image collection, given one image of that person as the query. There is, however, a plethora of real-life scenarios where we may not have a priori library of query images and therefore must rely on information from other modalities. In this paper, an attribute-based approach is proposed where the person of interest (POI) is described by a set of visual attributes, which are used to perform the search. We compare multiple algorithms and analyze how the quality of attributes impacts the performance. While prior work mostly relies on high precision attributes annotated by experts, we conduct a human-subject study and reveal that certain visual attributes could not be consistently described by human observers, making them less reliable in real applications. A key conclusion is that the performance achieved by non-expert attributes, instead of expert-annotated ones, is a more faithful indicator of the status quo of attribute-based approaches for person re-identification.

11:45-12:00, Paper WeAT1.6
Q-Learning Based Navigation of a Quadrotor Using Non-Singular Terminal Sliding Mode Control
Yogi, Subhash Chand (Indian Institute of Technology - Kanpur), Tripathi, Vibhu Kumar (Indian Institute of Technology, Kanpur), Kamath, Archit Krishna (Indian Institute of Technology, Kanpur), Behera, Laxmidhar (IIT Kanpur)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Assistive Robotics
Abstract: This paper demonstrates a hybrid methodology for quadrotor navigation and control in an environment with obstacles by combining a Q-learning strategy for navigation with a non-linear sliding mode control scheme for position and altitude control of the quadrotor. In an unknown environment, an optimal safe path is estimated using the Q-learning scheme by treating the environment as a 3D grid world. Furthermore, a non-singular terminal sliding mode control (NTSMC) is employed to navigate the quadrotor along the planned trajectories. The NTSMC employed for trajectory tracking ensures robustness to bounded disturbances as well as parametric uncertainties. In addition, it ensures finite-time convergence of the tracking error and avoids issues that arise due to singularities in the dynamics. The effectiveness of the proposed navigation and control scheme is validated using numerical simulations wherein a quadrotor is required to pass through a window.
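The navigation layer described above is standard tabular Q-learning over a grid world. The sketch below is a hedged illustration only: it uses a 1-D chain of cells instead of the paper's 3D grid, a hypothetical sparse reward of +1 for reaching the goal cell, and random episode starts to aid exploration; none of these details come from the paper.

```python
import random

def q_learn(n_cells, goal, episodes=800, alpha=0.5, gamma=0.9, eps=0.2):
    """Learn a greedy policy on a 1-D chain of cells with actions -1/+1."""
    actions = (-1, 1)
    q = {(s, a): 0.0 for s in range(n_cells) for a in actions}
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.randrange(n_cells)          # random start aids exploration
        for _ in range(4 * n_cells):
            if rng.random() < eps:
                a = rng.choice(actions)     # explore
            else:
                a = max(actions, key=lambda a: q[(s, a)])  # exploit
            s2 = min(max(s + a, 0), n_cells - 1)
            r = 1.0 if s2 == goal else 0.0  # hypothetical sparse reward
            # standard Q-learning temporal-difference update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions)
                                  - q[(s, a)])
            s = s2
            if s == goal:
                break
    # extract the greedy policy: best action per state
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_cells)}
```

On a chain with the goal at one end, the learned greedy policy moves toward the goal from every cell; in the paper's 3D setting the action set and collision-aware rewards would differ, but the update rule is the same.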

WeAT2
Room T2
Imitation Learning
Regular Session
Chair: Wachs, Juan | Purdue University
Co-Chair: Di Nuovo, Alessandro | Sheffield Hallam University

10:30-10:45, Paper WeAT2.1
SMAK-Net: Self Supervised Multi-Level Spatial Attention Network for Knowledge Representation towards Imitation Learning
Ramachandruni, Kartik (TCS Innovation Labs), Vankadari, Madhu Babu (TCS), Majumder, Anima (Tata Consultancy Services), Dutta, Samrat (TCS Research and Innovation), Swagat, Kumar (Tata Consultancy Services)
Keywords: Programming by Demonstration, Social Learning and Skill Acquisition Via Teaching and Imitation, Machine Learning and Adaptation
Abstract: In this paper, we propose an end-to-end self-supervised feature representation network for imitation learning. The proposed network incorporates a novel multi-level spatial attention module to amplify relevant and suppress irrelevant information while learning task-specific feature embeddings. The multi-level attention module takes multiple intermediate feature maps of the input image at different stages of the CNN pipeline and produces a 2D matrix of compatibility scores for each feature map with respect to the given task. The weighted combination of the feature vectors with the scores estimated by the attention modules leads to a more task-specific feature representation of the input images. We thus name the proposed network SMAK-Net, abbreviated from Self-supervised Multi-level spatial Attention Knowledge representation Network. We trained the network using a metric learning loss which aims to decrease the distance between the feature representations of simultaneous frames from multiple viewpoints and increase the distance between neighboring frames of the same viewpoint. The experiments are performed on the publicly available Multi-View Pouring dataset [1]. The outputs of the attention module are shown to highlight the task-specific objects while suppressing the rest of the background in the input image. The proposed method is validated by qualitative and quantitative comparisons with the state-of-the-art technique TCN [1], along with extensive ablation studies. The method is shown to significantly outperform TCN by 6.5% in the temporal alignment error metric while reducing the total number of training steps by 155K.

10:45-11:00, Paper WeAT2.2
Extending Policy from One-Shot Learning through Coaching
Balakuntala Srinivasa Murthy, Mythra Varun (Purdue University), Venkatesh, L.N Vishnunandan (Purdue University), Padmakumar Bindu, Jyothsna (Purdue University), Voyles, Richard (Purdue University), Wachs, Juan (Purdue University)
Keywords: Programming by Demonstration, Machine Learning and Adaptation, Social Learning and Skill Acquisition Via Teaching and Imitation
Abstract: Humans generally teach their fellow collaborators to perform tasks through a small number of demonstrations, often followed by episodes of coaching that tune and refine the execution during practice. Adopting a similar framework for teaching robots through demonstrations makes teaching tasks highly intuitive, and imitating the refinement of complex tasks through coaching improves efficacy. Unlike traditional Learning from Demonstration (LfD) approaches, which rely on multiple demonstrations to train a task, we present a novel one-shot learning from demonstration approach, augmented by coaching, to transfer the task from a task expert to the robot. The demonstration is automatically segmented into a sequence of a priori skills (the task policy) parametrized to match task goals. During practice, the robotic skills self-evaluate their performance and refine the task policy to locally optimize cumulative performance. Human coaching then further refines the task policy to explore and globally optimize the net performance. Both the self-evaluation and coaching are implemented using reinforcement learning (RL) methods. The proposed approach is evaluated using the task of scooping and unscooping granular media. The self-evaluator of the scooping skill uses the real-time force signature and resistive force theory to minimize scooping resistance, similar to how humans scoop. Coaching feedback focuses modifications on sub-domains of the task policy while RL adjusts the parameters. Thus, the proposed method provides a framework for learning tasks from one demonstration and generalizing them using human feedback through coaching.

11:00-11:15, Paper WeAT2.3
DeepMoTIon: Learning to Navigate Like Humans
Hamandi, Mahmoud (INSA Toulouse), D'Arcy, Michael (Northwestern University), Fazli, Pooyan (San Francisco State University)
Keywords: Motion Planning and Navigation in Human-Centered Environments, Machine Learning and Adaptation
Abstract: We present a novel human-aware navigation approach, where the robot learns to mimic humans to navigate safely in crowds. The presented model, referred to as DeepMoTIon, is trained with pedestrian surveillance data to predict human velocity in the environment. The robot processes LiDAR scans via the trained network to navigate to the target location. We conduct extensive experiments to assess the components of our network and prove their necessity to imitate humans. Our experiments show that DeepMoTIon outperforms all the benchmarks in terms of human imitation, achieving a 24% reduction in time-series-based path deviation over the next best approach. In addition, while many other approaches often failed to reach the target, our method reached the target in 100% of the test cases while complying with social norms and ensuring human safety.

11:15-11:30, Paper WeAT2.4
Learning Active Spine Behaviors for Dynamic and Efficient Locomotion in Quadruped Robots
Bhattacharya, Shounak (Indian Institute of Science), Singla, Abhik (Indian Institute of Science (IISc), Bangalore), Singh, Abhimanyu (BITS Pilani K K Birla Goa Campus), Dholakiya, Dhaivat (Indian Institute of Science), Bhatnagar, Shalabh (Indian Institute of Science, Bangalore), Amrutur, Bharadwaj (Indian Institute of Science), Ghosal, Ashitava (Indian Institute of Science (IISc)), Nadubettu Yadukumar, Shishir (Indian Institute of Science)
Keywords: Machine Learning and Adaptation, Innovative Robot Designs
Abstract: In this work, we provide a simulation framework to perform systematic studies on the effects of spinal-joint compliance and actuation on the bounding performance of Stoch 2, a 16-DOF quadruped robot with an articulated spine. Fast quadrupedal locomotion with an active spine is an extremely hard problem involving complex coordination between the various degrees of freedom, and past attempts at addressing it have therefore not seen much success. Deep reinforcement learning is a promising approach after its recent success on a variety of robot platforms, and the goal of this paper is to use it to realize the aforementioned behaviors. With this learning framework, the robot reached a bounding speed of 2.1 m/s with a maximum Froude number of 2. Simulation results also show that the use of an active spine indeed increased the stride length, improved the cost of transport, and reduced the natural frequency to more realistic values.

11:30-11:45, Paper WeAT2.5
Trajectory Based Deep Policy Search for Quadrupedal Walking
Nadubettu Yadukumar, Shishir (Indian Institute of Science), Joglekar, Ashish (Robert Bosch Center for Cyber Physical Systems, Indian Institute), Shetty, Suhan (IISc), Dholakiya, Dhaivat (Indian Institute of Science), Singh, Abhimanyu (BITS Pilani K K Birla Goa Campus), Sagi, Aditya Varma (Indian Institute of Science), Bhattacharya, Shounak (Indian Institute of Science), Singla, Abhik (Indian Institute of Science (IISc), Bangalore), Bhatnagar, Shalabh (Indian Institute of Science, Bangalore), Ghosal, Ashitava (Indian Institute of Science (IISc)), Amrutur, Bharadwaj (Indian Institute of Science)
Keywords: Machine Learning and Adaptation
Abstract: In this paper, we explore a specific form of deep reinforcement learning (D-RL) for quadrupedal walking: trajectory-based policy search via deep policy networks. Existing approaches determine optimal policies for each time step, whereas we propose to determine an optimal policy for each walking step. We justify our approach based on the fact that animals, including humans, use "low"-dimensional trajectories at the joint level to realize walking. We construct these trajectories using Bezier polynomials, with the coefficients determined by a parameterized policy. In order to maintain smoothness of the trajectories during step transitions, hybrid invariance conditions are also applied. The action is computed at the beginning of every step, and a linear PD control law is applied to track the trajectory at the individual joints. After each step, a reward is computed, which is then used to update the policy parameters for the next step. After learning an optimal policy, i.e., an optimal walking gait for each step, we successfully deploy it on a custom-built quadruped robot, Stoch 2, thereby validating our approach.
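The two ingredients named above, a Bezier polynomial trajectory whose control points a policy would output once per walking step, and a linear PD tracking law, can be sketched in a few lines. The function names and the PD gains are illustrative assumptions, not the paper's parameters.

```python
def bezier(ctrl, t):
    """Evaluate a 1-D Bezier curve with control points `ctrl` at t in
    [0, 1] using de Casteljau's algorithm (numerically stable)."""
    pts = list(ctrl)
    while len(pts) > 1:
        # repeated linear interpolation between adjacent points
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def pd_torque(q, dq, q_des, dq_des, kp=50.0, kd=2.0):
    """Linear PD tracking law at a single joint (gains illustrative)."""
    return kp * (q_des - q) + kd * (dq_des - dq)
```

A policy that emits the control-point vector `ctrl` once per step fixes the whole joint trajectory for that step; the PD law then tracks `bezier(ctrl, t)` as `t` sweeps the step phase, which is the division of labor the abstract describes.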

11:45-12:00, Paper WeAT2.6
Natural Language Interface for Programming Sensory-Enabled Scenarios for Human-Robot Interaction
Buchina, Nina (Eindhoven University of Technology), Sterkenburg, Paula (Free University of Amsterdam), Lourens, Tino (TiViPE), Barakova, Emilia I. (Eindhoven University of Technology)
Keywords: Novel Interfaces and Interaction Modalities, Applications of Social Robots, Evaluation Methods and New Methodologies
Abstract: Previous research has shown that robot-mediated therapy may be effective for different mental or physical conditions, but this effectiveness strongly depends on how well the therapy can be translated into robot training. The goal of this study is to enable end-users, such as occupational and rehabilitation therapists, to create therapy-specific and sensory-enabled scenarios for a robotic assistant in an unstructured environment without the help of a technical professional. The Cognitive Dimensions of Notations (CDN) framework was applied to assess the usability of the programming interface, and the cyclomatic complexity method was used to evaluate the complexity of the created robot scenarios. Eleven therapists with a mean age of 39 years, working in care for persons with visual-and-intellectual disabilities, participated. The results show good usability of the interface, as measured via the CDN framework, and the cyclomatic complexity analysis showed an increasing complexity of the scenarios created by the occupational and rehabilitation therapists. The participants did not request very specifically defined behaviors for the robot, and therefore descriptions in natural text can be successfully used for robot programming.

WeAT3
Room T3
Motion Planning, Navigation, and Control in Human Centered Environment
Regular Session
Chair: Behera, Laxmidhar | IIT Kanpur
Co-Chair: Krishna, Madhava | IIIT Hyderabad

10:30-10:45, Paper WeAT3.1
PIVO: Probabilistic Inverse Velocity Obstacle for Navigation under Uncertainty
Poonganam, SriSai Naga Jyotish (IIIT Hyderabad), Goel, Yash (IIIT Hyderabad), Avula, Venkata Seetharama Sai Bhargav Kumar (International Institute of Information Technology, Hyderabad), Krishna, Madhava (IIIT Hyderabad)
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: In this paper, we present an algorithmic framework which computes collision-free velocities for a robot in a human-shared, dynamic, and uncertain environment. We extend the concept of Inverse Velocity Obstacle (IVO) to a probabilistic variant to handle the state estimation and motion uncertainties that arise due to the other participants in the environment. These uncertainties are modeled as non-parametric probability distributions. In our PIVO: Probabilistic Inverse Velocity Obstacle, we pose collision-free navigation as an optimization problem by reformulating the velocity conditions of IVO as chance constraints that take the uncertainty into account. The space of collision-free velocities that results from the presented optimization scheme is associated with a confidence measure as a specified probability. We demonstrate the efficacy of PIVO through numerical simulations, demonstrating its ability to generate safe trajectories in highly uncertain environments.
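The chance-constraint idea can be illustrated with a sampling check: draw samples from the (non-parametric) distribution over an obstacle's state, propagate each sample forward, and accept a candidate robot velocity only if the fraction of collision-free futures meets the confidence level. The function name, the 2-D setting, and the constant-velocity propagation are simplifying assumptions for illustration; PIVO solves an optimization over such constraints rather than testing candidates one by one.

```python
def satisfies_chance_constraint(candidate_v, obstacle_samples,
                                horizon, safe_dist, confidence=0.9):
    """Check a candidate robot velocity against sampled obstacle states.

    obstacle_samples: list of ((ox, oy), (ovx, ovy)) pairs drawn from the
    estimated distribution over obstacle position and velocity; the robot
    starts at the origin and both agents move at constant velocity.
    """
    ok = 0
    for (ox, oy), (ovx, ovy) in obstacle_samples:
        # obstacle position relative to the robot after `horizon` seconds
        rx = ox + (ovx - candidate_v[0]) * horizon
        ry = oy + (ovy - candidate_v[1]) * horizon
        if (rx ** 2 + ry ** 2) ** 0.5 >= safe_dist:
            ok += 1
    return ok / len(obstacle_samples) >= confidence
```

With a stationary obstacle directly ahead, a velocity driving straight at it fails the constraint while a sideways velocity passes, which is the qualitative behavior the chance-constrained formulation enforces.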

10:45-11:00, Paper WeAT3.2
Trajectory Advancement During Human-Robot Collaboration
Tirupachuri, Yeshasvi (Italian Institute of Technology), Nava, Gabriele (Istituto Italiano Di Tecnologia), Rapetti, Lorenzo (IIT), Latella, Claudia (Istituto Italiano Di Tecnologia), Pucci, Daniele (Italian Institute of Technology)
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Motion Planning and Navigation in Human-Centered Environments
Abstract: As technology advances, the barriers between the co-existence of humans and robots are slowly coming down. The prominence of physical interactions for collaboration and cooperation between humans and robots will be an undeniable fact. Rather than exhibiting simple reactive behaviors to human interactions, it is desirable to endow robots with augmented capabilities of exploiting human interactions for successful task completion. Towards that goal, in this paper, we propose a trajectory advancement approach in which we mathematically derive the conditions that facilitate advancing along a reference trajectory by leveraging assistance from helpful interaction wrench present during human-robot collaboration. We validate our approach through experiments conducted with the iCub humanoid robot both in simulation and on the real robot.

11:00-11:15, Paper WeAT3.3
Vision-Based Fast-Terminal Sliding Mode Super Twisting Controller for Autonomous Landing of a Quadrotor on a Static Platform
Kamath, Archit Krishna (Indian Institute of Technology, Kanpur), Tripathi, Vibhu Kumar (Indian Institute of Technology, Kanpur), Yogi, Subhash Chand (Indian Institute of Technology - Kanpur), Behera, Laxmidhar (IIT Kanpur)
Keywords: Assistive Robotics
Abstract: This paper proposes a vision-based sliding mode control technique for autonomous landing of a quadrotor on a static platform. The proposed vision algorithm estimates the quadrotor's position relative to an ArUco marker placed on the platform using an on-board monocular camera. The relative position is provided as an input to a Fast-Terminal Sliding Mode Super Twisting Controller (FTSMSTC) which ensures finite-time convergence of the relative position between the landing-pad marker and the quadrotor. In addition, the proposed controller attenuates the chattering phenomenon and guarantees robustness to bounded external disturbances and modelling uncertainties. The proposed vision-based control scheme is implemented in numerical simulations and validated in real time on the DJI Matrice 100.

11:15-11:30, Paper WeAT3.4
Vision-Based Fractional Order Sliding Mode Control for Autonomous Vehicle Tracking by a Quadrotor UAV
Maurya, Heera Lal (Indian Institute of Technology - Kanpur), Kamath, Archit Krishna (Indian Institute of Technology, Kanpur), Behera, Laxmidhar (IIT Kanpur), Verma, Nishchal K. (Indian Institute of Technology, Kanpur)
Keywords: Assistive Robotics
Abstract: This paper proposes a vision-based sliding mode control technique for autonomous tracking of a moving vehicle by a quadrotor. The proposed vision algorithm estimates the quadrotor's position relative to the moving vehicle using an on-board monocular camera. The relative position is provided as an input to a Fractional Order Sliding Mode Controller (FOSMC) which ensures the convergence of the relative position between the moving vehicle and the quadrotor, thereby enabling it to track the vehicle effectively. In addition, the proposed controller guarantees robustness to bounded external disturbances and modelling uncertainties. The proposed vision-based control scheme is implemented in numerical simulations and validated in real time on the DJI Matrice 100. These validations help in gaining insight into the maximum allowable speed of the moving target for the quadrotor to successfully track it, which plays a vital role in surveillance operations and intruder chases.

11:30-11:45, Paper WeAT3.5
End-User Programming of Low and High-Level Actions for Robotic Task Planning
Liang, Ying Siu (Université Grenoble Alpes), Pellier, Damien (Laboratoire d'Informatique De Grenoble - CNRS), Fiorino, Humbert (University Grenoble Alpes - Laboratoire d'Informatique De Grenob), Pesty, Sylvie (University of Grenoble-Alps)
Keywords: Novel Interfaces and Interaction Modalities, User-centered Design of Robots, Programming by Demonstration
Abstract: Programming robots for general purpose applications is extremely challenging due to the great diversity of end-user tasks ranging from manufacturing environments to personal homes. Recent work has focused on enabling end-users to program robots using Programming by Demonstration. However, teaching robots new actions from scratch that can be reused for unseen tasks remains a difficult challenge and is generally left up to robotic experts. We propose iRoPro, an interactive Robot Programming framework that allows end-users to teach robots new actions from scratch and reuse them with a task planner. In this work we provide a system implementation on a two-armed Baxter robot that (i) allows simultaneous teaching of low- and high-level actions by demonstration, (ii) includes a user interface for action creation with condition inference and modification, and (iii) allows creating and solving previously unseen problems using a task planner for the robot to execute in real-time. We evaluate the generalisation power of the system on six benchmark tasks and show how taught actions can be easily reused for complex tasks. We further demonstrate the system's usability with a user study (N=21), where users completed eight tasks to teach the robot new actions and execute plans in real-time. The study demonstrates that users with any programming level and educational background can easily learn and use the system.

11:45-12:00, Paper WeAT3.6
Human Perception of Gait Styles on a Compass Walker in Variable Contexts Via Descriptive versus Emotive Labels
Lambert, Jacey (University of Illinois at Urbana-Champaign), Huzaifa, Umer (University of Illinois at Urbana-Champaign), Rizvi, Wali (University of Illinois at Urbana Champaign), LaViers, Amy (University of Illinois at Urbana-Champaign)
Keywords: Motivations and Emotions in Robotics, Creating Human-Robot Relationships, User-centered Design of Robots
Abstract: The behavior and aesthetics of robots can impact perception by human viewers, and prior work has shown that context influences this judgement. This paper presents an experiment to better understand what sort of gait label can best explain human estimate of an internal state based on external changes, despite effects of variable context on the perception of gait. The study analyzes how a user's perception of movement in a simple two degree-of-freedom mechanism changes through the use of varying environments via more emotive or more descriptive labels. Specifically, five bipedal gaits were overlaid onto illustrated backgrounds that were created to reflect various affective inclinations and given labels with and without emotive implications. Users were then asked to rate the accuracy of the descriptive or emotive labels of these videos, and the differences between their ratings were compared throughout the multiple backgrounds. It was found that while both sets of labels scored well, emotive labels were slightly preferred overall. However, although the addition of an environment positively affected the user perceptions in rating the suggested descriptive labels, it was more likely to negatively affect the ratings of emotive labels. The results of this analysis suggest that lay end-users prefer to make judgements about motion behavior in an emotive space but that descriptive labels may be more stable identifiers across various contexts for robot designers. These results highlight the emotional connection that humans make with motion and the role the context plays in helping to create this experience.

WeAT4
Room T4
Medical Robotics
Regular Session
Chair: Sgorbissa, Antonio | University of Genova
Co-Chair: Xie, Le | Shanghai Jiao Tong University

10:30-10:45, Paper WeAT4.1
Master-Slave Guidewire and Catheter Robotic System for Cardiovascular Intervention
Xiang, Yujia (Shanghai Jiao Tong University), Shen, Hao (Shanghai Jiao Tong University), Xie, Le (Shanghai Jiao Tong University), Wang, Hesheng (Shanghai Jiao Tong University)
Keywords: Degrees of Autonomy and Teleoperation, Robots in Education, Therapy and Rehabilitation, Assistive Robotics
Abstract: Cardiovascular disease remains a primary cause of morbidity globally, and percutaneous coronary intervention plays a crucial role in its treatment. The radiation exposure of surgeons during cardiovascular intervention can be avoided by master-slave surgical robots. This paper introduces a master-slave guidewire and catheter robotic system that protects surgeons from X-ray radiation to the greatest extent possible. The jitter of the master manipulators is mitigated by a Kalman filtering algorithm, and the use of two master manipulators helps retain the surgeon's traditional operating habits. A vascular model trial was conducted to validate that this interventional robotic system can carry out the alternating advancement and rotation of the interventional guidewire and catheter.

10:45-11:00, Paper WeAT4.2
A Brief Review of the Electronics, Control System Architecture, and Human Interface for Commercial Lower Limb Medical Exoskeletons Stabilized by Aid of Crutches
Tabti, Nahla (Université Paris-Sud), Kardofaki, Mohamad (UVSQ), Alfayad, Samer (LISV, BIA), Chitour, Yacine (University of Paris Sud), Ben Ouezdou, Fathi (University of Versailles St. Quentin), Dychus, Eric (Sandyc)
Keywords: Assistive Robotics, User-centered Design of Robots
Abstract: Research in the field of powered orthoses, or exoskeletons, has expanded tremendously over the past years. Lower limb exoskeletons are widely used in robotic rehabilitation and are showing benefits in patients' quality of life. Many engineering reviews have been published about these devices and have addressed general aspects. To the best of our knowledge, however, no review has discussed in detail the control of the most commonly used devices, particularly the algorithms used to define the functional state of the exoskeleton, such as walking, sit-to-stand, etc. In this contribution, the control hardware and software, as well as the integrated sensors used for feedback, are thoroughly analyzed. We also discuss the importance of user-specific state definition and customized control architecture. Although many prototypes are being developed nowadays, we chose to target medical lower limb exoskeletons that use crutches to keep balance and that are minimally actuated, as these are the most common systems now being commercialized and used worldwide. The outcome of such a review therefore offers practical insight into the mechatronic design, system architecture, and control of these devices.
|
|
11:00-11:15, Paper WeAT4.3 | |
Development of a Foldable Five-Finger Robotic Hand for Assisting Laparoscopic Surgery |
Anzai, Yuki (Yokohama National University), Sagara, Yuto (Yokohama National University), Kato, Ryu (Yokohama National University), Mukai, Masaya (Tokai University) |
Keywords: Medical and Surgical Applications, Assistive Robotics
Abstract: The purpose of this study is to develop a robotic hand that can be inserted through a small incision and can handle large organs in laparoscopic surgery. We determined the requirements for the proposed hand from a surgeon's motions in hand-assisted laparoscopic surgery (HALS) and identified four basic motions: "grasp", "pinch", "exclusion", and "spread". The proposed hand has the degrees of freedom necessary for performing these behaviors, five fingers as in a human hand, and a palm that can be folded like a bellows when the surgeon inserts the hand into the abdominal cavity. We evaluated the proposed robot hand in a performance test and confirmed that it can be inserted through a 20 mm incision and can grasp simulated organs.
|
|
11:15-11:30, Paper WeAT4.4 | |
Effects of Flexible Surgery Robot on Endoscopic Procedure: Preliminary Bench-Top User Test |
Kim, Joonhwan (Korea Advanced Institute of Science and Technology(KAIST)), Hwang, Minho (Korea Advanced Institute of Science and Technology (KAIST)), Lee, Dong-Ho (Korea Advanced Institute of Science and Technology), Kim, Hansoul (Korea Advanced Institute of Science and Technology), Ahn, Jeongdo (Korea Advanced Institute of Science and Technology), You, Jae Min (Korea Advanced Institute of Science and Technology), Baek, DongHoon (KAIST), Kwon, Dong-Soo (KAIST) |
Keywords: Medical and Surgical Applications, Robots in Education, Therapy and Rehabilitation, Novel Interfaces and Interaction Modalities
Abstract: Endoscopes are widely used for not only intraluminal diagnosis but also therapeutic procedures in the gastrointestinal area. However, conventional endoscopes present a few challenges such as nonintuitive manipulation, physical burden on the operator, and lack of dexterity. These challenges limit endoscope usage in complex surgical procedures. Moreover, endoscope operators undergo extensive and lengthy training to attain an adequate skill level. In this paper, we introduce a flexible surgery robot platform K-FLEX that facilitates teleoperation via an intuitive master interface and bimanual manipulation by means of two dexterous surgical robot arms. Its effects on endoscopic procedures, especially in terms of task performance, learning properties, and physical burden on the operator, are validated by conducting a user test. The experimental results demonstrate that the developed robotic assistant increases operation speed, especially for novices; simplifies the learning process; and reduces the workload on the operator compared to conventional endoscopes.
|
|
11:30-11:45, Paper WeAT4.5 | |
Towards Securing the Sclera against Patient Involuntary Head Movement in Robotic Retinal Surgery |
Ebrahimi, Ali (Johns Hopkins University), Urias, Muller (Wilmer Eye Institute), He, Changyan (Beihang University), Patel, Niravkumar (Johns Hopkins University), Taylor, Russell H. (The Johns Hopkins University), Gehlbach, Peter (Johns Hopkins Medical Institute), Iordachita, Ioan Iulian (Johns Hopkins University) |
Keywords: Medical and Surgical Applications, Assistive Robotics
Abstract: Retinal surgery involves manipulating very delicate tissues within the confined area of the eyeball. In such demanding procedures, involuntary patient head movement can abruptly raise tool-to-eyeball interaction forces, which would be detrimental to the eye. This study aims to implement different force control strategies and evaluate how they contribute to attaining sclera force safety in the presence of patient head drift. To simulate patient head movement, a piezoelectric-actuated linear stage is used to produce random motions in a single direction at random time intervals. With an eye phantom attached to the linear stage, an experienced eye surgeon is asked to manipulate the eye and repeat a mock surgical task both with and without the assistance of the Steady-Hand Eye Robot. In the freehand case, warning sounds were provided to the surgeon as auditory feedback to alert him to excessive sclera forces. For the robot-assisted experiments, two variants of an adaptive sclera force control and a virtual fixture method were deployed to see how well they maintain eye safety under head drift. The results indicate that the developed robot control strategies are able to compensate for head drift and keep the sclera forces at safe levels, performing as well as freehand operation.
|
|
11:45-12:00, Paper WeAT4.6 | |
Detecting Deception in HRI Using Minimally-Invasive and Noninvasive Techniques |
Iacob, David-Octavian (ENSTA-ParisTech), Tapus, Adriana (ENSTA-ParisTech) |
Keywords: Non-verbal Cues and Expressiveness, Robot Companions and Social Robots
Abstract: Our work focuses on detecting deception in Human-Robot Interactions (HRI) by using measurement techniques that are appropriate for such interactions. In our previous research, we obtained interesting results using thermal and RGB-D cameras. In this paper, we approach this aspect from a different angle and use a lab-designed armband to accurately measure the participants' heart rate and skin conductance. We also developed a deception card game scenario that entices human participants to lie either to a robot or a human game partner, allowing us to monitor and understand the correlations between human physiological manifestations and their trustworthiness. Our results show the existence of statistically significant correlations between the participants' deceptive behaviour and their heart rate, skin conductance, face position, and face orientation. These results allow us to improve robots' ability to detect deception in HRI.
|
|
WeAT5 |
Room T5 |
Robotics for Rehabilitation |
Special Session |
Chair: Vashista, Vineet | Indian Institute of Technology Gandhinagar |
Co-Chair: Fiorini, Laura | The BioRobotics Institute, Scuola Superiore Sant'Anna |
|
10:30-10:45, Paper WeAT5.1 | |
Preliminary Evaluation of a Closed-Loop Social Robot for Reading Comprehension Testing (I) |
Migovich, Miroslava (University of Tennessee), McCarthy, Jillian (University of Tennessee Health Science Center), Wade, Eric (University of Tennessee) |
Keywords: Robots in Education, Therapy and Rehabilitation, Linguistic Communication and Dialogue, Robot Companions and Social Robots
Abstract: Reading comprehension in the United States has not shown significant improvement since 2007 [4]. Studies in comprehension improvement lack evidence and ease of implementation in and outside of the classroom [5]. Our long-term study seeks to incorporate social robotics and reading comprehension activities to provide an option for in-home and in-school reading-focused interventions. In the current study, we present an initial validation of a closed-loop social robotic system for reading comprehension testing. Results suggest our robot-based system is capable of recording and interpreting human responses and can provide contingent feedback to answers to evidence-derived reading comprehension questions. The system demonstrated few errors and was found to be acceptable, falling within the 3rd quartile range when compared to other studies, according to the System Usability Scale (SUS) [4]. These results demonstrate the potential utility of the system with the target population; additional testing with age-matched participants is needed to verify the relationship between errors and SUS scores. Keywords—Social Robotics, Reading Comprehension, System Usability, Deaf or Hard of Hearing
|
|
10:45-11:00, Paper WeAT5.2 | |
Evaluation of Physical Therapy through Analysis of Depth Images (I) |
Kramer, Ivanna (University of Koblenz-Landau), Memmesheimer, Raphael (University of Koblenz-Landau), Schmidt, Niko (University of Koblenz-Landau), Paulus, Dietrich (Universtät Koblenz-Landau) |
Keywords: Medical and Surgical Applications, Robots in Education, Therapy and Rehabilitation, Assistive Robotics
Abstract: Support from robots in orthopaedic rehabilitation is an opportunity to relieve physiotherapists. However, to provide control in robot-patient cooperation during the therapy process, a certain standard for interpreting the exercise has to be established. In this paper we present an approach for evaluating healthy subjects' performance in tibiofemoral rehabilitation, using squat exercises as an example. The proposed method relies only on depth images for the performance evaluation and can work with any human-robot interaction system for performance correction. Thus, the method can easily be applied to a mobile service robot in robot-aided physical therapy. The patient is observed while performing the exercise, and the motion is evaluated and segmented using Motion History Images. Concretely, depth images are used to monitor local points of interest on the performer during the exercise. The proposed approach was evaluated on custom image sequences with a multitude of varying subjects and shows suitable performance for assisting in assessing the correctness of the exercise execution.
|
|
11:00-11:15, Paper WeAT5.3 | |
Optimal Feature Selection for EMG-Based Finger Force Estimation Using LightGBM Model (I) |
Ye, Yuhang (South China University of Technology), Liu, Chao (LIRMM (UMR5506), CNRS, France), Zemiti, Nabil (LIRMM, Université Montpellier II - CNRS UMR 5506), Yang, Chenguang (University of the West of England) |
Keywords: Novel Interfaces and Interaction Modalities, Assistive Robotics, Medical and Surgical Applications
Abstract: Electromyogram (EMG) signals have long been used in human-robot interfaces, especially in the area of rehabilitation. The recent rapid development of artificial intelligence (AI) has provided powerful machine learning tools to better explore the rich information embedded in EMG signals. For our specific application task in this work, i.e., estimating human finger force from EMG signals, a LightGBM (Light Gradient Boosting Machine) model has been used. The main contribution of this study is the development of an automatic optimal feature selection algorithm that minimizes the number of features used in the LightGBM model in order to simplify implementation, reduce the computational burden, and maintain estimation performance comparable to that obtained with the full feature set. The performance of the LightGBM model with the selected optimal features is compared with four other popular machine learning models to show the effectiveness of the developed feature selection method.
|
|
11:15-11:30, Paper WeAT5.4 | |
Learning Robot Policies Using a High-Level Abstraction Persona-Behaviour Simulator (I) |
Andriella, Antonio (IRI, CSIC-UPC), Torras, Carme (Csic - Upc), Alenyà, Guillem (CSIC-UPC) |
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: Collecting data in Human-Robot Interaction for training learning agents can be a hard task to accomplish. This is especially true when the target users are older adults with dementia, since this usually requires hours of interactions and puts quite a lot of workload on the user. This paper addresses the problem of importing the Personas technique from Human-Computer Interaction into HRI to create fictional patients' profiles. We propose a Persona-Behaviour Simulator tool that provides, at a high level of abstraction, the user's actions during an HRI task, and we apply it to cognitive training exercises for older adults with dementia. It consists of a Persona Definition that characterizes a patient along four dimensions and a Task Engine that provides information regarding the task complexity. We build a simulated environment where the high-level user's actions are provided by the simulator and the robot's initial policy is learned using a Q-learning algorithm. The results show that the current simulator provides a reasonable initial policy for a defined Persona profile. Moreover, the learned robot assistance has proved to be robust to potential changes in the user's behaviour. In this way, we can speed up the fine-tuning of the rough policy during the real interactions to tailor the assistance to the given user. We believe the presented approach can be easily extended to account for other types of HRI tasks; for example, when input data is required to train a learning algorithm, but data collection is very expensive or unfeasible. We advocate that simulation is a convenient tool in these cases.
|
|
11:30-11:45, Paper WeAT5.5 | |
Estimating the Effect of Robotic Intervention on Elbow Joint Motion (I) |
Ghonasgi, Keya (The University of Texas at Austin), De Oliveira, Ana Christine (The University of Texas at Austin), Shafer, Anna (University of Texas at Austin), Rose, Chad (University of Texas at Austin), Deshpande, Ashish (University of Texas) |
Keywords: Robots in Education, Therapy and Rehabilitation, Assistive Robotics, Evaluation Methods and New Methodologies
Abstract: Much effort has been placed into the development of robotic devices to support, rehabilitate, and interact with humans. Despite these advances, reliably modeling the neuromuscular changes in human motion resulting from a robotic intervention remains difficult. This paper proposes a method to uncover the relationship between robotic intervention and human response by combining surface electromyography (sEMG), the musculoskeletal modeling platform OpenSim, and artificial neural networks (ANNs). To demonstrate the method, a one degree of freedom (DOF) elbow flexion-extension motion is performed and analyzed. Preliminary results show that while the robot provides assistance to the subject, it also appears to produce other unexpected responses in the movement. Further investigation using the new method reveals the neuromuscular effect of an unintended resistance to the subject's motion applied by the robot as it enforces a speed slower than the subject selects. The characterization of the differences in expected and actual interaction is enabled by the method presented in this paper. Thus, the method uncovers previously obscured aspects of human robot interaction, and creates possibilities for new training modalities.
|
|
11:45-12:00, Paper WeAT5.6 | |
Development and Applicability of a Cable-Driven Wearable Adaptive Rehabilitation Suit (WeARS) (I) |
Iyer, S. Srikesh (IIT Gandhinagar), V Joseph, Joel (Indian Institute of Technology Gandhinagar), Nakka, S S Sanjeevi (Indian Institute of Technology Gandhinagar), Singh, Yogesh (Indian Institute of Technology Gandhinagar), Vashista, Vineet (Indian Institute of Technology Gandhinagar) |
Keywords: User-centered Design of Robots, Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans
Abstract: Walking is one of the most relevant tasks that a person performs in their daily routine; it requires actuation and coordination of both inter- and intra-limb parameters of the lower extremity to adjust to changing conditions of the environment, such as an unexpected perturbation or a change in terrain. Incidentally, with aging or with the onset of a neuro-musculoskeletal disorder, human walking performance degrades significantly. A major reason for the abnormal performance has been attributed to variability in the order and timing of muscle contraction in these individuals. In this work, we develop a Wearable Adaptive Rehabilitation Suit (WeARS) for the lower extremity that uses externally actuated cables to resemble the role of agonist and antagonist muscles in a biological system. WeARS also uses a subject-specific control strategy that adapts to the subject's gait. The focus of the current study is to use WeARS to apply resistive forces on the hip joint in order to study various gait abnormalities and to develop subject-specific gait rehabilitation paradigms.
|
|
WeBT1 |
Room T8 |
Human Robot Collaboration and Cooperation |
Regular Session |
Chair: Lambrecht, Jens | Technische Universität Berlin |
Co-Chair: Fazli, Pooyan | San Francisco State University |
|
13:00-13:15, Paper WeBT1.1 | |
Can a Humanoid Robot Be Part of the Organizational Workforce? a User Study Leveraging Sentiment Analysis |
Mishra, Nidhi (Institute for Media Innovation, Nanyang Technological University), Ramanathan, Manoj (Institute for Media Innovation, Nanyang Technological University), Satapathy, Ranjan (Institute for Media Innovation, Nanyang Technological University), Cambria, Erik (Nanyang Technological University), Thalmann, Nadia Magnenat (Nanyang Technological University) |
Keywords: Applications of Social Robots, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: Hiring robots for the workplace is a challenging task, as robots have to cater to customer demands, follow organizational protocols, and behave with social etiquette. In this study, we propose to have a humanoid social robot, Nadine, as a customer service agent in an open social work environment. The objective of this study is to analyze the effects of humanoid robots on customers in a work environment and to see whether such a robot can handle social scenarios. We evaluate these objectives through two modes, namely a survey questionnaire and customer feedback. The survey questionnaires are analyzed based on the data points provided in the questionnaire. We propose a novel approach to analyze customer feedback data using sentic computing; specifically, we employ aspect extraction and sentiment analysis. From our framework, we detect the sentiment associated with the aspects that mainly concerned the customers during their interaction. This allows us to understand customers' expectations and the current limitations of robots as employees.
|
|
13:15-13:30, Paper WeBT1.2 | |
A Multi Modal People Tracker for Real Time Human Robot Interaction |
Wengefeld, Tim (Ilmenau University of Technology), Mueller, Steffen (Ilmenau University of Technology), Lewandowski, Benjamin (Ilmenau University of Technology), Gross, Horst-Michael (Ilmenau University of Technology) |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Robot Companions and Social Robots, Assistive Robotics
Abstract: Tracking people in the surroundings of interactive service robots is a topic of high interest. Even though image-based detectors using deep learning techniques have considerably improved detection rates and accuracy, robotic applications still need to integrate those detections, over time and over the limited ranges of individual sensors, into a global model. That data fusion enables a continuous state estimation of people and helps reduce the false decisions made by individual detectors while increasing the overall range. In this paper we present a tracking framework with a new distance measure for data association and a proper consideration of individual sensors' accuracies. By means of that, we can deal with the high false detection rates of laser-based leg detectors without introducing further heuristics such as a background model. The proposed system is compared to other tracking approaches from the state of the art. Furthermore, we present a novel manually annotated benchmark dataset for multi-sensor person tracking from a moving robot platform in a guide scenario.
|
|
13:30-13:45, Paper WeBT1.3 | |
Human Prediction for the Natural Instruction of Handovers in Human Robot Collaboration |
Lambrecht, Jens (Technische Universität Berlin), Nimpsch, Sebastian (GESTALT Robotics GmbH) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, HRI and Collaboration in Manufacturing Environments, Machine Learning and Adaptation
Abstract: Human-robot collaboration aims to establish hybrid work environments in accordance with the specific strengths of humans and robots. We present an approach for flexibly integrating robotic handover assistance into collaborative assembly tasks through natural communication. For flexibly instructed handovers, we employ recent convolutional neural networks for detecting and grasping arbitrary objects, using an RGB-D camera mounted on the robot following the eye-in-hand principle. In order to increase the fluency and efficiency of the overall assembly process, we investigate the human ability to instruct the robot predictively with voice commands. We conduct a user study quantitatively and qualitatively evaluating predictive instruction aimed at achieving just-in-time handovers of the tools needed for subsequent subtasks. We compare our predictive strategy with purely manual assembly, with all tools in direct reach, and with a step-by-step reactive handover. The results reveal that humans are able to predict the handover comparably well to algorithm-based predictors. Moreover, human prediction does not rely on extensive prior knowledge and is thus suitable for more flexible usage. However, the cognitive workload for the worker is increased compared to manual or reactive assembly.
|
|
13:45-14:00, Paper WeBT1.4 | |
Evaluation of an Industrial Robotic Assistant in an Ecological Environment |
Busch, Baptiste (EPFL), Cotugno, Giuseppe (King's College London), Khoramshahi, Mahdi (EPFL), Skaltsas, Grigorios (University of Hertfordshire), Turchi, Dario (Ocado), Urbano, Leonardo (EPFL), Waechter, Mirko (Karlsruhe Institute of Technology (KIT)), Zhou, You (Karlsruhe Institute of Technology (KIT)), Asfour, Tamim (Karlsruhe Institute of Technology (KIT)), Deacon, Graham (OCADO - Robotics Research), Russell, Duncan (Ocado Technology), Billard, Aude (EPFL) |
Keywords: HRI and Collaboration in Manufacturing Environments, Evaluation Methods and New Methodologies, Cooperation and Collaboration in Human-Robot Teams
Abstract: Social robotic assistants have been widely studied and deployed as telepresence tools or caregivers. Evaluating their design and impact on the people interacting with them is of prime importance. In this research, we evaluate the usability and impact of ARMAR-6, an industrial robotic assistant for maintenance tasks. For this evaluation, we have used a modified System Usability Scale (SUS) to assess the general usability of the robotic system and the Godspeed questionnaire series for the subjective perception of the coworker. We have also recorded the subjects' gaze fixation patterns and analyzed how they differ when working with the robot compared to a human partner.
|
|
14:00-14:15, Paper WeBT1.5 | |
Human Trust after Robot Mistakes: Study of the Effects of Different Forms of Robot Communication |
Ye, Sean (Georgia Institute of Technology), Neville, Glen (Georgia Institute of Technology), Schrum, Mariah (Georgia Institute of Technology), Gombolay, Matthew (Georgia Institute of Technology), Chernova, Sonia (Georgia Institute of Technology), Howard, Ayanna (Georgia Institute of Technology) |
Keywords: Curiosity, Intentionality and Initiative in Interaction, Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships
Abstract: Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and type of initiation used by a robot. We investigate the effect that these forms of communication have on trust with and without robot mistakes during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when that robot offered advice only upon request as opposed to when the robot took initiative to give advice.
|
|
14:15-14:30, Paper WeBT1.6 | |
Path Planning through Tight Spaces for Payload Transportation Using Multiple Mobile Manipulators |
Tallamraju, Rahul (International Institute of Information Technology, Hyderabad), Sripada, Venkatesh (Oregon State University, Corvallis, USA), Shah, Suril Vijaykumar (Indian Institute of Technology Jodhpur) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Assistive Robotics, Cooperation and Collaboration in Human-Robot Teams
Abstract: In this paper, the problem of path planning through tight spaces for the task of spatial payload transportation using a formation of mobile manipulators is addressed. Due to the high-dimensional configuration space of the system, efficient and geometrically stable path planning through tight spaces is challenging. We resolve this by planning the path for the system in two phases. First, an obstacle-free trajectory in R^3 for the payload being transported is determined using RRT. Next, near energy-optimal and quasi-statically stable paths are planned for the formation of robots along this trajectory using non-linear multi-objective optimization. We validate the proposed approach in simulation experiments and compare different multi-objective optimization algorithms to find energy-optimal and geometrically stable robot path plans.
|
|
WeBT2 |
Room T2 |
Linguistic Communication and Dialogue |
Regular Session |
Chair: Trovato, Gabriele | Waseda University |
Co-Chair: Kirstein, Franziska | Blue Ocean Robotics |
|
13:00-13:15, Paper WeBT2.1 | |
Autonomous Generation of Robust and Focused Explanations for Robot Policies |
Struckmeier, Oliver (Aalto University), Racca, Mattia (Aalto University), Kyrki, Ville (Aalto University) |
Keywords: User-centered Design of Robots, Cooperation and Collaboration in Human-Robot Teams, Machine Learning and Adaptation
Abstract: Transparency of robot behaviors increases efficiency and quality of interactions with humans. To increase transparency of robot policies, we propose a method for generating robust and focused explanations that express why a robot chose a particular action. The proposed method examines the policy based on the state space in which an action was chosen and describes it in natural language. The method can generate focused explanations by leaving out irrelevant state dimensions, and avoid explanations that are sensitive to small perturbations or have ambiguous natural language concepts. Furthermore, the method is agnostic to the policy representation and only requires the policy to be evaluated at different samples of the state space. We conducted a user study with 18 participants to investigate the usability of the proposed method compared to a comprehensive method that generates explanations using all dimensions. We observed how focused explanations helped the subjects more reliably detect the irrelevant dimensions of the explained system and how preferences regarding explanation styles and their expected characteristics greatly differ among the participants.
|
|
13:15-13:30, Paper WeBT2.2 | |
A Robot’s Expressive Language Affects Human Strategy and Perceptions in a Competitive Game |
Roth, Aaron M. (Carnegie Mellon University), Reig, Samantha (Carnegie Mellon University), Bhatt, Umang (Carnegie Mellon University), Shulgach, Jonathan (Carnegie Mellon University), Amin, Tamara (Independent), Doryab, Afsaneh (Carnegie Mellon University), Fang, Fei (Carnegie Mellon University), Veloso, Manuela (Carnegie Mellon University) |
Keywords: Personalities for Robotic or Virtual Characters, Linguistic Communication and Dialogue, Robot Companions and Social Robots
Abstract: As robots are increasingly endowed with social and communicative capabilities, they will interact with humans in more settings, both collaborative and competitive. We explore human-robot relationships in the context of a competitive Stackelberg Security Game. We vary humanoid robot expressive language (in the form of “encouraging” or “discouraging” verbal commentary) and measure the impact on participants’ rationality, strategy prioritization, mood, and perceptions of the robot. We learn that a robot opponent that makes discouraging comments causes a human to play a game less rationally and to perceive the robot more negatively. We also contribute a simple open source Natural Language Processing framework for generating expressive sentences, which was used to generate the speech of our autonomous social robot.
|
|
13:30-13:45, Paper WeBT2.3 | |
Walk the Talk! Exploring (Mis)Alignment of Words and Deeds by Robotic Teammates in a Public Goods Game |
Correia, Filipa (INESC-ID and Instituto Superior Técnico, Technical University Of), Chandra, Shruti (INESC-ID and Instituto Superior Técnico, TechnicalUniversity Of), Mascarenhas, Samuel (INESC-ID / Instituto Superior Técnico, University of Lisbon), Charles-Nicolas, Julien (Técnico Lisboa), Gally, Justin Philippe Roger Luc (INSA Lyon), Lopes, Diana (Instituto Superior Técnico), Santos, Fernando P. (Princeton University), Santos, Francisco C. (IST, Universidade De Lisboa, Portugal), Melo, Francisco S. (Instituto Superior Tecnico), Paiva, Ana (INESC-ID and Instituto Superior Técnico, TechnicalUniversity Of) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships, Robot Companions and Social Robots
Abstract: This paper explores how robotic teammates can enhance and promote cooperation in collaborative settings. It presents a user study in which participants engaged with two fully autonomous robotic partners to play a game together, named "For The Record", a variation of a public goods game. The game is played for a total of five rounds, and in each of them players face a social dilemma: to cooperate, i.e., contribute towards the team's goal while compromising individual benefits, or to defect, i.e., favour individual benefits over the team's goal. Each participant collaborates with two robotic partners that adopt opposite strategies to play the game: one is an unconditional cooperator (the pro-social robot), and the other is an unconditional defector (the selfish robot). In a between-subjects design, we manipulated which of the two robots criticizes behaviour, i.e., condemns participants when they opt to defect, representing either an alignment or a misalignment of words and deeds by the robot. Two main findings should be highlighted: (1) the misalignment of words and deeds may affect the level of discomfort perceived in a robotic partner; (2) the perception a human has of a robotic partner that criticizes them is not damaged as long as the robot displays an alignment of words and deeds.
|
|
13:45-14:00, Paper WeBT2.4 | |
Your Instruction May Be Crisp, but Not Clear to Me! |
Pramanick, Pradip (TCS Research & Innovation), Sarkar, Chayan (TCS Research & Innovation), Bhattacharya, Indrajit (TCS Research & Innovation) |
Keywords: Social Presence for Robots and Virtual Humans, Linguistic Communication and Dialogue, HRI and Collaboration in Manufacturing Environments
Abstract: The number of robots deployed in our daily surroundings is ever-increasing. Even in the industrial setting, the use of coworker robots is increasing rapidly. These cohabitant robots perform various tasks as instructed by co-located human beings. Thus, a natural interaction mechanism plays a big role in the usability and acceptability of the robot, especially for a non-expert user. Recent developments in natural language processing (NLP) have paved the way for chatbots that generate automatic responses to users' queries. A robot can be equipped with such a dialogue system. However, the goal of human-robot interaction is not focused on generating a response to queries; it often involves performing some task in the physical world. Thus, a system is required that can detect the user-intended task from the natural instruction, along with the set of pre- and post-conditions. In this work, we develop a dialogue engine for a robot that can classify a task instruction and map it to the robot's capabilities. If there is some ambiguity in the instructions or some required information is missing, which is often the case in natural conversation, it asks an appropriate question(s) to resolve it. The goal is to generate minimal and pin-pointed queries for the user to resolve an ambiguity. We evaluate our system in a telepresence scenario where a remote user instructs the robot to perform various tasks. Our study with 12 individuals shows that the proposed dialogue strategy can help a novice user interact effectively with a robot, leading to a satisfactory user experience.
|
|
14:00-14:15, Paper WeBT2.5 | |
Building Language-Agnostic Grounded Language Learning Systems |
Kery, Caroline (University of Maryland, Baltimore County), Pillai, Nisha (UMBC), Matuszek, Cynthia (University of Maryland, Baltimore County), Ferraro, Francis (University of Maryland Baltimore County) |
Keywords: Linguistic Communication and Dialogue, Machine Learning and Adaptation, Cooperation and Collaboration in Human-Robot Teams
Abstract: Learning the meaning of grounded language--language that references the robot’s physical environment and perceptual data--is an important and increasingly widely studied problem in robotics and human-robot interaction. However, with a few exceptions, research in this area has focused on learning groundings for a single natural language pertaining to rich perceptual data. We present experiments on taking an existing natural language grounding system designed for English and applying it to a novel multilingual corpus of descriptions of objects paired with RGB-D perceptual data. We demonstrate that this specific approach transfers well to different languages, but also present possible design constraints to consider for grounded language learning systems that are intended for robots that will function in a variety of linguistic settings.
|
|
14:15-14:30, Paper WeBT2.6 | |
Let Me Show You Your New Home: Studying the Effect of Proxemic-Awareness of Robots on Users' First Impressions |
Petrak, Björn (Augsburg University), Weitz, Katharina (Augsburg University), Aslan, Ilhan (Augsburg University), Andre, Elisabeth (Augsburg University) |
Keywords: Robot Companions and Social Robots, Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness
Abstract: First impressions play an important part in social interactions, establishing the foundation of a person's opinion about their counterparts. Since interpersonal communication is essentially multimodal, people are judged during first encounters by both their verbal utterances and their nonverbal behavior, such as how they utilize eye contact, body distance, and body orientation. In this paper, we argue that robots would provide better user experiences, including being perceived as more likable, if they were able to make a good first impression when introduced to a new home. Moreover, we wanted to test whether robots can improve their perceived impression by behaving in a proxemic-aware manner, i.e., by following established social norms, which prescribe, for example, how far people should position themselves from other objects to improve the facilitation of social interactions. In order to test this hypothesis, we conducted a user study with 16 participants in a virtual reality setting, comparing the impression of two agents being introduced to their new homes by users. We found that the proxemic-aware agent was indeed perceived as significantly better on multiple constructs, including perceived anthropomorphism and trustworthiness.
|
|
WeBT3 |
Room T3 |
Robot Companions |
Regular Session |
Chair: Indurkhya, Bipin | Jagiellonian University |
Co-Chair: Michael, John | Central European University |
|
13:00-13:15, Paper WeBT3.1 | |
An Adaptive Robot Teacher Boosts a Human Partner's Learning Performance in Joint Action |
Vignolo, Alessia (Istituto Italiano Di Tecnologia), Powell, Henry (University of Glasgow), McEllin, Luke (Central European University), Rea, Francesco (Istituto Italiano Di Tecnologia), Sciutti, Alessandra (Italian Institute of Technology), Michael, John (Central European University) |
Keywords: Non-verbal Cues and Expressiveness, Social Intelligence for Robots
Abstract: One important challenge for roboticists in the coming years will be to design robots to teach humans new skills or to lead humans in activities which require sustained motivation (e.g. physiotherapy, skills training). In the current study, we tested the hypothesis that if a robot teacher invests physical effort in adapting to a human learner in a context in which the robot is teaching the human a new skill, this would facilitate the human's learning. We also hypothesized that the robot teacher's effortful adaptation would lead the human learner to experience greater rapport in the interaction. To this end, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the high effort condition, the iCub slowed down his movements when repeating a demonstration for the human learner, whereas in the low effort condition he sped the movements up when repeating the demonstration. The results indicate that participants indeed learned more effectively when the iCub adapted its demonstrations, and that the iCub’s apparently effortful adaptation led participants to experience him as more helpful.
|
|
13:15-13:30, Paper WeBT3.2 | |
On the Role of Trust in Child-Robot Interaction |
Zguda, Paulina (Jagiellonian University), Kolota, Anna (Jagiellonian University), Jarosz, Mateusz (AGH University of Science and Technology), Sondej, Filip (AGH University of Science and Technology), Izui, Takamune (Tokyo University of Agriculture and Technology), Dziok, Maria (AGH University of Science and Technology), Belowska, Anna (AGH University of Science and Technology), Jędras, Wojciech (AGH University of Science), Venture, Gentiane (Tokyo University of Agriculture and Technology), Sniezynski, Bartlomiej (AGH University of Science and Technology), Indurkhya, Bipin (Jagiellonian University) |
Keywords: Robot Companions and Social Robots, Creating Human-Robot Relationships, Applications of Social Robots
Abstract: In child-robot interaction, the element of trust towards the robot is critical. This is particularly important the first time the child meets the robot, as the trust gained during this interaction can play a decisive role in future interactions. We present an in-the-wild study where Polish kindergartners interacted with a Pepper robot. The videos of this study were analyzed for the issues of trust, anthropomorphization, and reaction to malfunction, with the assumption that the last two factors influence the children’s trust towards Pepper. Our results reveal children’s interest in the robot performing tasks specific for humans, highlight the importance of the conversation scenario and the need for an extended library of answers provided by the robot about its abilities or origin and show how children tend to provoke the robot.
|
|
13:30-13:45, Paper WeBT3.3 | |
An Exploratory Study on Proxemics Preferences of Humans in Accordance with Attributes of Service Robots |
Samarakoon, Bhagya (University of Moratuwa), Muthugala Arachchige, Viraj Jagathpriya Muthugala (Singapore University of Technology and Design), Jayasekara, A.G.B.P. (University of Moratuwa), Elara, Mohan Rajesh (Singapore University of Technology and Design) |
Keywords: Robot Companions and Social Robots, Motion Planning and Navigation in Human-Centered Environments, Human Factors and Ergonomics
Abstract: Service robots that possess social interactive capabilities are vital to cater to the demand in emerging domains of robotic applications. A service robot frequently needs to interact with users when performing service tasks, and the comfort of users depends on the human-robot proxemics during these interactions. Hence, a service robot should be capable of maintaining proper proxemics that improve user comfort. The proxemics preferences of users may depend on diverse attributes of a robot, such as emotional state, noise level, and physical appearance. Therefore, it is vital to gain a better understanding of the robot attributes that influence human-robot proxemics behavior. This paper presents an exploratory study analyzing how a robot's attributes (facial and vocal emotions, level of internal noise, and physical appearance) affect human-robot proxemics preferences. Four sub-studies were conducted to gather the required human-robot proxemics data, which were then analyzed through statistical tests. The test statistics reveal that facial and vocal emotions, internal noise level, and the physical appearance of a robot have significant effects on the proxemics preferences of humans. The outcomes of this exploratory study will be useful in designing and developing human-robot proxemics strategies for service robots that enhance social interaction.
|
|
13:45-14:00, Paper WeBT3.4 | |
Augmented Reality As a Medium for Human-Robot Collaborative Tasks |
Chacko, Sonia (NYU Tandon School of Engineering), Kapila, Vikram (NYU Tandon School of Engineering) |
Keywords: Novel Interfaces and Interaction Modalities, Virtual and Augmented Tele-presence Environments, HRI and Collaboration in Manufacturing Environments
Abstract: This paper presents a novel augmented reality (AR) interaction method that allows a robot to perform manipulation of unknown physical objects in a human-robot collaborative working environment. A mobile AR application is developed to determine and communicate, in real-time, the position, orientation, and dimension of any random object in a robot manipulator's workspace to perform pick-and-place operations. The proposed method is based on estimating the pose and size of the object by means of an AR virtual element superimposed on the live view of the real object. In particular, a semi-transparent AR element is created and manipulated through touch screen interactions to match with the pose and scale of the physical object to provide the information about that object. The resulting data is communicated to the robot manipulator to perform pick-and-place tasks. In this way, the AR virtual element acts as a medium of communication between a human and a robot. The performance of the proposed AR interface is assessed by conducting multiple trials with random objects, and it is observed that the robot successfully accomplishes tasks communicated through the AR virtual elements. The proposed interface is also tested with 20 users to determine the quality of user experience, followed by a post-study survey. The participants reported that the AR interface is intuitive and easy to operate for manipulating physical objects of various sizes and shapes.
|
|
14:00-14:15, Paper WeBT3.5 | |
Designing a Socially Assistive Robot for Long-Term In-Home Use for Children with Autism Spectrum Disorders |
Pakkar, Roxanna (University of Southern California), Clabaugh, Caitlyn (University of Southern California), Lee, Rhianna (University of Southern California), Deng, Eric (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Innovative Robot Designs, Robots in Education, Therapy and Rehabilitation, Applications of Social Robots
Abstract: Socially assistive robotics (SAR) research has shown great potential for supplementing and augmenting therapy for children with autism spectrum disorders (ASD). However, the vast majority of SAR research has been limited to short-term studies in highly controlled environments. The design and development of a SAR system capable of interacting autonomously in situ for long periods of time involves many engineering and computing challenges. This paper presents the design of a fully autonomous SAR system for long-term, in-home use with children with ASD. We address design decisions based on robustness and adaptability needs, discuss the development of the robot’s character and interactions, and provide insights from the month-long, in-home data collections with children with ASD. This work contributes to a larger research program that is exploring how SAR can be used for enhancing the social and cognitive development of children with ASD.
|
|
14:15-14:30, Paper WeBT3.6 | |
Proof of Concept of a Projection-Based Safety System for Human-Robot Collaborative Engine Assembly |
Hietanen, Antti Eerikki (Tampere University of Technology), Changizi, Alireza (Tampere University of Technology), Lanz, Minna (Department of Mechanical Engineering and Industrial Systems), Kamarainen, Joni-Kristian (Tampere University of Technology), GANGULY, PALLAB (Tampere University), Pieters, Roel S. (Tampere University), Latokartano, Jyrki Matias (Tampere University of Technology) |
Keywords: HRI and Collaboration in Manufacturing Environments, Cooperation and Collaboration in Human-Robot Teams, Novel Interfaces and Interaction Modalities
Abstract: In recent years, human-robot collaboration has gained interest in industry and production environments. While there is interest in the topic, there is a lack of industrially relevant cases utilizing novel methods and technologies. The feasibility of implementation, worker safety, and production efficiency are the key questions in the field. The aim of the proposed work is to provide a conceptual safety system for context-dependent, multi-modal communication in human-robot collaborative assembly, which will contribute to the safety and efficiency of the collaboration. The approach we propose offers an addition to traditional interfaces, such as push buttons installed at fixed locations. We demonstrate the approach and a corresponding technical implementation of the system, with projected safety zones based on a dynamically updated depth map and a graphical user interface (GUI). The proposed interaction is a simplified two-way communication between the human and the robot that allows both parties to notify each other, and the human to coordinate the operations.
|
|
WeBT4 |
Room T4 |
Therapy and Rehabilitation |
Regular Session |
Chair: Cavallo, Filippo | Scuola Superiore Sant'Anna - Pisa |
Co-Chair: Fiorini, Laura | The BioRobotics Institute, Scuola Superiore Sant'Anna |
|
13:00-13:15, Paper WeBT4.1 | |
Linear Parameter-Varying Identification of the EMG–Force Relationship of the Human Arm |
PESENTI, Mattia (Department of Information and Bioengineering, Politecnico Di Mil), ALKHOURY, Ziad (University of Strasbourg), BEDNARCZYK, Maciej (ICube Laboratory, University of Strasbourg, Strasbourg), OMRAN, Hassan (ICube Laboratory, University of Strasbourg, Strasbourg), Bayle, Bernard (University of Strasbourg) |
Keywords: Medical and Surgical Applications, Creating Human-Robot Relationships, Human Factors and Ergonomics
Abstract: In this paper, we present a novel identification approach to model the EMG–Force relationship of the human arm, reduced to a single degree of freedom (1-DoF) for simplicity. Specifically, we exploit the Linear Parameter Varying (LPV) framework. The inputs of the model are the electromyographic (EMG) signals acquired on two muscles of the upper arm, biceps brachii and triceps brachii, and two muscles of the forearm, brachioradialis and flexor carpi radialis. The output of the model is the force produced at the hand actuating the elbow. Because of the position-dependency of the system, the elbow angle is used as the scheduling signal for the LPV model. Accurate modeling of the human arm with this approach opens new possibilities in terms of robot control for physical Human-Robot Interaction and rehabilitation robotics.
|
|
13:15-13:30, Paper WeBT4.2 | |
Co-Designing and Field-Testing Adaptable Robots for Triggering Positive Social Interactions for Adolescents with Cerebral Palsy |
Mariager, Casper Sloth (Aalborg University), Fischer, Daniel K. B. (Aalborg University), Kristiansen, Jakob (Aalborg University), Rehm, Matthias (Aalborg University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Degrees of Autonomy and Teleoperation, Applications of Social Robots
Abstract: Robots in the health care sector are often envisioned as a kind of social interaction partner. We suggest a different approach, where robots become adaptable tools for facilitating positive social interaction between, and learning for, special needs users. The paper presents the development and a series of field tests of a new robot game platform, which is envisioned to level the playing field for users with distinct motor and cognitive capacities by adapting the robots to their abilities. The series of field tests shows that the system is successful in triggering positive social interactions between the players.
|
|
13:30-13:45, Paper WeBT4.3 | |
Socially Assistive Robot’s Behaviors Using Microservices |
Ercolano, Giovanni (University of Naples Federico II), Lambiase, Paolo D. (University of Naples Federico II), Leone, Enrico (University of Naples "Federico II"), Raggioli, Luca (University of Naples Federico II), Trepiccione, Davide (University of Naples Federico II), Rossi, Silvia (Universita' Di Napoli Federico II) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Robot Companions and Social Robots
Abstract: In this work, we introduce a set of robot behaviors for monitoring and interacting with elderly people affected by Alzheimer's disease. The behaviors of a low-cost robotic device rely on microservices running on a local server. A microservice is an independent, self-contained component of the robotic system, with its own scope and responsibility, proposed to decouple the functions that implement complex robot behaviors. The services developed include navigation, interaction, and monitoring capabilities. The requests and signals of the patients are handled and managed through real-time, event-based communication between the system components. The use of design patterns like this increases the overall reliability of a service composition. The system is currently operating in a private house with an elderly couple.
|
|
13:45-14:00, Paper WeBT4.4 | |
A Robot-Mediated Assessment of Tinetti Balance Scale for Sarcopenia Evaluation in Frail Elderly |
Fiorini, Laura (The BioRobotics Institute, Scuola Superiore Sant'Anna), D'Onofrio, Grazia (Complex Unit of Geriatrics, Department of Medical Sciences, IRC), Rovini, Erika (Scuola Superiore Sant'Anna - Pisa), Sorrentino, Alessandra (Scuola Superiore Sant'Anna), Coviello, Luigi (The Biorobotics Institute, Scuola Superiore Sant'Anna), Limosani, Raffaele (Scuola Superiore Sant'Anna), Sancarlo, Daniele (Complex Unit of Geriatrics, Department of Medical Sciences, IRC), Cavallo, Filippo (Scuola Superiore Sant'Anna - Pisa) |
Keywords: Assistive Robotics, Detecting and Understanding Human Activity
Abstract: The aging society is characterized by a high prevalence of sarcopenia, which is considered one of the most common health problems of the elderly population. Sarcopenia is due to the age-related loss of muscle mass and muscle strength. Recent literature findings highlight that the Tinetti Balance Assessment (TBA) scale is used to assess sarcopenia in elderly people. In this context, this article proposes a model for sarcopenia assessment that provides a quantitative assessment of TBA gait motor parameters by means of a cloud robotics approach. The proposed system is composed of cloud resources, an assistive robot named ASTRO, and two inertial wearable sensors. In particular, data from the two inertial sensors (i.e., accelerometers and gyroscopes), placed on the patient's feet, and data from ASTRO's laser sensor (position in the environment) were analyzed and combined to propose a set of motor features corresponding to the TBA gait domains. The system was preliminarily tested at the hospital of "Fondazione Casa Sollievo della Sofferenza" in Italy. The preliminary results suggest that the extracted set of features is able to describe motor performance. In the future, these parameters could be used to support clinicians in the assessment of sarcopenia, to monitor motor parameters over time, and to propose personalized care plans.
|
|
14:00-14:15, Paper WeBT4.5 | |
Stakeholder’s Acceptance and Expectations of Robot-Assisted Therapy for Children with Autism Spectrum Disorder |
Oliver, Joan (Instituto De Robótica Para La Dependencia), Oliván, Rebeca (Instituto De Robótica Para La Dependencia), Shukla, Jainendra (Indraprastha Institute of Information Technology, Delhi), Folch, Annabel (Intellectual Disability and Developmental Disorders Research Uni), Martínez-Leal, Rafael (Intellectual Disability and Developmental Disorders Research Uni), Castellà, Mireia (Intellectual Disability and Developmental Disorders Research Uni), Puig, Domenec (Rovira I Virgili University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Assistive Robotics, Human Factors and Ergonomics
Abstract: Robot-assisted therapy for children with Autism Spectrum Disorder (ASD) should take into account the stakeholders' expectations about its potential benefits. Any disparity between the stakeholders' expectations and the gained benefits may negatively impact the acceptance and adoption of robot-assisted therapy. In this research, we conducted an observational study with eleven parents and five clinical professionals involved with children with ASD who were pre-selected to undergo robot-assisted therapeutic sessions. The aim was to investigate and identify the perceived impact of the interventions delivered by the social robots, the roles of the social robots, and the benefits offered by them. Specifically, the social robot Cozmo was used for this study. The stakeholders' opinions were collected using questionnaires and were analyzed quantitatively and qualitatively. The results of the study confirm a positive attitude towards the adoption of these technologies, both among the caregivers and the professionals.
|
|
14:15-14:30, Paper WeBT4.6 | |
SHEBA: A Low-Cost Assistive Robot for Older Adults in the Developing World |
Motahar, Tamanna (North South University), Farden, Fahim (North South University), Sarkar, Dibya Prokash (North South University), Islam, Atiqul (North South University), Cabrera, Maria Eugenia (University of Washington), Cakmak, Maya (University of Washington) |
Keywords: Assistive Robotics, User-centered Design of Robots
Abstract: Maintaining independence and dignity is a primary goal of successful aging for older adults around the globe. Robots can support this goal in various ways by assisting everyday tasks that become challenging due to aging-related deterioration in physical and mental abilities. While a growing body of research tackles challenges in creating such robots, most work has focused on older adults with high socio-economic status in the developed world. In most cases, the price of these robots alone prohibits their potential use in the developing world. Further, socio-cultural differences in the developing world will limit the usability and chance of adoption of a robot designed based on users in the developed world. Our work aims to close this gap. In this paper we present findings from the user-centered design and development process of a low-cost assistive robot for older adults in the developing world named SHEBA, which is a Bengali term for care. We first interviewed 37 older adults and 21 caregivers in assisted and independent living settings in Dhaka, Bangladesh to gather requirements and understand priorities. We then developed a prototype focused on medication management and delivery and we brought it to an assisted living center to interact with potential older adult users. We interviewed 23 older adults and 5 caregivers who interacted with or observed our prototype to gather feedback. We present quantitative and qualitative data obtained in these interviews, identifying key requirements for robots designed for older adults in the developing world.
|
|
WeBT5 |
Room T5 |
Medical Robotics and Intelligent Control Systems in the Indian Context |
Special Session |
Chair: Maria Joseph, Felix Orlando | Indian Institute of Technology Roorkee |
Co-Chair: Pradhan, PyariMohan | IIT Roorkee |
|
13:00-13:15, Paper WeBT5.1 | |
Bondgraph Modelling for the Master-Slave Robotic Teleoperation System (I) |
Saini, Sarvesh (Indian Institute of Technology Roorkee), Pathak, Pushparaj M. (Indian Institute of Technology Roorkee), Maria Joseph, Felix Orlando (Indian Institute of Technology Roorkee) |
Keywords: Virtual and Augmented Tele-presence Environments, Interaction Kinesics
Abstract: Teleoperation is required where the operator cannot directly access the actual workspace, such as in nuclear exploration, garbage treatment, and the surgical workspace in laparoscopic and Natural Orifice Transluminal Endoscopic Surgery (NOTES). In a master-slave robotic teleoperation system, force and velocity information is exchanged between the master and slave robots. In this paper, the bondgraph modelling technique is used to model a master-slave robotic teleoperation system. Here, the elements of the teleoperation system, such as the master robot (Phantom Omni haptic device), the slave robot (miniature in-vivo robot), the communication architecture, and the external environment, are modeled in bondgraph. Simulation results for trajectory tracking (in unilateral teleoperation) and force feedback (in bilateral teleoperation) between master and slave are presented.
|
|
13:15-13:30, Paper WeBT5.2 | |
Simultaneously Concentrated PSWF-Based Synchrosqueezing S-Transform and Its Application to R Peak Detection in ECG Signal (I) |
Singh, Neha (IIT Roorkee), Deora, Puneesh (IIT Roorkee), Pradhan, PyariMohan (IIT Roorkee) |
Keywords: Medical and Surgical Applications, Machine Learning and Adaptation, Evaluation Methods and New Methodologies
Abstract: Time-frequency (TF) analysis through the well-known TF tool, the S-transform (ST), has been extensively used for QRS detection in Electrocardiogram (ECG) signals. However, the Gaussian window-based conventional ST suffers from poor TF resolution due to the fixed scaling criterion and the long taper of the Gaussian window. Many variants of ST using different scaling criteria have been reported in the literature to improve the accuracy of QRS complex detection. This paper presents the usefulness of the zero-order prolate spheroidal wave function (PSWF) as a window kernel in ST. The PSWF has the ability to concentrate maximum energy in narrow and finite time and frequency intervals, and provides more flexibility in changing window characteristics. The synchrosqueezing transform is a post-processing method that remarkably improves the energy concentration in a time-frequency representation (TFR). This paper proposes a PSWF-based synchrosqueezing ST for the detection of R peaks in ECG signals. The results show that the proposed method accurately detects R peaks with a sensitivity, positive predictivity, and accuracy of 99.96%, 99.96%, and 99.92%, respectively. It also improves upon existing techniques in terms of the aforementioned metrics and the search-back range.
|
|
13:30-13:45, Paper WeBT5.3 | |
Continuous Higher Order Sliding Mode Control of Bevel-Tip Needle for Percutaneous Interventions (I) |
Maria Joseph, Felix Orlando (Indian Institute of Technology Roorkee) |
Keywords: Medical and Surgical Applications
Abstract: The major challenges in percutaneous interventions involving rigid needles are to assure accuracy in target reaching and stability. Human factors, such as the breathing process and human errors, along with image distortion during the needle deformation process, give rise to the abovementioned challenges. Thus, in this work, a robust second-order sliding mode controller called the super-twisting algorithm is proposed to ensure a chattering-free response of the bevel-tip flexible needle motion. The performance of the proposed controller is tested on the kinematic model of the bevel-tip needle. Furthermore, a comparison study with a conventional sliding mode controller is performed through extensive simulations. From the results, it is observed that the stable maneuvering performance of the needle under the proposed algorithm will be suitable for real-time clinical scenarios involving minimally invasive surgeries.
|
|
13:45-14:00, Paper WeBT5.4 | |
Development of an Intelligent Cane for Visually Impaired Human Subjects (I) |
Maria Joseph, Felix Orlando (Indian Institute of Technology Roorkee) |
Keywords: Assistive Robotics, Robots in Education, Therapy and Rehabilitation, Human Factors and Ergonomics
Abstract: People with visual disabilities often depend for decision making on external assistance provided by humans, trained dogs, or other special electronic devices, but these aids have certain limitations. Hence, an intelligent white cane has been developed for visually challenged people. It uses HR-SO4 ultrasonic sensors to detect any obstacle that lies within the range of the sensor and determine its distance. The ultrasonic sensor has a range of up to 450 meters, so any object lying within this range can be easily detected, and a warning is provided by a buzzer that gives beeping signals to alert the user for prompt action. In addition, an intelligent technique for object detection and classification is used: a web camera captures the image, which is then classified. The classification result is obtained as text, which is further converted to an audio signal using text-to-speech conversion, implemented in Python using the eSpeak open-source library.
|
|
14:00-14:15, Paper WeBT5.5 | |
Intention Detection and Gait Recognition (IDGR) System for Gait Assessment: A Pilot Study (I) |
Singh, Yogesh (Indian Institute of Technology Gandhinagar), Kher, Manan (Institute of Technology, Nirma University), Vashista, Vineet (Indian Institute of Technology Gandhinagar) |
Keywords: User-centered Design of Robots, Detecting and Understanding Human Activity, Monitoring of Behaviour and Internal States of Humans
Abstract: Gait abnormality is the most significant symptom in neurologically affected patients. To improve their quality of life, it is important to complement and further enhance the existing qualitative gait analysis protocol with a technically sound quantitative paradigm. In this paper, we present a pilot study and the development of a wearable intention detection and gait recognition (IDGR) system. This system comprises a well-established integrated network of microcontrollers and sensors, which acts as a diagnostic tool for gait correction. The IDGR system provides real-time feedback on temporal gait parameters through a user interface. Furthermore, the system classifies the subject's intention (standing still, walking, or ascending stairs) using simple logic inherent to an individual's walking style. It offers reliable tools for functional assessment of the patient's progress by measuring physical parameters. We conducted an experiment on a healthy participant as a validation of our approach and a proof-of-concept.
|
|
14:15-14:30, Paper WeBT5.6 | |
Transferring Dexterous Surgical Skill Knowledge between Robots for Semi-Autonomous Teleoperation (I) |
Rahman, Md Masudur (Purdue University - West Lafayette), Sanchez-Tamayo, Natalia (Purdue University), Gonzalez, Glebys (Purdue University), Agarwal, Mridul (Purdue University), Vaneet, Aggarwal (Purdue University), Voyles, Richard (Purdue University), Xue, Yexiang (Purdue University), Wachs, Juan (Purdue University) |
Keywords: Medical and Surgical Applications, Machine Learning and Adaptation, Degrees of Autonomy and Teleoperation
Abstract: In the future, deployable, teleoperated surgical robots can save the lives of critically injured patients in battlefield environments. These robotic systems will need to have autonomous capabilities to deal with communication delays and unexpected environmental conditions during critical phases of the procedure. Understanding and predicting the next surgical actions (referred to as “surgemes”) is essential for autonomous surgery. Most approaches for surgeme recognition cannot cope with the high variability associated with austere environments and thereby cannot “transfer” well to field robotics. We propose a methodology that uses compact image representations with kinematic features for surgeme recognition in the DESK dataset. This dataset offers samples of surgical procedures over different robotic platforms with high variability in the setup. We performed surgeme classification in two setups: 1) no transfer, and 2) transfer from a simulated scenario to two real deployable robots. The results were then compared with recognition accuracies using only kinematic data in the same experimental setup. The results show that our approach improves recognition performance over kinematic data across different domains. The proposed approach produced a transfer accuracy gain of up to 20% between the simulated and the real robot, and up to 31% between the simulated robot and a different robot. A transfer accuracy gain was observed in all cases, even those already above 90%.
|
|
WeCT1 |
Room T8 |
Poster Slot 1 |
Poster Session |
Chair: Behera, Laxmidhar | IIT Kanpur |
|
15:00-17:00, Paper WeCT1.1 | |
Learning by Collaborative Teaching: An Engaging Multi-Party CoWriter Activity |
El Hamamsy, Laila (EPFL), Johal, Wafa (École Polytechnique Fédérale De Lausanne), Asselborn, Thibault (EPFL), Nasir, Jauwairia (EPFL), Dillenbourg, Pierre (EPFL) |
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: This paper presents the design of a novel and engaging collaborative learning activity for handwriting in which a group of participants simultaneously tutor a Nao robot. The activity was intended to take advantage of both collaborative learning and the learning-by-teaching paradigm to improve children’s meta-cognition (perception of their own skills). Multiple engagement probes were integrated into the activity as a first step towards fostering long-term interactions. As a lot of research targets social interactions, the goal here was to determine whether an engagement strategy focused on the task could be as efficient as, or more efficient than, one focused on social interactions and participants’ introspection. To that effect, two engagement strategies were implemented. They differed in content but used the same multi-modal design to increase participants’ meta-cognitive reflection, once on the task and performance, and once on participants’ enjoyment and emotions. Both strategies were compared to a baseline by probing and assessing engagement at the individual and group level, along the behavioural, emotional and cognitive dimensions, in a between-subjects experiment with 12 groups of children. The experiments showed that the collaborative task pushed the children to adapt their manner of writing to the group, even though the adopted solution was not always correct. Furthermore, there was no significant difference between the strategies in terms of behaviour on task (behavioural engagement), satisfaction (emotional engagement) or performance (cognitive engagement), as the group dynamics had a stronger impact on the outcome of the collaborative teaching task. Therefore, both the task and social engagement strategies can be considered efficient in the context of collaboration.
|
|
15:00-17:00, Paper WeCT1.2 | |
Trajectory Optimization of Continuum Arm Robots |
Yadav, Ritesh (BITS Pilani), Rout, Bijay Kumar (Birla Institute of Technology and Science, Pilani, India) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Anthropomorphic Robots and Virtual Humans, Curiosity, Intentionality and Initiative in Interaction
Abstract: Rigid manipulators are suitable for highly structured environments and standard applications. Real-world applications instead call for continuum manipulators, which offer the required high degrees of freedom and compliance. The current work focuses on the trajectory optimization of a continuum robot for a specified application, with the aim of minimizing energy usage. To achieve this, Lagrangian mechanics is used to develop the mathematical model of the continuum robot with its payload. The trajectory optimization is carried out by treating the problem as a nested optimization problem. The outer optimization task optimizes the trajectory with minimization of input force as the primary goal, where the initial and final configurations of the arm are already available; a Genetic Algorithm is used as the optimizer for this task. The purpose of the inner optimization loop is to find a feasible inverse solution for the manipulator, which is required to calculate the input forces that the outer loop optimizes; a constrained non-linear optimization algorithm is used for this task. The optimization results show a 30-80% decrease in the input force required for the specified trajectories of the arm. The current paper shows that various tasks can be optimized using the formulated strategy to save the energy required by the arm to execute a specified task.
|
|
15:00-17:00, Paper WeCT1.3 | |
Playful Interaction with Humanoid Robots for Social Development in Autistic Children: A Pilot Study |
Cervera, Enric (Jaume-I University), del Pobil, Angel P. (Jaume-I University), Cabezudo, Maria-Isabel (Hospital De Manises) |
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: Children with a diagnosis of Autism Spectrum Disorder (ASD) have serious difficulties in the development of their communicative and social skills. In recent years, robots have been tested in the therapy of autistic children as a promising tool for increasing their interest and motivation in the activities. In this paper we present the results of a pilot study on playful robot-child interaction developed for the therapy of diagnosed children aged between 3 and 5. The children were separated into an intervention and a control group, and their progress in development was measured before and after the intervention. Although the experience was unanimously considered positive by parents and caregivers, we found no significant differences between the intervention and control groups. Some observed trends demand more caution and additional studies for identifying not only the advantages but also the possible pitfalls of the use of robots in the therapy of autistic children.
|
|
15:00-17:00, Paper WeCT1.4 | |
Formulating User Requirements for Designing Collaborative Robots |
Macovetchi, Ana Maria (Blue Ocean Robotics), Shahabeddini Parizi, Mohammad (Blue Ocean Robotics), Kirstein, Franziska (Blue Ocean Robotics) |
Keywords: HRI and Collaboration in Manufacturing Environments, User-centered Design of Robots, Evaluation Methods and New Methodologies
Abstract: This paper is concerned with a methodology for gathering user requirements (URs) to inform a later design process of industrial collaborative robots. The methodology is applied to four use cases from CoLLaboratE, which is a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks. The project follows a User-Centered Design (UCD) approach by involving end-users in the development process. The user requirements (URs) are gathered using a mixed methodology, with the purpose of formulating a list of case specific requirements, which can be also generalized. The results presented in this paper consist of the list of user requirements, which will serve as a basis in establishing scenarios and system requirements for later design of a Human-Robot Collaboration (HRC) system. The described methodology contributes to the field of design of HRC systems by taking a UCD approach. The methodology is aimed at improving the solution performance and users’ acceptance of the technology, by early involvement of the users in the design process. It can also be adaptable to other development projects, where users play an essential role in creating Human-Robot Collaboration solutions.
|
|
15:00-17:00, Paper WeCT1.5 | |
Dark-Room Exchange: Human Supervision of Decentralized Multi-Robot Systems Using Distributed Ledgers and Network Mapping |
Krishnamoorthy, Sai-Prasanth (NYU Tandon School of Engineering), Go, Albert (Massachusetts Institute of Technology), Tiwari, Ashlee (Indian Institute of Technology, Kanpur), Kapila, Vikram (NYU Tandon School of Engineering) |
Keywords: Novel Interfaces and Interaction Modalities, Cooperation and Collaboration in Human-Robot Teams, Multi-modal Situation Awareness and Spatial Cognition
Abstract: This paper develops a distributed technique to populate the network graph of a decentralized multi-robot system (MRS) by employing a consensus protocol for extracting the identities and states of each robot's neighbors in the MRS. A dark-room exchange (DRE) technique is proposed wherein each robot uses its on-board 2D LiDAR for range sensing and peer-to-peer communication to identify and track neighboring objects. The resulting information is utilized to build and maintain a distributed ledger populated with the information of the MRS network graph structure to facilitate supervision by human operators. The system is tested in a simulated environment consisting of TurtleBot3 robots scattered in a 2D plane. Using the results of simulation, an analysis of the speed and performance of the DRE technique is conducted that illustrates high reliability and fast response times. The paper concludes with a discussion of the future scope of this research for multi-robot/swarm applications.
|
|
15:00-17:00, Paper WeCT1.6 | |
Communicating with SanTO – the First Catholic Robot |
Trovato, Gabriele (Waseda University), Pariasca, Franco (Pontificia Universidad Catolica Del Peru), Ramirez, Renzo (Pontificia Universidad Católica Del Perú), Cerna, Javier (Pontificia Universidad Catolica Del Peru), Reutskiy, Vadim (Innopolis University), Rodriguez, Laureano (Pontificia Universidad Católica Del Perú), Cuellar, Francisco (Pontificia Universidad Catolica Del Peru) |
Keywords: Innovative Robot Designs, Novel Interfaces and Interaction Modalities, Linguistic Communication and Dialogue
Abstract: In the 1560s Philip II of Spain commissioned the realisation of a "mechanical monk", a small humanoid automaton with the ability to move and walk. Centuries later, we present a Catholic humanoid robot. With the appearance of a statue of a saint and some interactive features, it is designed for Christian Catholic users for a variety of purposes. Its creation offers new insights on the concept of sacredness applied to a robot and the role of automation in religion. In this paper we present its concept, its functioning, and a preliminary test. A dialogue system, integrated within the multimodal communication consisting of vision, touch, voice and lights, drives the interaction with the users. We collected the first responses, particularly focused on the impression of sacredness of the robot, during an experiment that took place in a church in Peru.
|
|
15:00-17:00, Paper WeCT1.7 | |
Quantitative Evaluation of Clothing Assistance Using Whole-Body Robotic Simulator of the Elderly |
Joshi, Ravi Prakash (Graduate School of Life Science and Systems Engineering, Kyushu), Shibata, Tomohiro (Kyushu Institute of Technology), Ogata, Kunihiro (National Institute of Advanced Industrial Science and Technology), Matsumoto, Yoshio (AIST) |
Keywords: Evaluation Methods and New Methodologies, Robots in Education, Therapy and Rehabilitation, Programming by Demonstration
Abstract: The recent demographic trend across developed nations shows a dramatic increase in the aging population, falling fertility rates and a shortage of caregivers. Robotic solutions to clothing assistance can significantly improve the Activities of Daily Living (ADL) of the elderly and disabled. We have developed a clothing assistance robot using dual arms and conducted many successful demonstrations with healthy people. It was, however, impossible to systematically evaluate its performance because the human arms are occluded by the shirt and the robot during dressing. To address this problem, we propose to use another robot, the Whole-Body Robotic Simulator of the Elderly, which can mimic the posture and movement of elderly persons during the dressing task. The dressing task is accomplished by utilizing Dynamic Movement Primitives (DMP), wherein the control points of the DMP are determined by applying forward kinematics to the robotic simulator. The experimental results show the plausibility of our approach.
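As background for the Dynamic Movement Primitives mentioned in the abstract: a DMP is a damped spring system driven by a phase-dependent forcing term. The following 1-D rollout is a generic textbook sketch with common illustrative constants, not the authors' implementation:

```python
import numpy as np

def dmp_rollout(y0, goal, weights, dt=0.01, tau=1.0,
                alpha=25.0, beta=6.25, alpha_x=8.0):
    """Roll out a 1-D discrete Dynamic Movement Primitive (DMP).

    `weights` are forcing-term weights for Gaussian basis functions;
    the constants are common textbook defaults, not the paper's values.
    """
    n = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n))  # basis centres in phase
    widths = n ** 1.5 / centers / alpha_x              # heuristic basis widths
    y, yd, x = y0, 0.0, 1.0                            # position, velocity, phase
    traj = []
    while x > 1e-3:
        # Phase-gated forcing term shaped by the learned weights.
        psi = np.exp(-widths * (x - centers) ** 2)
        f = x * (goal - y0) * psi.dot(weights) / (psi.sum() + 1e-10)
        # Critically damped spring pulling y towards the goal, plus forcing.
        ydd = (alpha * (beta * (goal - y) - tau * yd) + f) / tau ** 2
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt / tau                   # canonical phase decay
        traj.append(y)
    return traj
```

With all-zero forcing weights the rollout reduces to a critically damped reach from `y0` to `goal`; learned weights shape the path in between, which is how control points from the robotic simulator's forward kinematics could parameterize the dressing motion.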
|
|
15:00-17:00, Paper WeCT1.8 | |
Impression Change on Nonverbal Non-Humanoid Robot by Interaction with Humanoid Robot |
Ueno, Azumi (Tokyo University of Agriculture and Technology), Mizuuchi, Ikuo (Tokyo University of Agriculture and Technology), Hayashi, Kotaro (Toyohashi University of Technology) |
Keywords: Anthropomorphic Robots and Virtual Humans, Curiosity, Intentionality and Initiative in Interaction, Non-verbal Cues and Expressiveness
Abstract: Even if a robot is not designed to convey a specific impression, a means of adding an impression later would, we considered, be useful for social robot design. In particular, anthropomorphism seems to be an important impression in designing social interaction between humans and robots. In the movie “STAR WARS,” there is a non-humanoid robot, called R2-D2, which communicates mainly by sounds. A humanoid interpreter robot, called C-3PO, responds to the sounds of R2-D2 with natural language and gestures, and the audience finds a richer personality in R2-D2 than its sounds alone would convey. We therefore considered that it might be possible to change the impression of a non-humanoid robot emitting simple sounds through communication with a humanoid robot that speaks a natural language and makes gestures. We conducted an impression evaluation experiment. In the condition where the robots interact, observers rated the anthropomorphism of the non-humanoid robot higher than in the non-interacting condition. Some other impressions also changed.
|
|
15:00-17:00, Paper WeCT1.9 | |
MobiKa - Low-Cost Mobile Robot for Human-Robot Interaction |
Graf, Florenz (Fraunhofer IPA), Odabasi, Cagatay (Fraunhofer IPA), Jacobs, Theo (Fraunhofer IPA), Graf, Birgit (Fraunhofer IPA), Födisch, Thomas (BruderhausDiakonie) |
Keywords: Applications of Social Robots, Innovative Robot Designs, Motion Planning and Navigation in Human-Centered Environments
Abstract: One way to allow elderly people to stay longer in their homes is to use service robots to support them with everyday tasks. With this goal, we design, develop and evaluate a low-cost mobile robot for communicating with elderly people. The main idea is to create an affordable communication assistant robot optimized for multimodal Human-Robot Interaction (HRI). Our robot can navigate autonomously through dynamic environments using a new algorithm for calculating poses for approaching persons. The robot was tested in a real-life scenario in an elderly care home.
|
|
15:00-17:00, Paper WeCT1.10 | |
Design and Evaluation of Expressive Turn-Taking Hardware for a Telepresence Robot |
Fitter, Naomi T. (University of Southern California), Joung, Youngseok (University of Southern California), Demeter, Marton (University of Southern California), Hu, Zijian (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Non-verbal Cues and Expressiveness, Social Presence for Robots and Virtual Humans, Assistive Robotics
Abstract: Although nonverbal expressive abilities are an essential element of human-to-human communication, telepresence robots support only select nonverbal behaviors. As a result, telepresence users can experience difficulties taking turns in conversation and using various cues to obtain the attention of others. To expand telepresence robot users' abilities to hold the floor during conversation, this work proposes and evaluates new types of expressive telepresence robot hardware. The described within-subjects study compared robot user and co-present person experiences during teamwork activity conditions involving basic robot functions, expressive LED lights, and an expressive robot arm. We found that among participants who preferred the arm-based expressiveness, individuals in both study roles felt the robot operator to be more in control of the robot during the arm condition, and participants co-located with the robot felt closer to their teammate during the arm phase. Participants also noted advantages of the LED lights for notification-type information and advantages of the arm for increasing perceptions of the robot as a human-like entity. Overall, these findings can inform future work on augmenting the nonverbal expressiveness of telepresence robots.
|
|
15:00-17:00, Paper WeCT1.11 | |
Study of Empathy on Robot Expression Based on Emotion Estimated from Facial Expression and Biological Signals |
Sripian, Peeraya (Shibaura Institute of Technology), Kurono, Yuya (Shibaura Institute of Technology), Yoshida, Reiji (Shibaura Institute of Technology), Sugaya, Midori (Shibaura Institute of Technology) |
Keywords: Creating Human-Robot Relationships, Robot Companions and Social Robots, Motivations and Emotions in Robotics
Abstract: Empathy, the ability to share another's feelings, is one of the effective elements in promoting mutual reliability and the construction of a good relationship. In order to create empathy between human and robot, a robot must be able to estimate the emotion of the human and reflect the same emotion in its expression. In general, emotion can be estimated from observable expressions such as facial expression, or from unobservable signals such as biological signals. Although there are many methods for measuring emotion from both facial expression and biological signals, few studies have compared the estimated emotions. In this paper, we investigate whether emotion estimated from facial expression or from biological signals better leads to empathy toward a robot. Using our proposed emotion estimation system, we performed two experiments and found that impressions of sociability were rated significantly higher when the reflected emotion was estimated from the uncontrollable expression, i.e., biological signals.
|
|
WeCT2 |
Room T2 |
Poster Slot 2 |
Poster Session |
Chair: Behera, Laxmidhar | IIT Kanpur |
|
15:00-17:00, Paper WeCT2.1 | |
Does a Friendly Robot Make You Feel Better? |
Ruijten, Peter (Eindhoven University of Technology), Cuijpers, Raymond (Eindhoven University of Technology) |
Keywords: Motivations and Emotions in Robotics, Non-verbal Cues and Expressiveness, Creating Human-Robot Relationships
Abstract: As robots are taking a more prominent role in our daily lives, it becomes increasingly important to consider how their presence influences us. Several studies have investigated effects of robot behavior on the extent to which that robot is positively evaluated. Likewise, studies have shown that the emotions a robot shows tend to be contagious: a happy robot makes us feel happy as well. It is unknown, however, whether the affect that people experience while interacting with a robot also influences their evaluation of the robot. This study aims to discover whether people’s affective and evaluative responses to a social robot are related. Results show that affective responses and evaluations are related, and that these effects are strongest when a robot shows meaningful motions. These results are consistent with earlier findings in terms of how people evaluate social robots.
|
|
15:00-17:00, Paper WeCT2.2 | |
Brand Recognition with Partial Visible Image in the Bottle Random Picking Task Based on Inception V3 |
Zhu, Chen (Waseda University), Matsumaru, Takafumi (Waseda University) |
Keywords: Machine Learning and Adaptation
Abstract: In the task of picking randomly ordered drink PET bottles by brand, overlapping bottles and varying viewing angles lead to low brand recognition accuracy. In this paper, we set out to increase the brand recognition accuracy and to find out how the overlapping rate affects the recognition accuracy. Using a stepping motor and a transparent fixture, training images were taken automatically from the bottles over 360 degrees to simulate pictures taken from arbitrary viewing angles. The images were then augmented with random cropping and rotation to simulate the overlapping and rotation encountered in a real application. Using the automatically constructed dataset, an Inception V3 network, transfer-learned from ImageNet, is trained for brand recognition. By generating a random mask with a specific overlapping rate on the original image, the Inception V3 achieves 80% accuracy when 45% of the object in the image is visible, and 86% accuracy when the overlapping rate is lower than 30%.
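The random-mask evaluation described in the abstract — occluding a controlled fraction of the image to probe accuracy at a given overlapping rate — could be sketched as follows (a hypothetical helper with illustrative names, not the authors' code):

```python
import numpy as np

def apply_random_occlusion(image, overlap_rate, rng=None):
    """Zero out a random rectangular region covering roughly
    `overlap_rate` of the image, mimicking occlusion of one bottle
    by another. Returns the masked image and the actual masked
    fraction. Hypothetical re-implementation, not the authors' code.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    target_area = overlap_rate * h * w
    # Pick a random aspect ratio for the mask, then solve for its size.
    aspect = rng.uniform(0.5, 2.0)
    mh = int(min(h, np.sqrt(target_area * aspect)))
    mw = int(min(w, target_area / max(mh, 1)))
    # Random top-left corner so the mask stays inside the image.
    y = rng.integers(0, h - mh + 1)
    x = rng.integers(0, w - mw + 1)
    out = image.copy()
    out[y:y + mh, x:x + mw] = 0  # the "overlapped" (invisible) region
    return out, (mh * mw) / (h * w)
```

Applying such a mask at, say, a 55% rate and measuring classification accuracy on the masked images would reproduce the kind of visibility-vs-accuracy curve the abstract reports.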
|
|
15:00-17:00, Paper WeCT2.3 | |
A Conditional Adversarial Network for Scene Flow Estimation |
Thakur, Ravi Kumar (Indian Institute of Information Technology Sri City, Chittoor), Mukherjee, Snehasis (Indian Institute of Information Technology Sri City, Chittoor) |
Keywords: Cognitive Skills and Mental Models, Machine Learning and Adaptation, Motion Planning and Navigation in Human-Centered Environments
Abstract: The problem of scene flow estimation in depth videos has been attracting the attention of researchers in robot vision, due to its potential application in various areas of robotics. Conventional scene flow methods are difficult to use in real-life applications due to their long computational overhead. We propose a conditional adversarial network, SceneFlowGAN, for scene flow estimation. The proposed SceneFlowGAN uses loss functions at two ends: the generator and the discriminator. The proposed network is the first attempt to estimate scene flow using generative adversarial networks, and is able to estimate both the optical flow and disparity from the input stereo images simultaneously. The proposed method is evaluated on a large RGB-D scene flow benchmark dataset.
|
|
15:00-17:00, Paper WeCT2.4 | |
Evaluating Imitation of Human Eye Contact and Blinking Behavior Using an Android for Human-Like Communication |
Sano, Tetsuya (Nara Institute of Science and Technology), Yuguchi, Akishige (Nara Institute of Science and Technology), Garcia Ricardez, Gustavo Alfonso (Nara Institute of Science and Technology (NAIST)), Takamatsu, Jun (Nara Institute of Science and Technology), Nakazawa, Atsushi (Kyoto University), Ogasawara, Tsukasa (Nara Institute of Science and Technology) |
Keywords: Androids
Abstract: The appearance of android robots is very similar to that of human beings. Given their appearance, we expect that androids might provide us with high-level communication. The imitation of human behavior gives us the feeling of natural behavior even if we do not know what drives high-level communication. In this paper, we evaluate the imitation of human eye behavior by an android. In our setting, the android imitates human eye behavior while explaining a research topic, and a person acts as the listener. We construct a method to imitate the eye behavior captured by eye trackers. For the evaluation, we asked seventeen male subjects for their subjective evaluation, comparing the imitation with an android whose eye-contact duration and eyeblinks were controlled either by editing the imitation or by programming rule-based behavior. From the results, we found that 1) the rule-based behaviors preserved human-likeness, 2) 3-second eye contact obtained better scores regardless of whether the eye behavior was imitation-based or rule-based, and 3) the subjects might regard longer eyeblinks as voluntary eyeblinks, with the intention to break eye contact.
|
|
15:00-17:00, Paper WeCT2.5 | |
Deep-Pack: A Vision-Based 2D Online Bin Packing Algorithm with Deep Reinforcement Learning |
Kundu, Olyvia (TCS Innovation Labs), Dutta, Samrat (TCS Research and Innovation), Kumar, Swagat (Tata Consultancy Services) |
Keywords: Machine Learning and Adaptation
Abstract: This paper looks into the problem of online 2D bin packing, where the objective is to place each incoming object so as to maximize the overall packing density inside the bin. Unlike offline methods, online methods cannot use information about the sequence of future objects and are hence comparatively difficult. A deep reinforcement learning framework based on Double DQN is proposed to solve this problem: it takes an image showing the current state of the bin as input and outputs the pixel location where the incoming object should be placed. The reward function is defined such that the system learns to place an incoming object adjacent to the already placed items, so that the maximum grouped empty area is retained for future placements. The resulting approach is shown to outperform the existing state-of-the-art method for 2D online packing and can easily be extended to 3D online bin packing problems.
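The adjacency-seeking reward described in the abstract might look roughly like the following grid-world sketch, which rewards placements whose borders touch already-filled cells or the bin walls (a hypothetical formulation; the paper's actual reward additionally tracks the largest grouped empty area):

```python
import numpy as np

def place_and_reward(bin_grid, item_h, item_w, y, x):
    """Place an item_h x item_w object with its top-left corner at (y, x)
    in a 0/1 occupancy grid and return (new_grid, reward).

    Invalid actions (out of bounds or overlapping) get a penalty; valid
    placements are rewarded by the fraction of the item's perimeter that
    touches a bin wall or a previously placed item, encouraging compact,
    grouped packing. Hypothetical sketch, not the authors' formulation.
    """
    h, w = bin_grid.shape
    if y + item_h > h or x + item_w > w:
        return bin_grid, -1.0                      # out of bounds
    if bin_grid[y:y + item_h, x:x + item_w].any():
        return bin_grid, -1.0                      # overlap: invalid action
    new = bin_grid.copy()
    new[y:y + item_h, x:x + item_w] = 1
    # Count neighbor cells of the item that are a wall or a filled cell.
    contact = 0
    for yy in range(y, y + item_h):
        for xx in range(x, x + item_w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = yy + dy, xx + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    contact += 1                   # touching the bin wall
                elif bin_grid[ny, nx] == 1:
                    contact += 1                   # touching a placed item
    perimeter = 2 * (item_h + item_w)
    return new, contact / perimeter
```

In a Double DQN setup, each pixel of the bin image would correspond to one placement action, and a reward of this shape would steer the learned policy toward corner- and edge-hugging placements.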
|
|
15:00-17:00, Paper WeCT2.6 | |
Collaborative Transportation of Cable-Suspended Payload Using Two Quadcopters with Human in the Loop |
Prajapati, Pratik (Indian Institute of Technology Gandhinagar), Parekh, Sagar (Institute of Technology, Nirma University), Vashista, Vineet (Indian Institute of Technology Gandhinagar) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships
Abstract: We study the problem of collaborative transportation of a cable-suspended payload using two quadcopters. While previous works on transportation using quadcopters emphasize autonomous control and the generation of complex trajectories, in this paper a master-slave strategy is implemented in which the master quadcopter is controlled by a human and the slave quadcopter tries to stabilize the oscillations of the payload. A system of two quadcopters with a cable-suspended payload is under-actuated with coupled dynamics, and hence manual control is difficult. We use Lagrangian mechanics on a manifold to derive the equations of motion and apply variation-based linearization to linearize the system. We designed a Lyapunov-based controller to minimize the oscillations of the payload during transportation, leading to easier manual control of the master quadcopter.
|
|
15:00-17:00, Paper WeCT2.7 | |
Effective Human-Robot Collaboration in Near Symmetry Collision Scenarios |
da Silva Filho, José Grimaldo (University Grenoble Alpes - INRIA), Olivier, Anne-Hélène (Univ Rennes, M2S Lab, Inria, MimeTIC), Crétual, Armel (M2S Lab, University Rennes 2), Pettre, Julien (Inria - Irisa), Fraichard, Thierry (INRIA) |
Keywords: Motion Planning and Navigation in Human-Centered Environments, Monitoring of Behaviour and Internal States of Humans
Abstract: Recent works in the domain of Human-Robot Motion (HRM) attempt to plan collision avoidance behavior that accounts for cooperation between agents. This is important as effective cooperation requires, among several factors, predicting whether the person will attempt to avoid collision as first or last crosser. The robot should be able to replicate this decision-making process in order to allow for effective collaboration during collision avoidance. However, whenever situations arise in which the choice of crossing order is not consistent for people, the robot is forced to account for the possibility that both agents will assume the same role, i.e., a decision detrimental to collision avoidance. Thus, in our work we evaluate the boundary that separates the decision to avoid collision as first or last crosser. By approximating the uncertainty around this boundary, we developed a collision avoidance strategy to address this problem. Our approach is based on the insight that the robot should plan its collision avoidance motion in such a way that, even if the agents at first incorrectly choose the same crossing order, they would be able to unambiguously perceive their crossing order in their following collision avoidance action.
|
|
15:00-17:00, Paper WeCT2.8 | |
Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture |
Savery, Richard (Georgia Inst. of Technology), Weinberg, Gil (Georgia Inst. of Technology), Rose, Ryan (Georgia Inst. of Technology) |
Keywords: Creating Human-Robot Relationships, Sound design for robots, Cooperation and Collaboration in Human-Robot Teams
Abstract: As human-robot collaboration opportunities continue to expand, trust becomes ever more important for full engagement and utilization of robots. Affective trust, built on emotional relationship and interpersonal bonds is particularly critical as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity, designed to avoid uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine playing back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion from the musical phrase to low degree-of-freedom movements. Through a user study we showed that our system was able to accurately portray a range of emotions to the user. We also showed with a significant result that our non-linguistic audio generation achieved an 8% higher mean of average trust than using a state-of-the-art text-to-speech system.
|
|
15:00-17:00, Paper WeCT2.9 | |
Effectiveness of Robot Communication Level on Likeability, Understandability and Comfortability |
Chatterji, Nupur (Georgia Institute of Technology), Allen, Courtney (Georgia Institute of Technology), Chernova, Sonia (Georgia Institute of Technology) |
Keywords: Multimodal Interaction and Conversational Skills, User-centered Design of Robots, Sound design for robots
Abstract: The proliferation of commercially available social robots is undeniable. As humans and robots interact more closely and frequently, it brings to light issues surrounding how humans feel and what they perceive when dealing with robots - how much do they like the way the interaction occurs, how well do they understand what the robot is trying to communicate, and how comfortable do they feel? Much of this is intertwined with the communication level that the robot uses. In this paper, we evaluate the effect of different robot communication levels -- voice only, sound only, or voice and sound -- when applied to robots designed with differing levels of anthropomorphism and different purposes, evaluating the resulting impact on human-robot interaction with respect to likeability, understandability and comfort. We evaluate these factors on 13 commercially designed robots. Our results show that in almost all cases, survey respondents showed a preference for robots that incorporate more spoken interaction than currently deployed systems.
|
|
15:00-17:00, Paper WeCT2.10 | |
Tracking Control Incorporating Friction Estimation of a Cleaning Robot with a Scrubbing Brush |
Nemoto, Takuma (Singapore University of Technology and Design), Mohan, Rajesh Elara (Singapore University of Technology and Design) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper proposes an improved controller with a friction estimator for the path tracking of a cleaning robot having a scrubbing brush, motivated by the presence of the positive influence of brush friction on robot propulsion. For the controller and the estimator, the dynamics of a cleaning robot with a scrubbing brush is represented by a model incorporating the LuGre dynamic friction model. This dynamic robot model is transformed into appropriate forms in controller and estimator design. The controller employs a sliding mode control (SMC) law improved to exploit the friction for achieving a control target. The estimator provides estimates of the state and parameters of the dynamic model in an unscented Kalman filter (UKF) framework to calculate the friction force. The performance of the proposed controller with the estimator is tested through numerical simulations. The simulation results illustrate that the proposed approach is effective for the path tracking of cleaning robots with less input torque.
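For reference, the LuGre dynamic friction model that the abstract's controller and UKF estimator build on can be simulated with simple forward-Euler integration; the sketch below uses illustrative placeholder parameters, not values identified for the cleaning robot:

```python
import math

def lugre_friction(v_profile, dt, sigma0=1e5, sigma1=300.0, sigma2=0.4,
                   Fc=1.0, Fs=1.5, vs=0.01):
    """Simulate the LuGre dynamic friction model along a velocity profile.

    z is the average deflection of the microscopic bristles between the
    contacting surfaces; sigma0/sigma1/sigma2 are bristle stiffness,
    bristle damping and viscous friction; Fc/Fs are Coulomb and static
    friction levels, vs the Stribeck velocity. Parameter values are
    illustrative placeholders, not the paper's identified ones.
    """
    z = 0.0
    forces = []
    for v in v_profile:
        # Stribeck curve g(v), scaled so that z settles at g(v)*sign(v).
        g = (Fc + (Fs - Fc) * math.exp(-(v / vs) ** 2)) / sigma0
        zdot = v - abs(v) * z / g          # bristle deflection dynamics
        F = sigma0 * z + sigma1 * zdot + sigma2 * v
        z += zdot * dt                     # forward-Euler state update
        forces.append(F)
    return forces
```

In a UKF framework like the one the abstract describes, z would be part of the estimated state and the sigma/F parameters part of the estimated parameter set, with the computed friction force fed back into the sliding mode control law.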
|
|
WeCT3 |
Room T3 |
Poster Slot 3 |
Poster Session |
Chair: Behera, Laxmidhar | IIT Kanpur |
|
15:00-17:00, Paper WeCT3.1 | |
Evaluation of Robots That Signals a Pedestrian Using Face Orientation Based on Moving Trajectory Analysis |
Yamashita, Shohei (Hiroshima City University), Ikeda, Tetsushi (Hiroshima City University), Shinozawa, Kazuhiko (Advanced Telecommunications Research Institute), Iwaki, Satoshi (Hiroshima City University) |
Keywords: Social Intelligence for Robots, Non-verbal Cues and Expressiveness, Detecting and Understanding Human Activity
Abstract: Robots that share daily environments with us are required to behave in a socially acceptable manner. There are two important approaches to this purpose: 1) robots model human behavior, understand it properly and behave appropriately; 2) robots present their understanding and future behavior to surrounding people. In this paper, considering that people present various cues to other people around them using gaze and face direction, we focus on the latter approach and propose a robot that presents cues to an opposing pedestrian by turning its face. A problem with conventional research is that the evaluation of how easily a pedestrian can pass the robot has depended only on subjective impressions, so it was difficult to design the robot's behavior based on temporal changes in the ease of walking. In this paper, we evaluate the fluctuation of the pedestrian's moving-velocity vector as an index of the ease of walking and analyze its temporal change. We conducted preliminary experiments in which 12 subjects passed by the robot, comparing three types of presentation methods using the face. By presenting information using the face, we confirmed that the subjects tended to report better impressions of walking in the subjective evaluation, and that walking became relatively easier for several seconds while approaching the robot, based on analysis of the fluctuation of the moving-velocity vector.
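A fluctuation index over the moving-velocity vector could be computed along these lines; this is a sketch of the general idea, not the authors' exact definition.

```python
import numpy as np

def velocity_fluctuation(positions, dt):
    """Fluctuation of the moving-velocity vector along a trajectory.

    positions : (N, 2) array of pedestrian x-y positions sampled every dt.
    Returns the mean norm of the frame-to-frame velocity change
    (higher values indicate less smooth walking).
    """
    v = np.diff(positions, axis=0) / dt   # velocity vectors between samples
    dv = np.diff(v, axis=0)               # change of velocity between frames
    return float(np.mean(np.linalg.norm(dv, axis=1)))

# a straight constant-speed walk has (numerically) zero fluctuation
straight = np.column_stack([np.linspace(0, 5, 51), np.zeros(51)])
assert velocity_fluctuation(straight, dt=0.1) < 1e-9
```

Comparing this index over time for different face-presentation conditions is what lets ease of walking be analyzed as a temporal signal rather than a single post-hoc rating.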
|
|
15:00-17:00, Paper WeCT3.2 | |
Augmented Robotics for Learners: A Case Study on Optics |
JOHAL, Wafa (École Polytechnique Fédérale De Lausanne), Robu, Olguta (EPFL), Dame, Amaury (Oxford University), Magnenat, Stéphane (EPFL), Mondada, Francesco (EPFL) |
Keywords: Robots in Education, Therapy and Rehabilitation, Virtual and Augmented Tele-presence Environments, Novel Interfaces and Interaction Modalities
Abstract: In recent years, robots have been riding a wave of popularity as standard devices for teaching programming. The tangibility of robotic platforms allows for collaborative and interactive learning. Moreover, with these robot platforms, we also observe a shift of visual attention from the screen (on which the programming is done) to the physical environment (i.e. the robot). In this paper, we describe an experiment aiming to study the effect of using augmented reality (AR) representations of sensor data in a robotic learning activity. We designed an AR system able to display in real time the data of the infrared sensors of the Thymio robot. In order to evaluate the impact of AR on the learners' understanding of how these sensors work, we designed a pedagogical lesson that can run with or without the AR rendering. Two different age groups of students participated in this between-subject experiment, for a total of 74 children. The tests were the same for the experimental (AR) and control (NOAR) groups; the exercises differed only through the use of AR. Our results show that AR was worthwhile for younger groups dealing with difficult concepts. We discuss our findings and propose future work to establish guidelines for designing AR robotic learning sessions.
|
|
15:00-17:00, Paper WeCT3.3 | |
Incremental Estimation of Users’ Expertise Level |
Carreno, Pamela (University of Waterloo), Dahiya, Abhinav (University of Waterloo), Smith, Stephen L. (University of Waterloo), Kulic, Dana (University of Waterloo) |
Keywords: Detecting and Understanding Human Activity, Cooperation and Collaboration in Human-Robot Teams, Human Factors and Ergonomics
Abstract: Estimating a user's expertise level based on observations of their actions will result in better human-robot collaboration, by enabling the robot to adjust its behaviour and the assistance it provides according to the skills of the particular user it is interacting with. This paper details an approach to incrementally and continually estimate the expertise of a user whose goal is to optimally complete a given task. The user's expertise level, here represented as a scalar parameter, is estimated by evaluating how far their actions are from optimal; the closer to optimal the user's choices are, the more expert the user is considered to be. The proposed approach was tested using data from an online study where participants were asked to complete various instances of a simulated kitting task. An optimal planner was used to estimate the ``goodness" of all available actions at any given task state. We found that our expertise level estimates correlate strongly with observed after-task performance metrics and that it is possible to differentiate novices from experts after observing, on average, 33% of the errors made by the novices.
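One minimal way to realize such an incremental scalar estimate, assuming an optimal planner supplies the best and worst action costs at each state; the update rule and the smoothing factor `alpha` are illustrative choices, not the paper's.

```python
def update_expertise(expertise, action_cost, optimal_cost, worst_cost,
                     alpha=0.2):
    """Incremental scalar expertise update.

    goodness = 1 when the chosen action matches the optimal one and
    0 when it is the worst available; expertise is an exponential
    moving average of goodness over the observed actions.
    """
    span = max(worst_cost - optimal_cost, 1e-9)
    goodness = 1.0 - (action_cost - optimal_cost) / span
    return (1 - alpha) * expertise + alpha * goodness

# three near-optimal choices (optimal cost 10, worst cost 20)
e = 0.5
for cost in [10.0, 10.0, 11.0]:
    e = update_expertise(e, cost, 10.0, 20.0)
```

Because the estimate updates after every observed action, it can be queried mid-task, which is what makes continual adaptation of the robot's assistance possible.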
|
|
15:00-17:00, Paper WeCT3.4 | |
Autonomous Chess Playing Robot |
Rath, Prabin Kumar (NIT Rourkela), Mahapatro, Neelam (NIT Rourkela), nath, prasanmit (NIT Rourkela), Dash, Ratnakar (National Institute of Technology Rourkela) |
Keywords: Robot Companions and Social Robots, Creating Human-Robot Relationships, Innovative Robot Designs
Abstract: Chess is an ancient strategy board game played on an 8x8 board. Although digital games have become attractive today, chess retains its popularity even in the on-screen version of the game. There has also been considerable development of chess engines that play against a human counterpart. The objective of this work is to integrate these chess engines with an actual board game experience and create an autonomous chess player. The system is designed around an open-source chess engine and a computer numerical control (CNC) driven magnetic mechanism for moving the chess pieces. The moves of the human counterpart are captured through an overhead computer vision system. The robot makes the game much more interactive and builds a link between the human and the computer system.
|
|
15:00-17:00, Paper WeCT3.5 | |
Human-Robot Team: Effects of Communication in Analyzing Trust |
Ciocirlan, Stefan-Dan (University Politehnica of Bucharest), Agrigoroaie, Roxana (ENSTA-ParisTech), Tapus, Adriana (ENSTA-ParisTech) |
Keywords: Cooperation and Collaboration in Human-Robot Teams, Creating Human-Robot Relationships, Social Intelligence for Robots
Abstract: Trust is related to the performance of human teams, making it a significant characteristic that needs to be analyzed inside human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using online game-based tasks for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, and communication with text and verbal interaction either related or not related to the task) on trust are analyzed. Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when the human is working with a robot that uses text and verbal communication related to the task. They further suggest that human trust decreases to a lesser extent when the robot fails at its tasks if it uses text and verbal communication with the human.
|
|
15:00-17:00, Paper WeCT3.6 | |
Probabilistic Obstacle Avoidance and Object Following: An Overlap of Gaussians Approach |
Bhatt, Dhaivat (IIIT-Hyderabad), Garg, Akash (Delhi Technological University), GOPALAKRISHNAN, BHARATH (IIIT HYDERABAD), Krishna, Madhava (IIIT Hyderabad) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: Autonomous navigation and obstacle avoidance are core capabilities that enable robots to execute tasks in the real world. We propose a new approach to collision avoidance that accounts for uncertainty in the states of the agent and the obstacles. We first demonstrate that measures of entropy, used in current approaches for uncertainty-aware obstacle avoidance, are an inappropriate design choice. We then propose an algorithm that solves for an optimal control sequence with a guaranteed risk bound, using a measure of overlap between the two distributions that represent the state of the robot and the obstacle, respectively. Furthermore, we provide closed-form expressions that characterize the overlap as a function of the control input. The proposed approach enables a model-predictive control framework to generate bounded-confidence control commands. An extensive set of simulations has been conducted in various constrained environments in order to demonstrate the efficacy of the proposed approach over the prior art. We demonstrate the usefulness of the proposed scheme in tight spaces where computing risk-sensitive control maneuvers is vital. We also show how this framework generalizes to other problems, such as object following.
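As a rough illustration of an overlap-of-Gaussians measure (the paper derives its own closed-form expressions; the Bhattacharyya coefficient below is a common stand-in and not necessarily the authors' choice):

```python
import math

def gaussian_overlap(mu1, var1, mu2, var2):
    """Bhattacharyya coefficient between two 1-D Gaussians.

    Equals 1 for identical distributions and decays toward 0 as the
    robot and obstacle state distributions separate, so it can serve
    as a collision-risk surrogate.
    """
    s = var1 + var2
    return math.sqrt(2 * math.sqrt(var1 * var2) / s) * \
           math.exp(-(mu1 - mu2) ** 2 / (4 * s))

# overlap shrinks as the robot and obstacle means separate
near = gaussian_overlap(0.0, 0.1, 0.2, 0.1)
far = gaussian_overlap(0.0, 0.1, 2.0, 0.1)
assert far < near <= 1.0
```

Bounding such an overlap measure below a threshold, with the means and variances expressed as functions of the control input, is the general shape of a chance-constrained avoidance condition.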
|
|
15:00-17:00, Paper WeCT3.7 | |
Improving Robot Tutoring Interactions through Help-Seeking Behaviors |
Jordan, Kristin (University of Southern California), Pakkar, Roxanna (University of Southern California), Mataric, Maja (University of Southern California) |
Keywords: Robots in Education, Therapy and Rehabilitation, Creating Human-Robot Relationships, Applications of Social Robots
Abstract: Robot tutors have great potential for supporting personalized learning, in both home and classroom settings. To be effective, robot tutors must encourage users to seek help as needed during the learning process. We conducted a between-subjects study with N = 45 participants to compare different types of learner help-seeking behaviors (pressing an on-screen button, pressing a physical button, and raising a hand) and assess how help-seeking behavior preferences relate to perceptions of the robot tutor. The results indicate that hand raising was seen as the hardest method for a user to perform but the most useful and beneficial, with positive trends in students' intention to use a robot.
|
|
15:00-17:00, Paper WeCT3.8 | |
Coupling of Arm Movements During Human-Robot Interaction: The Handover Case |
Ferreira Duarte, Nuno (Instituto Superior Técnico, Lisbon), Rakovic, Mirko (University of Novi Sad, Faculty of Technical Sciences), Santos-Victor, José (Instituto Superior Técnico - Lisbon) |
Keywords: Non-verbal Cues and Expressiveness, Social Intelligence for Robots, Cooperation and Collaboration in Human-Robot Teams
Abstract: Collaboration involves understanding the actions of others, as well as acting in a way that can be understood by others. One such task is the handover. In this paper, we study the behaviour of humans during the handover and design mechanisms allowing a robot to learn from that behaviour. We analyse and model the arm movements of humans while handing over objects to one another. The contributions of this paper are the following: (i) a computational model that captures the behaviour of the “giver” and “receiver” of the object by coupling the arm motion; (ii) a discussion of this approach against a previous coupling strategy; and (iii) the embedding of the model in the iCub robot for human-to-robot handovers. Our results show that: (i) the robot can coordinate with the human to timely and safely receive the object; (ii) the robot behaves in a “human-like” manner while receiving the object; and (iii) our approach has significant advantages over the previous approach.
|
|
15:00-17:00, Paper WeCT3.9 | |
Towards Situational Awareness from Robotic Group Motion |
Levillain, Florent (Ensadlab-Reflective Interaction), St-Onge, David (Ecole De Technologie Superieure), Beltrame, Giovanni (Ecole Polytechnique De Montreal), Zibetti, Elisabetta (CHART-LUTIN) |
Keywords: Multi-modal Situation Awareness and Spatial Cognition, Novel Interfaces and Interaction Modalities, Cooperation and Collaboration in Human-Robot Teams
Abstract: The control of multiple robots in the context of tele-exploration tasks is often attentionally taxing, resulting in a loss of situational awareness for operators. Unmanned aerial vehicle swarms require significantly more multitasking than controlling a single aircraft, making it necessary to devise intuitive feedback sources and control methods for these robots. The purpose of this article is to examine a swarm’s nonverbal behaviour as a possible way to increase situational awareness and reduce the operator’s cognitive load by soliciting intuitions about the swarm’s behaviour. To progress towards a database of nonverbal expressions for robot swarms, we first define categories of communicative intents based on spontaneous descriptions of common swarm behaviours. The obtained typology confirms that the first two levels of situational awareness (as defined by Endsley: perception of the elements of the environment and comprehension of the situation) can be shared through a swarm's motion-based communication. We then investigate group motion parameters potentially connected to these communicative intents. The results show that synchronized movement and a tendency to form figures help convey meaningful information to the operator. We then discuss how this can be applied to realistic scenarios for the intuitive command of remote robotic teams.
|
|
15:00-17:00, Paper WeCT3.10 | |
Analysis of Factors Influencing the Impression of Speaker Individuality in Android Robots |
Mikata, Ryusuke (ATR), Ishi, Carlos Toshinori (ATR), Minato, Takashi (ATR), Ishiguro, Hiroshi (Osaka University) |
Keywords: Personalities for Robotic or Virtual Characters, Androids, Non-verbal Cues and Expressiveness
Abstract: Humans use not only verbal information but also non-verbal information in daily communication. Regarding non-verbal information, we have previously proposed methods for automatically generating hand gestures in android robots, with the purpose of generating natural human-like motion. In this study, we investigate the effects of hand gesture models trained/designed for different speakers on the impression of individuality conveyed through android robots. We consider that it is possible to express individuality in the robot by creating hand motions that are unique to an individual. Three factors were taken into account: the appearance of the robot, the voice, and the hand motion. Subjective evaluation experiments were conducted by comparing motions generated in two android robots, two speaker voices, and two motion types, to evaluate how each modality affects the impression of speaker individuality. Evaluation results indicated that all three factors affect the impression of speaker individuality, while different trends were found depending on whether or not the android is a copy of an existing person.
|
|
15:00-17:00, Paper WeCT3.11 | |
Synthesizing Unnatural Grasping in Humanoid Robots Using Fuzzy Logic |
Dayal, Udai, Arun (Birla Institute of Technology), Biswas, Shiladitya (Birla Institute of Technology), Penisetty, Sree Aslesh (Birla Institute of Technology, Mesra, Ranchi) |
Keywords: Anthropomorphic Robots and Virtual Humans, Machine Learning and Adaptation, Assistive Robotics
Abstract: This paper presents a whole-body grasping algorithm using fuzzy logic. First, a comprehensive analysis of the human body was performed by decomposing it into a simplistic stick diagram and examining all types of grasps possible. Combinatorial theory, namely enumerative combinatorics, was used to calculate the total number of grasps possible by the human body. The paper focuses largely on the grasps which can be physically accomplished by the NAO humanoid robot developed by Aldebaran Robotics. Finally, a fuzzy logic based algorithm was implemented to assign grasping weightage to the body parts, i.e. arms, torso, head, etc., of the robot depending upon the position of the object to be grasped.
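A toy version of fuzzy weightage assignment might look as follows; the membership ranges, the use of object height as the sole input, and the listed body parts are all hypothetical illustrations, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grasp_weights(object_height):
    """Fuzzy weightage of body parts for an object at a given height (m).

    The membership ranges below are illustrative placeholders.
    """
    w = {
        "arms":  tri(object_height, 0.6, 1.0, 1.4),
        "torso": tri(object_height, 0.2, 0.6, 1.0),
        "legs":  tri(object_height, 0.0, 0.2, 0.6),
    }
    total = sum(w.values()) or 1.0
    # normalize so the weightages sum to 1
    return {part: v / total for part, v in w.items()}

w = grasp_weights(1.0)   # chest-height object: arms dominate
```

Overlapping memberships mean intermediate object positions distribute the grasping effort across several body parts rather than switching abruptly.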
|
|
WeCT4 |
Room T4 |
Poster Slot 4 |
Poster Session |
Chair: Behera, Laxmidhar | IIT Kanpur |
|
15:00-17:00, Paper WeCT4.1 | |
Classroom Group Formation Model Based on Socion Theory Considering Communication in Social Networking Services |
Naito, Kosuke (Nagoya Institute of Technology), Kato, Shohei (Nagoya Institute of Technology) |
Keywords: Robots in Education, Therapy and Rehabilitation, Social Intelligence for Robots, Monitoring of Behaviour and Internal States of Humans
Abstract: In recent years, Social Networking Services (SNS) have become popular among young people. Unfortunately, as SNS usage has increased, cyberbullying has also increased and has become a social problem. Several previous studies have employed multi-agent simulation, which can be used to analyze human relationships, to identify bullying mechanisms. In this study, we model SNSs in a classroom and apply multi-agent simulation to analyze the influence of SNSs on classroom friendships. We focus on junior high school students as our research subjects. In the proposed model, which is based on socion theory, an agent can communicate with other agents using two types of networks, a classroom network and SNS networks, via a network recognized by each individual (in socion theory, people have mental networks that reflect society). Agents communicate face to face (FTF) in the classroom and using SNS in their SNS groups. In addition, agents have social skills and are categorized based on these skills. In this study, we simulate friendship relations considering SNSs and discuss the influence of SNSs on classroom relationships based on the simulation results. We performed two simulations: one involved only FTF communication and the other involved both FTF and SNS communication. We compare the two simulations and discuss the results. We found that, compared to FTF communication alone, the average likability rating of agents increased with SNS communication. On the other hand, we also found that specific agents were rejected. We consider that sharing information over SNSs is related to increased bullying. In conclusion, we discuss applying these results to educational robots.
|
|
15:00-17:00, Paper WeCT4.2 | |
Design of an Integrated Gripper with a Suction System for Grasping in Cluttered Environment |
Kang, Long (Hanyang University), Seo, Jong-Tae (Hanyang University), Kim, Sang-Hwa (Hanyang University), Yi, Byung-Ju (Hanyang University) |
Keywords: Innovative Robot Designs, Assistive Robotics, Anthropomorphic Robots and Virtual Humans
Abstract: The development of an integrated gripping system for grasping various objects in different work environments is very useful in many practical applications, such as warehouse automation. In this paper, we propose a linkage-driven underactuated gripper combined with a suction mechanism. The underactuated gripper has two fingers which can be controlled independently, and each finger is constructed by stacking one five-bar mechanism over one double parallelogram. This special architecture allows all of the actuators to be installed on the base. The suction mechanism is used to grasp objects in narrow spaces and to enhance grasp stability. The ability to grasp various objects is confirmed through practical grasping experiments on a commercial robot arm.
|
|
15:00-17:00, Paper WeCT4.3 | |
A Robust Position Estimation Algorithm under Unusual Large Range Errors |
Kim, Moonki (Korean Institute of Science and Technology), Lee, Ji Yang (Korean Institute of Science and Technology), Kim, Jung-Hee (Korea Institute of Science and Technology), Hassen, Nigatu (Korean Institute of Science and Technology), Kim, Doik (KIST) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: This paper provides a robust fusion algorithm for accurate position estimation under uncertain large errors in range measurements. Many researchers have studied UWB, IMU, and odometry integration algorithms for estimating position, velocity, attitude and IMU biases. However, the fused results of these conventional algorithms are likely to be affected by unusual ranging measurement errors. Therefore, in order to improve positioning accuracy under large UWB range errors, a robust fusion algorithm is required. In this paper, instead of using the range measurement directly, an estimated range with reduced errors is used. The errors can be reduced by using the odometry velocity, which is relatively accurate and independent of the range measurement. The robustness and accuracy of the proposed algorithm are verified on a mobile robot with real-time positioning and trajectory control under large range errors which occur randomly.
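The idea of replacing a raw UWB range with an odometry-corrected estimate can be sketched as below; the gating threshold and blending weight are illustrative assumptions, not the paper's design.

```python
def corrected_range(prev_range, odom_velocity, dt, measured_range,
                    gate=0.5, alpha=0.3):
    """Blend a UWB range measurement with an odometry-predicted range.

    The predicted range propagates the previous estimate with the
    (relatively accurate) odometry velocity projected on the anchor
    direction; measurements deviating by more than `gate` metres are
    treated as outliers and the prediction is kept instead.
    """
    predicted = prev_range + odom_velocity * dt
    if abs(measured_range - predicted) > gate:
        return predicted                 # unusual large range error
    return (1 - alpha) * predicted + alpha * measured_range

# a 3 m outlier is rejected in favour of the odometry prediction
r_outlier = corrected_range(5.0, 0.2, 0.1, 8.0)
# a consistent reading is blended in
r_blend = corrected_range(5.0, 0.2, 0.1, 5.1)
```

Because the prediction comes from a sensor independent of UWB, a single faulty range cannot drag the estimate far off before the next consistent measurement arrives.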
|
|
15:00-17:00, Paper WeCT4.4 | |
Factors Influencing the Human Preferred Interaction Distance |
rajamohan, vineeth (University of Nevada, Reno), scully-allison, connor (University of Nevada, Reno), dascalu, sergiu (University of Nevada, Reno), Feil-Seifer, David (University of Nevada, Reno) |
Keywords: Creating Human-Robot Relationships, Non-verbal Cues and Expressiveness
Abstract: Nonverbal interactions are a key component of human communication. As robots increasingly operate in close proximity to human beings, it is important that they follow the social rules governing the use of space. Prior research has conceptualized personal space as physical zones based on static distances. This work examined how preferred interaction distance can change given different interaction scenarios. We conducted a user study using three different robot heights. We also examined the difference in preferred interaction distance when a robot approaches a human and, conversely, when a human approaches a robot. Factors included in the quantitative analysis are the participants' gender, the robot's height, and the method of approach. Subjective measures included human comfort and perceived safety. The results obtained through this study show that robot height, participant gender and method of approach were significant factors influencing the measured proxemic zones and, accordingly, participant comfort. Subjective data showed that respondents regarded robots in a more favorable light following their participation in this study. Furthermore, the NAO was perceived most positively by respondents according to various metrics, and the PR2 Tall most negatively.
|
|
15:00-17:00, Paper WeCT4.5 | |
Perception of Social Intelligence in Robots Performing False-Belief Tasks |
Sturgeon, Stephanie (University of Nevada, Reno), Palmer, Andrew (University of Nevada, Reno), Blankenburg, Janelle (University of Nevada, Reno), Feil-Seifer, David (University of Nevada, Reno) |
Keywords: Social Intelligence for Robots, Cognitive Skills and Mental Models, Social Presence for Robots and Virtual Humans
Abstract: This study evaluated how a robot demonstrating a Theory of Mind (ToM) influenced human perception of social intelligence and animacy in a human-robot interaction. Data was gathered through an online survey where participants watched a video depicting a NAO robot either failing or passing the Sally-Anne false-belief task. Participants (N = 60) were randomly assigned to either the Pass or Fail condition. A Perceived Social Intelligence Survey and the perceived intelligence and animacy subsections of the Godspeed Questionnaire Series (GQS) were used as measures. The GQS was given before viewing the task to measure participant expectations, and again after to test changes in opinion. Our findings show that robots demonstrating ToM significantly increase perceived social intelligence, while robots demonstrating ToM deficiencies are perceived as less socially intelligent.
|
|
15:00-17:00, Paper WeCT4.6 | |
Dynamic Calibration between a Mobile Robot and SLAM Device for Navigation |
Ishikawa, Ryoichi (The University of Tokyo), Oishi, Takeshi (The University of Tokyo), Ikeuchi, Katsushi (Microsoft) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: In this paper, we propose a dynamic calibration between a mobile robot and a device using simultaneous localization and mapping (SLAM) technology, which we term the SLAM device, for a robot navigation system. The navigation framework assumes loose mounting of the SLAM device for easy use and requires an online adjustment to remove localization errors. The online adjustment method dynamically corrects not only the calibration errors between the SLAM device and the part of the robot to which the device is attached, but also the robot encoder errors, by calibrating the whole body of the robot. The online adjustment assumes that the information of the external environment and the shape information of the robot are consistent. In addition to the online adjustment, we also present an offline calibration between the robot and the device. The offline calibration is motion-based, and we identify the most efficient method based on the number of degrees of freedom of the robot's movement. Our method can easily be used for various types of robots, with sufficiently precise localization for navigation. In the experiments, we confirm the parameters obtained via two types of offline calibration based on the degrees of freedom of robot movement. We also validate the effectiveness of the online adjustment method by plotting localized position errors during a robot's intense movement. Finally, we demonstrate navigation using a SLAM device.
|
|
15:00-17:00, Paper WeCT4.7 | |
Development of a Teach Pendant for Humanoid Robotics with Cartesian and Joint-Space Control Modalities |
Otarbay, Zhenis (Nazarbayev University), Assylgali, Iliyas (Nazarbayev University), Yskak, Asset (Nazarbayev University), Folgheraiter, Michele (Nazarbayev University) |
Keywords: Androids, HRI and Collaboration in Manufacturing Environments, Innovative Robot Designs
Abstract: This paper presents the design, construction and testing of a teach pendant for humanoid robotics applications. The system is equipped with a touch-based Graphical User Interface (GUI) from which the robot's joints and end-effectors can easily be controlled in joint and Cartesian space, respectively. A visual representation of the leg poses was integrated into the interface, allowing the operator to test the motion of the limbs before their actual execution on the real robot. The forward and inverse kinematic models were formalized according to the Denavit-Hartenberg convention and implemented in Python 3 with the support of the Tkinter, NumPy and Matplotlib libraries. The chassis of the teach pendant was designed using SolidWorks software to accommodate a 9-inch display with a touch sensor, a 5000 mAh battery, a Raspberry Pi 3, and an ATmega168 microcontroller. On the front panel, rotary encoders and various buttons provide access to the menu and allow precise tuning of the control variables.
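Since the abstract states that the kinematics were implemented in Python with NumPy following the Denavit-Hartenberg convention, a generic DH forward-kinematics sketch may clarify the approach; the link parameters below are hypothetical, not the robot's actual ones.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-joint DH transforms; returns the end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# a hypothetical 2-link planar leg with both joints at 0 rad
T = forward_kinematics([(0.0, 0.0, 0.3, 0.0), (0.0, 0.0, 0.3, 0.0)])
```

Rendering the chained transforms with Matplotlib is what would give the operator the leg-pose preview described in the abstract.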
|
|
15:00-17:00, Paper WeCT4.8 | |
Influencing Hand-Washing Behaviour with a Social Robot: HRI Study with School Children in Rural India |
Deshmukh, Amol (University of Glasgow), K Babu, Sooraj (AMMACHI Labs, Amrita Vishwa Vidyapeetham, Amritapuri, India), Radhakrishnan, Unnikrishnan (Amrita University), Ramesh, Shanker (AMMACHI Labs, Amrita Vishwa Vidyapeetham, Amritapuri, India), A, Parameswari (Ammachilabs, Amrita Vishwa Vidyapeetham, Amritapuri, India), Rao R, Bhavani (Amrita Vishwa Vidyapeetham University) |
Keywords: Applications of Social Robots, Social Presence for Robots and Virtual Humans, Innovative Robot Designs
Abstract: The work presented in this paper reports the influence of a social robot on the hand-washing behaviour of school children in rural India with a significant presence of indigenous tribes. We describe the design choices of our social robot to cater to the requirements of the intervention. The custom-built, wall-mounted social robot encouraged 100 children to wash their hands at appropriate times (before meals and after using the toilet) using the correct hand-washing technique shown on a wall poster. The results indicate that the intervention using the robot was effective (a 40% rise) at increasing levels of hand washing with soap and at improving hand-washing technique in ecologically valid settings.
|
|
15:00-17:00, Paper WeCT4.9 | |
Aggressive Bee: A New Vision for Missile Guidance Applications |
Jada, Chakravarthi (RGUKT-NUZVID), Urlana, Ashok (RGUKT-NUZVID), Baswani, Pavan (RGUKT-NUZVID), Shaik, Gouse Basha (RGUKT-NUZVID) |
Keywords: Evaluation Methods and New Methodologies, Motion Planning and Navigation in Human-Centered Environments, Curiosity, Intentionality and Initiative in Interaction
Abstract: This paper presents the idea of drawing inspiration from the aggressive bee for target tracking and extremely quick attack. An experimental setup was prepared, and multiple target movements were produced using various aggravators; for each movement, the bee-target motion episode was recorded. The bee-target position tuple was generated for all points, all trajectories, and all episodes. Three approaches, namely a kinematic model, an error-based model, and an energy-based approach, were implemented to derive the bee's tracking and attacking behaviour. All the approaches are explained step by step, and conclusions, prospective works, and future applications are given at the end.
|
|
15:00-17:00, Paper WeCT4.10 | |
Chasing and Aiming of a Moving Target |
Agarwal, Suryansh (IIT Kanpur), Hanchinal, Suraj Veerabhadra (IIT Kanpur), Chaudhary, Ashok Kumar (IIT Kanpur), Behera, Laxmidhar (IIT Kanpur) |
Keywords: Degrees of Autonomy and Teleoperation, Innovative Robot Designs, Cooperation and Collaboration in Human-Robot Teams
Abstract: This paper proposes an approximate solution for accurate pitch calculation that counters the effect of air resistance, together with a chasing algorithm, integrated into a dynamic pipeline using a behavior tree, so that an autonomous robot can follow, target and aim at a moving target with shooting precision. Air resistance is taken into account through linear as well as polynomial functions of the distance of the target from the robot, and the effectiveness of this correction is checked using simulations as well as theoretical derivations. The stability and effectiveness of the chasing algorithm rest on the fact that the robot tracks the pose of the target in real time, and its tri-vision module helps it localize better. The reactivity of the proposed pipeline is maintained through behavior-tree intelligence, which both structures and dynamically makes decisions. The above has been experimentally validated using the standard DJI robot as proof of utility in real-time applications.
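The drag-corrected pitch computation might be structured as follows; the drag-free baseline is standard flat-ground projectile motion, while the polynomial coefficients are placeholders rather than the paper's fitted values.

```python
import math

def aim_pitch(distance, muzzle_speed, g=9.81, drag_coeffs=(0.0, 0.002)):
    """Launch pitch (rad) for a target at `distance` on flat ground.

    Starts from the drag-free solution theta = 0.5 * asin(g*d / v^2)
    and adds a polynomial-in-distance correction for air resistance.
    The coefficients in `drag_coeffs` are illustrative placeholders.
    """
    x = g * distance / muzzle_speed ** 2
    if x > 1.0:
        raise ValueError("target out of range at this muzzle speed")
    theta = 0.5 * math.asin(x)
    # polynomial correction: c0 + c1*d + c2*d^2 + ...
    correction = sum(c * distance ** i for i, c in enumerate(drag_coeffs))
    return theta + correction

pitch = aim_pitch(5.0, 10.0)   # a short-range shot
```

Fitting the correction polynomial against simulated trajectories with drag is one plausible way to obtain such coefficients in practice.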
|
|
15:00-17:00, Paper WeCT4.11 | |
Investigations on Gesture Holding Durations at Speech Interruptions in Dialogue Robots |
Ishi, Carlos Toshinori (ATR), Mikata, Ryusuke (ATR), Minato, Takashi (ATR), Ishiguro, Hiroshi (Osaka University) |
Keywords: Non-verbal Cues and Expressiveness, Androids, Multimodal Interaction and Conversational Skills
Abstract: Hand gestures commonly occur in daily dialogue interactions and have important functions in human-human as well as human-robot communication. In this study, we consider one issue regarding speech interruptions by the dialogue partner in an android robot dialogue system. Specifically, we conducted a subjective experiment to evaluate the effects of gesture holding duration control after speech interruptions in our android robot. Evaluation results indicated that gesture holding durations of around 0.5 to 2 seconds after an interruption look natural, while longer durations may give the impression that the robot is displeased.
|
|
WeCT5 |
Room T5 |
Poster Slot 5 |
Poster Session |
Chair: Behera, Laxmidhar | IIT Kanpur |
|
15:00-17:00, Paper WeCT5.1 | |
Human-Robot Handovers with Signal Temporal Logic Specifications |
Kshirsagar, Alap (Cornell University), Kress-Gazit, Hadas (Cornell University), Hoffman, Guy (Cornell University) |
Keywords: HRI and Collaboration in Manufacturing Environments, Motion Planning and Navigation in Human-Centered Environments
Abstract: We present a formal methods based approach to human-robot handovers. Specifically, we use the automatic synthesis of a robot controller from specifications in Signal Temporal Logic (STL). This allows users to specify and dynamically change the robot's behaviors using high-level abstractions of goals and constraints rather than by tuning controller parameters. Also, in contrast to existing controllers, this controller can provide guarantees on the timing of each of the handover phases. We replicate the behavior of existing handover strategies from the literature to illustrate the proposed approach. We are currently implementing this approach on a collaborative robot arm and will evaluate its usability through human-participant experiments.
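To illustrate what an STL timing specification means operationally, here is a minimal robustness evaluator for an "eventually within a time window" formula over a sampled signal; this is a generic sketch, not the controller-synthesis machinery the paper uses, and the distance threshold is hypothetical.

```python
def eventually_within(signal, t_lo, t_hi, dt, predicate):
    """Robustness of the STL formula F_[t_lo, t_hi] (predicate > 0).

    signal    : list of scalar samples, one every dt seconds
    predicate : maps a sample to a robustness value (>0 = satisfied)
    Returns the max robustness over the window; it is positive iff
    the property holds somewhere in [t_lo, t_hi].
    """
    i_lo, i_hi = round(t_lo / dt), round(t_hi / dt)
    window = signal[i_lo:i_hi + 1]
    return max(predicate(s) for s in window)

# "within 2 s the gripper comes within 5 cm of the handover pose"
dist = [0.5, 0.4, 0.25, 0.12, 0.04, 0.03]   # sampled every 0.4 s
rho = eventually_within(dist, 0.0, 2.0, 0.4, lambda d: 0.05 - d)
assert rho > 0
```

A synthesis procedure would instead search for control inputs that keep such robustness values positive, which is how timing guarantees on each handover phase can be obtained.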
|
|
15:00-17:00, Paper WeCT5.2 | |
Development and Performance Evaluation of Onboard Auto-Pilot System for an Aerial Vehicle |
Kumar, Abhinay (IIT Jodhpur), Comandur, Venkatesan (IIT-Jodhpur) |
Keywords: Robots in Education, Therapy and Rehabilitation
Abstract: The design of hover-capable unmanned aerial vehicles has been an active area of research for the past several years. Stabilizing these unstable vehicles requires estimating orientation, velocity, position, external wind speed, etc., and providing appropriate control signals to the actuators so that the vehicle can perform the desired tasks. This paper presents the key aspects that need to be addressed to design an onboard flight control system for a multi-rotor vehicle. Several test rigs were developed to estimate the parameters of the propulsion system and to tune the gains for autonomous stabilization prior to free flight. The performance of the vehicle in free flight was tested for control inputs in orientation and altitude. Tracking in orientation is observed to be fairly accurate for small angle inputs, i.e., less than 5 deg, but deviates more with larger angle set points.
|
|
15:00-17:00, Paper WeCT5.3 | |
A Body Contact-Driven Pupil Response Pet-Robot for Enhancing Familiarity |
Sejima, Yoshihiro (Kansai University), Kawamoto, Hiroki (Okayama Prefectural University), Sato, Yoichiro (Okayama Prefectural University), Watanabe, Tomio (Okayama Prefectural University) |
Keywords: Multimodal Interaction and Conversational Skills, Non-verbal Cues and Expressiveness, Creating Human-Robot Relationships
Abstract: Pupil response is closely related to human affect and interest. Focusing on the pupil response in human-robot interaction, we have developed a pupil response system using hemisphere displays and confirmed that the pupil response is effective for enhancing affective conveyance in human-robot interaction. In this study, as basic research toward realizing friendly communication during embodied interaction between humans and robots, we developed a body contact-driven pupil response pet-robot for enhancing familiarity in human-robot interaction. The robot generates pupil responses based on body contact, using small displays on which 3D models of the pupil and iris are rendered. We then carried out a sensory evaluation experiment with the pet-robot. The results demonstrated that the pet-robot with the pupil response is strongly effective for enhancing familiarity in human-robot interaction.
|
|
15:00-17:00, Paper WeCT5.4 | |
Development of a Finger Rehabilitation System Considering Motion Sense and Vision Based on Mirror Therapy |
Ota, Shunsuke (University of Toyama), JINDAI, Mitsuru (University of Toyama), Yasuda, Toshiyuki (University of Toyama) |
Keywords: Robots in Education, Therapy and Rehabilitation, Embodiment, Empathy and Intersubjectivity
Abstract: Mirror therapy is a form of rehabilitation for symptoms such as hemiplegia. In this rehabilitation, it is effective to support the movement of the finger on the paralyzed side at the same time as the mirrored movement of the finger on the healthy side. Furthermore, more effective rehabilitation can be expected by reducing the discrepancy between vision and motion sense. Therefore, in this paper, we develop a finger rehabilitation system considering motion sense and vision based on mirror therapy. The rehabilitation movement preferred by humans is investigated through a sensory evaluation experiment, and the effectiveness of the developed finger rehabilitation system is demonstrated.
|
|
15:00-17:00, Paper WeCT5.5 | |
Extended Hybrid Code Network for Hospital Receptionist Robot |
Hwang, Eui Jun (The University of Auckland), Ahn, Byeong-Kyu (Sungkyunkwan University), MacDonald, Bruce (University of Auckland), Ahn, Ho Seok (The University of Auckland, Auckland) |
Keywords: Assistive Robotics, Machine Learning and Adaptation, Multimodal Interaction and Conversational Skills
Abstract: Task-oriented dialogue systems play a vital role in service robots. This paper presents preliminary results for a robot dialogue system in the context of a hospital receptionist. The system includes a Hybrid Code Network (HCN), an RNN-based end-to-end dialogue system, and an RNN-based gesture selection module that selects gestures according to the robot's utterance. The proposed system has been deployed on the NAO robot platform and tested on a sample hospital receptionist scenario.
|
|
15:00-17:00, Paper WeCT5.6 | |
Investigating the Understandability and Efficiency of Directional Cues in Robot Navigation |
Neggers, Margot (Eindhoven University of Technology), Ruijten, Peter (Eindhoven University of Technology), Cuijpers, Raymond (Eindhoven University of Technology), IJsselsteijn, Wijnand (Technische Universiteit Eindhoven) |
Keywords: Motion Planning and Navigation in Human-Centered Environments
Abstract: Understanding a robot’s directional cues depends not only on their clarity but also on how people perceive them. In the current study, the effectiveness of three directional cues (LEDs, Speech and Movement) was tested in three scenarios in which a robot and a person cross paths. Participants had to reach a target in a grid in as few moves as possible without colliding with the robot. We measured Perceived Message Understanding of the cues, recorded interaction time as a measure of efficiency, and asked participants to rate their subjective perception of the cues. Results showed that the LEDs cue was rated lowest in terms of Perceived Message Understanding, the Speech cue was evaluated as the most friendly, and the Movement cue was the most efficient, as shown by faster interaction times.
|
|
15:00-17:00, Paper WeCT5.7 | |
Multi-Robot Formation Control Using Reinforcement Learning |
Rawat, Abhay (International Institute of Information Technology, Hyderabad), Karlapalem, Kamalakar (IIIT-Hyderabad) |
Keywords: Machine Learning and Adaptation, Motion Planning and Navigation in Human-Centered Environments
Abstract: In this paper, we present a machine learning approach to moving a group of robots in formation. We model the problem as a multi-agent reinforcement learning problem. Our aim is to design a control policy for maintaining the desired formation among a number of agents (robots) while moving towards the desired goal. This is achieved by training each agent to track two other agents of the group and maintain the formation with respect to them. We consider all agents to be homogeneous and model them as unicycles [1]. In contrast to the leader-follower approach, where each agent has an independent goal, our approach trains the agents to be cooperative and to work towards a common goal. Our motivation is to build a fully decentralized multi-agent formation system that scales to a large number of agents.
|
|
15:00-17:00, Paper WeCT5.8 | |
A Pilot Study for a Robot-Mediated Listening Comprehension Intervention for Children with ASD |
Louie, Wing-Yue Geoffrey (Oakland University), ABBAS, Ibrahim (Oakland University), Korneder, Jessica (Oakland University) |
Keywords: Robots in Education, Therapy and Rehabilitation, Applications of Social Robots, Assistive Robotics
Abstract: Autism spectrum disorder (ASD) is a life-long developmental condition that affects an individual’s ability to communicate and relate to others. Despite such challenges, early intervention during childhood development has been shown to have positive long-term benefits for individuals with ASD. In particular, early childhood development of communicative speech skills has been shown to improve future literacy and academic achievement. However, the delivery of such interventions is often time-consuming. Socially assistive robots are a potential strategic technology that could help support intervention delivery for children with ASD and increase the number of individuals that healthcare professionals can positively impact. In this work, we present a pilot study evaluating the efficacy of a robot-mediated listening comprehension intervention for children with ASD.
|
|
15:00-17:00, Paper WeCT5.9 | |
Contextual Non-Verbal Behaviour Generation for Humanoid Robot Using Text Sentiment |
Deshmukh, Amol (University of Glasgow), Foster, Mary Ellen (University of Glasgow), Mazel, Alexandre (Aldebaran-Robotics) |
|
|
15:00-17:00, Paper WeCT5.10 | |
Towards Automatic Synthesis and Instantiation of Proactive Behaviour |
Buyukgoz, Sera (SoftBank Robotics Europe, Sorbonne University), Chetouani, Mohamed (Sorbonne University), Pandey, Amit Kumar (Hanson Robotics) |
Keywords: Curiosity, Intentionality and Initiative in Interaction, Social Intelligence for Robots, Computational Architectures
Abstract: This paper contributes to research efforts toward designing a unified framework for proactive behaviour. Despite existing definitions of proactive behaviour in a variety of fields, no such unified framework is available. We propose a framework that considers different aspects of proactivity, such as anticipating user needs, improving knowledge, interaction and engagement, and taking the user's actions and feelings into account. Our proactive framework is based on Markov Decision Processes (MDPs), with state transition probabilities computed from the past actions of both the robot and the user. The architecture is illustrated by a memory card game task in which the robot and the user try to solve the task together.
|
|
15:00-17:00, Paper WeCT5.11 | |
An Agent Model Introducing Interpersonal Sentiments for Enhancement of Friendliness |
Fukuta, Kazuaki (Nagoya Institute of Technology), Kato, Shohei (Nagoya Institute of Technology) |
Keywords: Interaction with Believable Characters, Motivations and Emotions in Robotics
Abstract: We frequently communicate interactively with robots, and there is considerable demand for friendly communication agents that make such interactions smooth, low-stress, and enjoyable. We therefore propose an agent model with interpersonal sentiments. The interpersonal sentiments make the agent behave consistently and produce lasting changes by accumulating experiences. The proposed agent also adapts its behavior to suit individual users. The model comprises an emotion, a mood, and a sentiment. We conducted experiments to verify that the proposed model is effective for enhancing agent friendliness: participants had conversations with the agents, and we analyzed the participants’ evaluations using the Semantic Differential method to confirm whether the proposed method enhances agent friendliness.
|
| |