HRI 2011: 6th ACM/IEEE International Conference on Human-Robot Interaction, Lausanne, Switzerland, March 6-9, 2011
https://humanrobotinteraction.org/2011

Recipients of Best Paper Awards
https://humanrobotinteraction.org/2011/2011/03/best-paper-awards/ (Thu, 24 Mar 2011)

Three papers were awarded Best Paper under three categories at this year’s conference. The awards were presented to the authors at the closing ceremony of the conference.

“Best Student Paper” Award

Michael Gielniak & Andrea Thomaz for their paper “Spatiotemporal Correspondence as a Metric for Human-like Robot Motion”

“Most Interesting Experimental Finding” Award

Jonathan Mumm & Bilge Mutlu for their paper “Human-Robot Proxemics: Physical and Psychological Distancing in Human-Robot Interaction”

“Most Interesting Interaction” Award

Steve Yohanan & Karon E. MacLean for their paper “Design and Assessment of the Haptic Creature’s Affect Display”

Swiss Social Event
https://humanrobotinteraction.org/2011/2011/03/swiss-social-event/ (Tue, 08 Mar 2011)

The social event will be a cruise on Lake Geneva, leaving at 6:15pm sharp on Tuesday evening. Please arrive at the boat Lausanne in the port (Ouchy) no later than 6:00pm. Directions on how to reach the port may be found here.

Instructions for LBR Authors
https://humanrobotinteraction.org/2011/2011/02/instructions-for-lbr-authors/ (Mon, 28 Feb 2011)

LBR presentations will be conducted during a reception/poster session. Poster presenters should make sure that their posters are ready by the time the poster session starts. Poster setup can begin 30 minutes prior to the start of the poster session. Posters should be no larger than 32″ x 46″ (840 mm x 1180 mm), in portrait (vertical) format, to comfortably fit on the easels and poster boards. We will provide pushpins for mounting posters. Please do not use any other material for mounting purposes (e.g., adhesive or tape). Presenters are responsible for taking down their posters at the end of the poster session.

Note: This year there will not be one-minute madness sessions, so there is no need to prepare slides.

Plenary Speakers
https://humanrobotinteraction.org/2011/2011/02/plenary-speakers/ (Mon, 28 Feb 2011)

Embodied Language Learning and Interaction with the Humanoid Robot iCub

Dr. Angelo Cangelosi, University of Plymouth

Recent theoretical and experimental research on action and language processing clearly demonstrates the strict interaction between language and action, and the role of embodiment in cognition. These studies have important implications for the design of communication and linguistic capabilities in cognitive systems and robots, and have led to the new interdisciplinary approach of Cognitive Developmental Robotics. In the European FP7 project “ITALK” we follow this integrated view of action and language learning for the development of cognitive capabilities in the humanoid robot iCub. The robot’s cognitive development is the result of interaction with the physical world (e.g. learning object manipulation) and of cooperation and communication between robots and humans (Cangelosi et al., 2010). During the talk we will present ongoing results from iCub experiments. These include human-robot interaction experiments with the iCub on embodiment biases in early word acquisition (the “Modi” experiment; Morse et al. 2010), studies on word-order cues for lexical development and on the sensorimotor bases of action words (Marocco et al. 2010), and recent experiments on action and language compositionality. The talk will also introduce the iCub simulation software, an open-source tool for performing cognitive modeling experiments in simulation (Tikhanoff et al., in press).
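
For readers unfamiliar with the iCub simulator mentioned above, the short Python sketch below (not part of the talk materials) illustrates the typical way such experiments are scripted: objects are spawned in the simulated world over a YARP RPC port so the robot can then interact with and learn to name them. It assumes a running iCub_SIM instance and the standard YARP Python bindings; the port name and command syntax should be checked against your simulator version.

```python
# Hedged sketch: spawn an object in the iCub simulator through its YARP "world" port.
# Assumptions (not from the talk): a running iCub_SIM, the Python YARP bindings,
# the "/icubSim/world" RPC port name, and the "world mk box ..." command syntax.
import yarp

yarp.Network.init()

world = yarp.RpcClient()
world.open("/demo/world:rpc")                     # local port name is arbitrary
yarp.Network.connect("/demo/world:rpc", "/icubSim/world")

cmd, reply = yarp.Bottle(), yarp.Bottle()
# Red box: size (x y z), position (x y z), colour (r g b) -- values are illustrative.
cmd.fromString("world mk box 0.05 0.05 0.05 0.0 0.6 0.3 1 0 0")
world.write(cmd, reply)
print("simulator replied:", reply.toString())

world.close()
yarp.Network.fini()
```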

Professor Angelo Cangelosi is the director of the Centre for Robotics and Neural Systems of the University of Plymouth. Cangelosi’s main research expertise is on language and cognitive modelling in cognitive systems (e.g. the humanoid robot iCub and cognitive agents), on language evolution and grounding in multi-agent systems, and on the application of bio-inspired techniques to robot control (e.g. swarms of UAVs). He is currently the coordinator of the Integrating Project “ITALK: Integration and Transfer of Action and Language Knowledge in robots” (2009-2012, italkproject.org), the Marie Curie ITN “RobotDoC: Robotics for Development of Cognition” (2009-2013, robotdoc.org) and the UK EPSRC project “VALUE: Vision, Action, and Language Unified by Embodiment” (Cognitive Systems Foresight). Cangelosi has produced more than 150 scientific publications in the field, is Editor-in-Chief of the journal Interaction Studies, has chaired numerous workshops and conferences including serving as General Chair of the forthcoming IEEE ICDL-EpiRob 2011 Conference (Frankfurt, 24-27 August 2011), and is a regular speaker at international conferences and seminars.

References

Cangelosi A., Metta G., Sagerer G., Nolfi S., Nehaniv C.L., Fischer K., Tani J., Belpaeme T., Sandini G., Fadiga L., Wrede B., Rohlfing K., Tuci E., Dautenhahn K., Saunders J., Zeschel A. (2010). Integration of action and language knowledge: A roadmap for developmental robotics. IEEE Transactions on Autonomous Mental Development, 2(3), 167-195.

Marocco D., Cangelosi A., Fischer K., Belpaeme T. (2010). Grounding action words in the sensorimotor interaction with the world: Experiments with a simulated iCub humanoid robot. Frontiers in Neurorobotics, 4:7.

Morse A.F., Belpaeme T., Cangelosi A., Smith L.B. (2010). Thinking with your body: Modelling spatial biases in categorization using a real humanoid robot. Proceedings of 2010 Annual Meeting of the Cognitive Science Society. Portland, pp 1362-1368

Tikhanoff V., Cangelosi A., Metta G. (in press). Language understanding in humanoid robots: iCub simulation experiments. IEEE Transactions on Autonomous Mental Development.


Embodied Object Recognition and Metacognition

Dr. Randall C. O’Reilly, University of Colorado

One of the great unsolved questions in our field is how the human brain, and simulations thereof, can achieve the kind of common-sense understanding that is widely believed to be essential for robust intelligence. Many have argued that embodiment is important for developing common-sense understanding, but exactly how this occurs at a mechanistic level remains unclear. In the process of building an embodied virtual robot that learns from experience in a virtual environment, my colleagues and I have developed several insights into this process. At a general level, embodiment provides access to a rich, continuous source of training signals that, in conjunction with the proper neural structures, naturally support the learning of complex sensory-motor abilities, which then provide the foundation for more abstract cognitive abilities. A specific instance is learning to recognize objects in cluttered scenes, which requires learning what is figure vs. (back)ground. We have demonstrated how visual learning in a 3D environment can provide training signals for learning weaker 2D depth and figure/ground cues. This learning process also requires bidirectional excitatory connectivity and associated interactive attractor dynamics, which we show provide numerous benefits for object recognition more generally. Finally, the virtual robot can extract graded signals of recognition confidence, and use these to select what it explores in the environment. These “metacognitive” signals can also be communicated to others so that they can better determine when to trust the robot or not (collaborative work with Christian Lebiere using the ACT-R framework).
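
As a concrete, deliberately simplified illustration of the “metacognitive” idea above, the sketch below shows one generic way to turn a recognizer’s graded output into a confidence signal and to use it for selecting what to explore next. It is not the talk’s Leabra/Emergent model; the object names and posterior values are invented.

```python
# Hedged sketch (not the talk's Leabra/Emergent model): derive a graded
# "metacognitive" confidence signal from a recognizer's class posterior and
# use it to decide which object the robot should explore next.
import numpy as np

def confidence(probs):
    """Confidence in [0, 1]: one minus the normalized entropy of the posterior."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum()
    return 1.0 - entropy / np.log(len(p))

def choose_exploration_target(posteriors):
    """Pick the object the recognizer is least confident about."""
    return min(posteriors, key=lambda name: confidence(posteriors[name]))

# Invented per-object class posteriors, purely to exercise the functions.
posteriors = {
    "mug":   [0.90, 0.05, 0.05],   # confidently recognized
    "block": [0.40, 0.35, 0.25],   # uncertain -> worth another look
}
print(choose_exploration_target(posteriors))                       # -> "block"
print({name: round(confidence(p), 2) for name, p in posteriors.items()})
```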

Bio: Dr. O’Reilly is Professor of Psychology and Neuroscience at the University of Colorado, Boulder. He has authored over 50 journal articles and an influential textbook on computational cognitive neuroscience. His work focuses on biologically based computational models of learning mechanisms in different brain areas, including the hippocampus, prefrontal cortex and basal ganglia, and posterior visual cortex. He has received significant funding from NIH, NSF, ONR, and DARPA. He is a primary author of the Emergent neural network simulation environment. O’Reilly completed a postdoctoral position at the Massachusetts Institute of Technology, earned his M.S. and Ph.D. degrees in Psychology from Carnegie Mellon University, and was awarded an A.B. degree with highest honors in Psychology from Harvard University.


Gesture, language and cognition

Dr. Sotaro Kita, University of Birmingham

We (humans) produce gestures spontaneously not only when we speak (“co-speech gestures”), but also when we think without speaking (“co-thought” gestures). I will present studies that shed light on the cognitive architecture for gesture production. I will first review the evidence that co-speech gestures are highly sensitive to what goes on in speech production. For example, gestural representation of motion events varies as a function of the linguistic structures used to encode motion events. Gestures are produced more frequently when it is difficult to organise ideas for linguistic expression. Despite these pieces of evidence for a tight link between gesture and language, there are indications that gesture production is dissociable from speech production. Furthermore, new evidence shows that there are important parallelisms between co-speech gestures and co-thought gestures, suggesting that these two types of gestures are produced by the same mechanism, which lies outside of speech production processes. I will conclude that gestures are produced by a mechanism that is inherently independent from, but highly interactive with, the speech production process. I will propose a cognitive architecture in which gesture production is related to action generation, spatial cognition, and speech production in an intricate way.

Dr. Sotaro Kita received a BA and MA at the University of Tokyo and a PhD at the University of Chicago. He has worked at the Max Planck Institute for Psycholinguistics and is now a Reader at the University of Birmingham.

Conference Technical Program
https://humanrobotinteraction.org/2011/2011/02/conference-technical-program/ (Fri, 04 Feb 2011)

Monday March 7

08.30 Registration Open
08.45 Welcome
09.00 Plenary: Dr. Sotaro Kita, University of Birmingham
10.10 10 min coffee break
Paper session 1: Telepresence Chair:  Brian Scassellati
10.20 Exploring Use Cases for Telepresence Robots Katherine Tsui, Munjal Desai, Holly Yanco
10.40 Supporting successful aging with mobile remote presence systems Jenay M. Beer, Leila Takayama
11.00 20 min coffee break Post-session Video: Projector Robot for Augmented Children’s Play Jong-gil Ahn, Hyeonsuk Yang, Gerard J. Kim, Namgyu Kim, Kyoung Choi, Hyemin Yeon, Eunja Hyun, Miheon Jo, Jeonghye Han
Paper session 2: People and robots working together Chair:  Frank Pollick
11.20 Improved Human-Robot Team Performance Using Chaski, A Human-inspired Plan Execution System Julie Shah, James Wiken, Brian Williams, Cynthia Breazeal
11.40 A conversational robot in an elderly care center: an ethnographic study Alessandra Maria Sabelli, Takayuki Kanda, Norihiro Hagita
12.00 Evaluating the Applicability of Current Models of Workload to Peer-based Human-robot Teams Caroline Harriott, Tao Zhang, Julie Adams
12.20 Lunch & Lab Tour
Paper session 3: Anthropomorphic Chair: Vanessa Evers
13.50 Interpersonal Variation in Understanding Robots as Social Actors Kerstin Fischer
14.10 Effects of Anticipated Human-Robot Interaction and Predictability of Robot Behavior on Perceptions of Anthropomorphism Friederike Eyssel, Dieta Kuchenbrandt, Bobinger Simon
14.30 Expressing thought: Improving robot readability with animation principles Leila Takayama, Doug Dooley, Wendy Ju
14.50 10 min coffee break
Paper session 4: Life-like motion Chair: Tony Belpaeme
15.00 Spatiotemporal Correspondence as a Metric for Human-like Robot Motion Michael Gielniak, Andrea Thomaz
15.20 An Assistive Tele-Operated Anthropomorphic Robot Hand: Osaka City University Hand II Raafat Mahmoud, Atsushi Ueno, Shoji Tatsumi
15.40 Effects Related to Synchrony and Repertoire in Perceptions of Robot Dance Eleanor Avrunin, Justin Hart, Ashley Douglas
16.00 20 min coffee break Post-session Video: LightHead Robotic Face Frédéric Delaunay, Joachim de Greeff, Tony Belpaeme
16.20-16.50 Panel – HRI in the Real World Chairs: Henrik Christensen and Jenny Burke
18.00-20.00 Reception and Posters

Towards an Online Voice-Based Gender and Internal State Detection Model
Amir Aly and Adriana Tapus ENSTA-ParisTech

Policy Adaptation with Tactile Feedback
Brenna D. Argall, Eric L. Sauser, and Aude G. Billard Ecole Polytechnique Fédérale de Lausanne

Perception by Proxy: Humans Helping Robots to See in a Manipulation Task
J. Alan Atherton and Michael A. Goodrich Brigham Young University

A Comparison of Unsupervised Learning Algorithms for Gesture Clustering
Adrian Ball, David Rye, Fabio Ramos, and Mari Velonaki The University of Sydney

The Crucial Role of Robot Self-Awareness in HRI
Manuel Birlo and Adriana Tapus ENSTA-ParisTech

Development of a Context Model Based on Video Analysis
Roland Buchner, Astrid Weiss, and Manfred Tscheligi University of Salzburg

Using Depth Information to Improve Face Detection
Walker Burgin1, Caroline Pantofaru2, and William D. Smart1,2 (1) Washington University, St. Louis, (2) Willow Garage, Inc.

Interactive Methods of Tele-Operating a Single Unmanned Ground Vehicle on a Small Screen Interface
Wei Liang Kenny Chua1, Chuan Huat Foo1, and Yong Siang Lee2 (1) DSO National Laboratories, (2) Nanyang Technological University

Child’s Recognition of Emotions in Robot’s Face and Body
Iris Cohen1,2, Rosemarijn Looije1, and Mark A. Neerincx1,3 (1) TNO, (2) University Utrecht, (3) Delft University of Technology

Things that Tweet, Check-In and are Befriended. Two Explorations on Robotics & Social Media.
Henriette Cramer and Sebastian Büttner SICS, Sweden

A Pilot Study to Understand Requirements of a Shopping Mall Robot
Chandan Datta1, Anuj Kapuria2, and Ritukar Vijay2 (1) University of Auckland, (2) Hitech Robotic Systemz, Ltd.

Managing Social Constraints on Recharge Behaviour for Robot Companions Using Memory
Amol A. Deshmukh1, Mei Yii Lim1, Michael Kriegel1, Ruth Aylett1, Kyron Du-Casse2, Koay Kheng L2, and Kerstin Dautenhahn2 (1) Heriot-Watt University, UK, (2) University of Hertfordshire, UK

Designing Interruptive Behaviors of a Public Environmental Monitoring Robot
Vanessa Evers1, Roelof de Vries1, and Paulo Alvito2 (1) University of Amsterdam, (2) IdMind

Interactional Disparities in English and Arabic Native Speakers with a Bi-Lingual Robot Receptionist
Imran Fanaswala, Brett Browning, and Majd Sakr Carnegie Mellon University, Qatar

Comparative Analysis of Human Motion Trajectory Prediction Using Minimum Variance Curvature
Gonzalo Ferrer and Alberto Sanfeliu Institut de Robòtica i Informatica Industrial, Spain

Anthropomorphic Design for an Interactive Urban Robot – The Right Design Approach?
Florian Föerster, Astrid Weiss, and Manfred Tscheligi University of Salzburg

Tactile Sensing: A Key Technology for Safe Physical Human Robot Interaction
Markus Fritzsche, Norbert Elkmann, and Erik Schulenburg Fraunhofer IFF

The Chanty Bear: A New Application for HRI Research
Kotaro Funakoshi1, Tomoya Mizumoto2, Ryo Nagata3, and Mikio Nakano1 (1) Honda Research Institute, Japan, (2) Nara Institute of Science and Technology, Japan, (3) Konan University, Japan

A Case for Low-Dose Robotics in Autism Therapy
Michael A. Goodrich, Mark Colton, Bonnie Brinton, and Martin Fujiki Brigham Young University

Learning from Failure: Extended Abstract
Daniel H. Grollman and Aude G. Billard Ecole Polytechnique Fédérale de Lausanne

Exploring the Influence of Age, Gender, Education and Computer Experience on Robot Acceptance by Older Adults
Marcel Heerink Windesheim Flevoland University of Applied Sciences

A Memory Game for Human-Robot Interaction
Sergio Hernandez-Mendez, Luis Alberto Morgado-Ramirez, Ana Cristina Ramirez-Hernandez, Luis F. Marin-Urias, Antonio Marin-Hernandez, and Fernando Montes-Gonzalez Universidad Veracruzana

Tele-Operation Between USA and Japan Using Humanoid Robot Hand/Arm
Makoto Honda1, Takanori Miyoshi1, Takashi Imamura1, Masayuki Okabe1, Faisal M. Yazadi2, and Kazuhiko Terashima1 (1) Toyohashi University of Technology, (2) CyberGlove Systems LLC

Universal Robots as Solutions to Wicked Problems: Debunking a Robotic Myth
Mattias Jacobsson and Henriette Cramer SICS, Sweden

Experience Centred Design for a Robotic Eating Aid
Javier Jiménez Villarreal and Sara Ljungblad SICS, Sweden

Upper-Limb Exercises for Stroke Patients Through the Direct Engagement of an Embodied Agent
Hee-Tae Jung, Jennifer Baird, Yu-Kyong Choe, and Roderic A. Grupen University of Massachusetts, Amherst

The New Ontological Category Hypothesis in Human-Robot Interaction
Peter H. Kahn, Jr.1, Aimee L. Reichert1, Heather E. Gary1, Takayuki Kanda2, Hiroshi Ishiguro2,3, Solace Shen1, Jolina H. Ruckert1, and Brian Gill4 (1) University of Washington, (2) ATR, Japan, (3) Osaka University, (4) Seattle Pacific University

RIDE: Mixed-Mode Control for Mobile Robot Teams
Erik Karulf1, Marshall Strother1, Parker Dunton1, and William D. Smart1,2 (1) Washington University, St. Louis, (2) Willow Garage, Inc.

User Recognition Based on Continuous Monitoring and Tracking
Hye-Jin Kim, Ho Sub Yoon, and Jae Hong Kim Electronics and Telecommunications Research Institute, Korea

Terrain-Adaptive and User-Friendly Remote Control of Wheel-Track Hybrid Mobile Robot Platform
Yoon-Gu Kim, Jung-Hwan Kwak, and Jinung An Daegu Gyeongbuk Institute of Science & Technology

Assisted-Care Robot Dealing with Multiple Requests in Multi-party Settings
Yoshinori Kobayashi, Masahiko Gyoda, Tomoya Tabata, Yoshinori Kuno, Keiichi Yamazaki, Momoyo Shibuya, and Yukiko Seki Saitama University

From Cartoons to Robots Part 2: Facial Regions as Cues to Recognize Emotions
Tomoko Koda1, Tomoharu Sano1, and Zsofia Ruttkay2 (1) Osaka Institute of Technology, (2) Moholy-Nagy University of Art and Design

Understanding Hierarchical Natural Language Commands for Robotic Navigation and Manipulation
Thomas Kollar, Steven Dickerson, Stefanie Tellex, Ashis Gopal Banerjee, Matthew Walter, Seth Teller, and Nicholas Roy MIT CSAIL

Gaze Motion Planning for Android Robot
Yutaka Kondo, Masato Kawamura, Kentaro Takemura, Jun Takamatsu, and Tsukasa Ogasawara Nara Institute of Science and Technology

Perception of Visual Scene and Intonation Patterns of Robot Utterances
Ivana Kruijff-Korbayová1, Raveesh Meena1, and Pirita Pyykkönen2 (1) German Research Center for Artificial Intelligence, (2) Saarland University

Towards Proactive Assistant Robots for Human Assembly Tasks
Woo Young Kwon and Il Hong Suh Hanyang University

A Panorama Interface for Telepresence Robots
Daniel A. Lazewatsky1 and William D. Smart1,2 (1) Washington University, St. Louis, (2) Willow Garage, Inc.

Predictability or Adaptivity? Designing Robot Handoffs Modeled from Trained Dogs and People
Min Kyung Lee1, Jodi Forlizzi1, Sara Kiesler1, Maya Cakmak2, and Siddhartha Srinivasa3 (1) Carnegie Mellon University, (2) Georgia Institute of Technology, (3) Intel Laboratories, Pittsburgh

Understanding Users’ Perception of Privacy in Human-Robot Interaction
Min Kyung Lee, Karen P. Tang, Jodi Forlizzi, and Sara Kiesler Carnegie Mellon University

Utilitarian vs. Hedonic Robots: Role of Parasocial Tendency and Anthropomorphism in Shaping User Attitudes
Namseok Lee1, Hochul Shin1, and S. Shyam Sundar2 (1) Sungkyunkwan University, (2) The Pennsylvania State University

Incremental Learning of Primitive Skills from Demonstration of a Task
Sang Hyoung Lee, Hyung Kyu Kim, and Il Hong Suh Hanyang University

Hitting a Robot vs. Hitting a Human: Is it the Same?
Sau-lai Lee1 and Ivy Yee-man Lau2 (1) Nanyang Technological University, (2) Singapore Management University

Recognition and Incremental Learning of Scenario-Oriented Human Behavior Patterns by Two Threshold Model
Gi Hyun Lim, Byoungjun Chung, and Il Hong Suh Hanyang University

Beyond Speculative Ethics in HRI? Ethical Considerations and the Relation to Empirical Data
Sara Ljungblad1, Stina Nylander2, and Mie Nørgaard2 (1) SICS, Sweden, (2) The IT University of Copenhagen

Team-based Interactions with Heterogeneous Robots Through a Novel HRI Software Architecture
Meghann Lomas1, Vera Zaychik Moffitt1, Patrick Craven1, E. Vincent Cross, II1, Jerry L. Franke1, and James S. Taylor2 (1) Lockheed Martin Advanced Technology Laboratories, (2) Naval Surface Warfare Center, Panama City Division

The Applicability of Gricean Maxims in Social Robotics Polite Dialogue
Looi Qin En1 and See Swee Lan2 (1) Hwa Chong Institution, Singapore, (2) Institute for Infocomm Research, Singapore

Polonius: A Wizard of Oz Interface for HRI Experiments
David V. Lu1 and William D. Smart1,2 (1) Washington University, St. Louis, (2) Willow Garage, Inc.

Expressing Emotions Through Robots: A Case Study Using Off-the-Shelf Programming Interfaces
Vimitha Manohar, Shamma al Marzooqi, and Jacob W. Crandall Masdar Institute of Science and Technology

Recognition of Spatial Dynamics for Predicting Social Interaction
Ross Mead, Amin Atrash, and Maja J. Matarić University of Southern California

Make Your Wishes to ‘Genie in the Lamp’: Physical Push with a Socially Intelligent Robot
Hye-Jin Min and Jong C. Park Korea Advanced Institute of Science and Technology

A Communication Structure for Human-Robot Itinerary Requests
Nicole Mirnig, Astrid Weiss, and Manfred Tscheligi University of Salzburg

Cognitive Objects for Human-Computer Interaction and Human-Robot Interaction
Andreas Möller, Luis Roalter, and Matthias Kranz Technische Universität München

Inferring Social Gaze from Conversational Structure and Timing
Robin R. Murphy, Jessica Gonzales, and Vasant Srinivasan Texas A & M University

Exploring Sketching for Robot Collaboration
Matei Negulescu1 and Tetsunari Inamaru2 (1) University of Waterloo, (2) National Institute of Informatics

Exploring Influences of Robot Anxiety into HRI
Tatsuya Nomura1,2, Takayuki Kanda2, Sachie Yamada3, and Tomohiro Suzuki4 (1) Ryukoku University, (2) ATR Intelligent Robotics & Communication Laboratories, (3) Iwate Prefectural University, (4) Tokyo Future University

Collaboration with an Autonomous Humanoid Robot: A Little Gesture Goes a Long Way
Kevin O’Brien, Joel Sutherland, Charles Rich, and Candace L. Sidner Worcester Polytechnic Institute

User Observation & Dataset Collection for Robot Training
Caroline Pantofaru Willow Garage, Inc.

The Effect of Robot’s Behavior vs. Appearance on Communication with Humans
Eunil Park1, Hwayeon Kong1, Hyeong-taek Lim1, Jongsik Lee1, Sangseok You1, and Angel P. del Pobil1,2 (1) Sungkyunkwan University, (2) Universitat Jaume-I

Activity Recognition from the Interactions Between an Assistive Robotic Walker and Human Users
Mitesh Patel, Jaime Valls Miro, and Gamini Dissanayake University of Technology Sydney

Web-based Object Category Learning using Human-Robot Interaction Cues
Christian I. Penaloza, Yasushi Mae, Tatsuo Arai, Kenichi Ohara, and Tomohito Takubo Osaka University

Mission Specialist Interfaces in Unmanned Aerial Systems
Joshua M. Peschel and Robin R. Murphy Texas A & M University

Attitude of German Museum Visitors Towards an Interactive Art Guide Robot
Karola Pitsch, Sebastian Wrede, Jens-Christian Seele, Luise Süssenbach Bielefeld University

Integration of a Low-Cost RGB-D Sensor in a Social Robot for Gesture Recognition
Arnaud Ramey, Víctor González-Pacheco, and Miguel A. Salichs University Carlos III of Madrid

Tangible Interfaces for Robot Teleoperation
Gabriele Randelli, Matteo Venanzi and Daniele Nardi Sapienza University of Rome

Generalizing Behavior Obtained from Sparse Demonstration
Marcia Riley and Gordon Cheng Technical University Munich

Adapting Robot Behavior to User’s Capabilities: A Dance Instruction Study
Raquel Ros1, Yiannis Demiris1, Ilaria Baroni2, and Marco Nalin2 (1) Imperial College, (2) Fondazione Centro San Raffaele del Monte Tabor

Unity in Multiplicity: Searching for Complexity of Persona in HRI
Jolina H. Ruckert University of Washington

Designing a Robot Through Prototyping in the Wild
Selma Šabanović, Sarah Reeder, Bobak Kechavarzi, and Zachary Zimmerman Indiana University

Are Specialist Robots Better than Generalist Robots?
Young June Sah1, Bomee Yoo1, and S. Shyam Sundar2 (1) Sungkyunkwan University, (2) The Pennsylvania State University

Perceptions and Knowledge About Robots in Children of 11 Years Old in Mexico City
Eduardo Benítez Sandoval, Mauricio Reyes Castillo, and John Alexander Rey Galindo National Autonomous University of Mexico

Generation of Meaningful Robot Expressions with Active Learning
Giovanni Saponaro and Alexandre Bernardino Instituto Superior Técnico, Portugal

Random Movement Strategies in Self-Exploration for a Humanoid Robot
Guido Schillaci and Verena Vanessa Hafner Humboldt University, Berlin

Thinking “As” or Thinking “As If”: The Role of Pretense in Children’s and Young Adults’ Attributions to a Robot Dinosaur
Rachel Severson University of Oslo

Who is More Expressive During Child-Robot Interaction: Pakistani or Dutch Children?
Suleman Shahid1, Emiel Krahmer1, Marc Swerts1, and Omar Mubin2 (1) Tilburg University, The Netherlands, (2) Eindhoven University of Technology

The Curious Case of Human-Robot Morality
Solace Shen University of Washington

A Comparison of Machine Learning Techniques for Modeling Human-Robot Interaction with Children with Autism
Elaine Short, David Feil-Seifer, and Maja Matarić University of Southern California

A Survey of Social Gaze
Vasant Srinivasan and Robin R. Murphy Texas A & M University

A Toolkit for Exploring the Role of Voice in Human-Robot Interaction
Zachary Henkel1, Vasant Srinivasan1, Robin R. Murphy1, Victoria Groom2, and Clifford Nass2 (1) Texas A & M University, (2) Stanford University

An Information-Theoretic Approach to Modeling and Quantifying Assistive Robotics HRI
Martin F. Stoelen1, Alberto Jardón Huete1, Virginia Fernández1, Carlos Balaguer1, and Fabio Bonsignorio1,2 (1) Universidad Carlos III de Madrid, (2) Heron Robots, Italy

Information Provision-Timing Control for Informational Assistance Robot
Hiroaki Sugiyama and Yasuhiro Minami NTT Communication Science Laboratories

Future Robotic Computer: A New Type of Computing Device with Robotic Functions
Young-Ho Suh1, Hyun Kim1, Joo-Haeng Lee1, Joonmyun Cho1, Moohun Lee1, Jeongnam Yeom1, and Eun-Sun Cho2 (1) Electronics and Telecommunication Research Institute, Republic of Korea, (2) Chungnam National University, Republic of Korea

StyROC: Stylus Robot Overlay Control & StyRAC: Stylus Robot Arm Control
Teo Chee Hong, Keng Kiang Tan, Wei Liang Kenny Chua, and Kok Tiong John Soo DSO National Laboratories, Singapore

The Implementation of Care-Receiving Robot at an English Learning School for Children
Fumihide Tanaka and Madhumita Ghosh University of Tsukuba

Linking Children by Telerobotics: Experimental Field and the First Target
Fumihide Tanaka and Toshimitsu Takahashi University of Tsukuba

A Theoretical Heider’s Based Model for Opinion-Changes Analysis during a Robotization Social Process
Bertrand Tondu University of Toulouse

Understanding Spatial Concepts from User Actions
Elin Anna Topp Lund University

A Model of the User’s Proximity for Bayesian Inference
Elena Torta, Raymond H. Cuijpers, and James F. Juola Eindhoven University of Technology

Look Where I’m Going and Go Where I’m Looking: Camera-Up Map for Unmanned Aerial Vehicles
R. Brian Valimont and Sheryl L. Chappell SA Technologies, Inc.

Head Pose Estimation For a Domestic Robot
David van der Pol, Raymond H. Cuijpers, and James F. Juola Eindhoven University of Technology

DOMER: A Wizard of Oz Interface for Using Interactive Robots to Scaffold Social Skills for Children with Autism Spectrum Disorders
Michael Villano, Charles R. Crowell, Kristin Wier, Karen Tang, Brynn Thomas, Nicole Shea, Lauren M. Schmitt, and Joshua J. Diehl University of Notre Dame

Between Real-World and Virtual Agents: The Disembodied Robot
Thibault Voisin1, Hirotaka Osawa2, Seiji Yamada3, and Michita Imai1 (1) Keio University, (2) Japan Science and Technology Agency, (3) National Institute of Informatics

An Android in the Field
Astrid M. von der Pütten1, Nicole Krämer1, Christian Becker-Asano2, and Hiroshi Ishiguro3 (1) University of Duisburg-Essen, (2) Albert-Ludwigs-Universität Freiburg, (3) Osaka University

A Human Detection System for Proxemics Interaction
Xiao Wang, Xavier Clady, and Consuelo Granata Université Pierre et Marie Curie

Human Visual Augmentation Using Wearable Glasses with Multiple Cameras and Information Fusion of Human Eye Tracking and Scene Understanding
Seung-Ho Yang, Hyun-Woo Kim, and Min Young Kim Kyungpook National University

Rhythmic Reference of a Human While a Rope Turning Task
Kenta Yonekura1, Chyon Hae Kim2, Kazuhiro Nakadai2, Hiroshi Tsujino2, and Shigeki Sugano3 (1) Tsukuba University, (2) Honda Research Institute Japan, (3) Waseda University

A Relation between Young Children’s Computer Utilization and Their Use of Education Robots
Hyunmin Yoon Korea Institute of Science and Technology

MAWARI: An Interactive Social Interface
Yuta Yoshiike, P. Ravindra S. De Silva, and Michio Okada Toyohashi University of Technology

When the Robot Criticizes You… Self-Serving Bias in Human-Robot Interaction
Sangseok You1, Jiaqi Nie1, Kiseul Suh1, and S. Shyam Sundar2 (1) Sungkyunkwan University, (2) The Pennsylvania State University

 

Tuesday March 8

08.30 Registration Open
09.00 Plenary: Dr. Randall C. O’Reilly, University of Colorado
10.10 10 min coffee break
Paper session 5: Engagement Chair: Selma Sabanovic
10:20 Vision-based Contingency Detection Jinhan Lee, Jeffrey F. Kiser, Aaron F. Bobick, Andrea L. Thomaz
10:40 Automatic Analysis of Affective Postures and Body Motion to Detect Engagement with a Game Companion Jyotirmay Sanghvi, Ginevra Castellano, Iolanda Leite, André Pereira, Peter W. McOwan, Ana Paiva
11:00 A Robotic Game to Evaluate Interfaces Used to Show and Teach Visual Objects to a Robot in Real World Condition Pierre Rouanet, Fabien Danieau, Pierre-Yves Oudeyer
11:20 20 min coffee break Post-session Video: Sociable Spotlights: A Flock of Interactive Artifacts Naoki Ohshima, Yuta Yamaguchi, Ravindra S. De Silva, Michio Okada
Paper session 6: Engagement and proxemics Chair: Adriana Tapus
11:40 Automated Detection and Classification of Positive vs. Negative Robot Interactions With Children With Autism Using Distance-Based Features David Feil-Seifer, Maja Matarić
12:00 Human-Robot Proxemics: Physical and Psychological Distancing in Human-Robot Interaction Jonathan Mumm, Bilge Mutlu
12:20 Lunch & Lab Tour
Paper session 7: Humans teaching robots Chair: Miguel Salichs
13:50 Human and Robot Perception in Large-scale Learning from Demonstration Christopher Crick, Sarah Osentoski, Graylin Jay, Odest Chadwicke Jenkins
14:10 Robots That Express Emotion Elicit Better Human Teaching Dan Leyzberg, Eleanor Avrunin, Jenny Liu, Brian Scassellati
14:30 Usability of Force-Based Controllers in Physical Human-Robot Interaction Marta Lopez Infante, Ville Kyrki
14:50 Coffee Break – 10m
Paper Session 8: Multi-robot control Chair: Holly Yanco
15:00 Scalable Target Detection for Large Robot Teams Huadong Wang, Andreas Kolling, Nathan Brooks, Sean Owens, Shafiq Abedin, Paul Scerri, Pei-ju Lee, Shih-Yi Chien, Michael Lewis, Katia Sycara
15:20 Effects of Unreliable Automation and Individual Differences on Supervisory Control of Multiple Ground Robots Jessie Y.C. Chen, Michael J. Barnes, Caitlin Kenny
15:40 How Many Social Robots Can One Operator Control? Kuanhao Zheng, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita
16:00 Coffee Break – 20m Post-session Video: Survivor Buddy: A Social Medium Robot Zachary Henkel, Negar Rashidi, Aaron Rice, Robin Murphy
16:20 – 17:05 Video Session Chairs: Jacob Crandall and Martin Saerbeck
Dynamic of Interpersonal Coordination Yasutaka Takeda, Yuta Yoshiike, Ravindra S. De Silva, Michio Okada
Multi-faceted Liveliness in Human-Robot Interaction Victor Ng-Thow-Hing, Ravi Kiran Sarvadevabhatla, Deepak Ramachandran, Erik Vinkhuyzen, Luke Plurokowski, Maurice Chu, Ingrid C. Li
The Life of iCub, A Little Humanoid Robot Learning from Humans through Tactile Sensing Eric Sauser, Brenna Argall, Aude Billard
The Floating Head Experiment David St-Onge, Nicolas Reeves, Christian Kroos, Maher Hanafi, Damith Herath, Stelarc
Chief Cook and Keepon in the Bot’s Funk Eric Sauser, Marek Michalowski, Aude Billard, Hideki Kozima
Caregiving Intervention for Children with Autism Spectrum Disorders using an Animal Robot Kwangsu Cho, Christine Shin
Humanoid Robot Control using Depth Camera Halit Bener Suay, Sonia Chernova
ShakeTime! A Deceptive Robot Referee Marynel Vázquez, Alexander May, Aaron Steinfeld, Wei-Hsuan Chen
Tots on Bots Madeline E. Smith, Sharon Stansfield, Carole W. Dennis
A Wheelchair Which Can Automatically Move Alongside a Caregiver Yoshinori Kobayashi, Yuki Kinpara, Erii Takano, Yoshinori Kuno, Keiichi Yamazaki, Akiko Yamazaki
Who explains it? Avoiding the Feeling of Third-Person Helpers in Auditory Instruction for Older People Hirotaka Osawa, Jarrod Orszulak, Kathryn M. Godfrey, Seiji Yamada, Joseph F. Coughlin
Snappy: Snapshot-Based Robot Interaction for Arranging Objects Sunao Hashimoto, Andrei Ostanin, Masahiko Inami, Takeo Igarashi
Robot Games for Elderly Søren Tranberg
Selecting and Commanding Groups in a Multi-Robot Vision Based System Brian Milligan, Greg Mori, Richard T. Vaughan

 

Wednesday March 9

08.30 Registration Open
09.00 Plenary: Dr. Angelo Cangelosi, University of Plymouth
10.10 10 min coffee break
Paper session 9: Ontologies Chair: Aaron Steinfeld
10:20 Intelligent Humanoid Robot with Japanese Wikipedia Ontology and Robot Action Ontology Shotaro Kobayashi, Susumu Tamagawa, Takeshi Morita, Takahira Yamaguchi
10:40 Using Semantic Technologies to Describe Robotic Embodiments Alex Juarez, Christoph Bartneck, Loe Feijs
11:00 20 min coffee break
Paper Session 10: User preferences Chair: Andrea Thomaz
11:20 Robot Self-Initiative and Personalization by Learning through Repeated Interactions Martin Mason, Manuel C. Lopes
11:40 Modeling Environments from a Route Perspective Yoichi Morales, Satoru Satake, Takayuki Kanda, Norihiro Hagita
12:00 Do Elderly People Prefer a Conversational Humanoid as a Shopping Assistant Partner in Supermarkets? Yamato Iwamura, Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita
12:20 Lunch & Lab Tour
Paper Session 11: Robot touch Chair: Takayuki Kanda
13:50 Touched By a Robot: An Investigation of Subjective Responses to Robot-initiated Touch Tiffany L. Chen, Chih-Hung King, Andrea L. Thomaz, Charles C. Kemp
14:10 Effect of Robot’s Active Touch on People’s Motivation Kayako Nakagawa, Masahiro Shiomi, Kazuhiko Shinozawa, Reo Matsumura, Hiroshi Ishiguro, Norihiro Hagita
14:30 Design and Assessment of the Haptic Creature’s Affect Display Steve Yohanan, Karon E. MacLean
14:50 20 min coffee break
Paper Session 12:  Nonverbal interaction Chair:  Michael Goodrich
15:10 Learning to Interpret Pointing Gestures with a Time-of-Flight Camera David Droeschel, Jörg Stückler, Sven Behnke
15:20 Using Spatial and Temporal Contrast for Fluent Robot-Human Hand-overs Maya Cakmak, Siddhartha S. Srinivasa, Min Kyung Lee, Sara Kiesler, Jodi Forlizzi
15:40 Nonverbal Robot-Group Interaction Using an Imitated Gaze Cue Nathan Kirchner, Alen Alempijevic, Gamini Dissanayake
16:10 Closing and Awards

 

Funding for Student Volunteers
https://humanrobotinteraction.org/2011/2011/01/funding-for-student-volunteers/ (Sat, 15 Jan 2011)

We are pleased to announce that there is funding available to partially support travel to HRI2011 for Student Volunteers.  Students receiving support will be expected to volunteer time to help with on-site registration and with general duties, as needed, from 5 March (early on-site registration, tutorials and workshops) to 9 March, 2011.

Being a Student Volunteer is a great way to enter the HRI research community, meet other students in your field, and attend one of the most important conferences in HRI. We are looking to include students with diverse backgrounds in HRI and from all parts of the world. To request funding, please send the following information to Selma Sabanovic (selmas@indiana.edu).

(1) Provide:
1. Your Name;
2. Your Affiliation (name and address of college/university);
3. Your Contact Information (email address and telephone number(s)).

(2) Indicate:
1. Whether you are a part-time or full-time student, and include the name of your faculty advisor with email and telephone contact information;
2. Whether or not you are a (co)author of a paper or a late-breaking report at HRI2011;
3. Whether you are receiving or requesting funding from the Young Researchers Workshop on 5 March 2011;
4. Your area of study.

(3) A statement of your requested funding amount and a brief justification of your funding needs (250 words maximum).

Note that you do not have to have an accepted paper/poster at HRI2011 to be considered for financial support, although students with accepted work will be given priority. The deadline for applying for Student Volunteer funds by submitting the requested information is February 1, 2011.

EPFL Lab Tours
https://humanrobotinteraction.org/2011/2011/01/epfl-lab-tours/ (Tue, 11 Jan 2011)

A number of labs on the EPFL campus will be offering tours during the conference (see list below). Attendees are asked to register for the tours by the early registration deadline (Jan 31, 2011) through the main conference registration system.

Chair in Non-invasive Brain-machine Interface (CNBI), led by Prof. José del R. Millán
https://cnbi.epfl.ch/
Research on the direct use of human brain signals to control devices and interact with our environment.

Laboratory of Intelligent Systems (LIS), led by Prof. Dario Floreano
https://lis.epfl.ch/
Three interconnected research areas: design and development of autonomous robots, bioinspired artificial intelligence, and theoretical and experimental work with biological systems.

Learning Algorithms and Systems Laboratory (LASA), led by Prof. Aude Billard
https://lasa.epfl.ch/
Research fields include: Learning and Dynamical Systems, Neural Computation and Modeling, Human-Machine Interaction, Humanoids Robotics, Mechatronics, Design of Therapeutic and Educational Devices.

MOBOTS group within the Laboratoire de Systèmes Robotiques (LSRO), led by Dr. Francesco Mondada
https://mobots.epfl.ch/
System design for miniature autonomous mobile robots, based on strong competences in digital electronics and system integration.

Pedagogical Research and Support (CRAFT), hosted by Dr. Frédéric Kaplan
https://craft.epfl.ch
Robotic objects and human-robot ecologies, within the larger goal of improving the quality of teaching through training activities, counseling, evaluation, and faculty development.

Conference Registration Now Open
https://humanrobotinteraction.org/2011/2011/01/conferences-registration-now-open/ (Sun, 02 Jan 2011)

Conference registration is now available at the registration site. Below is information on registration deadlines and rates.

Deadlines

Early Registration, Dec 23, 2010 – Jan 31, 2011
Late Registration, Feb. 1, 2011 – Mar. 5, 2011
On-site Registration, Mar. 6, 2011- Mar. 9, 2011

Rates

Registration fees, to be collected in Swiss Francs (CHF), are as follows:


                         ACM/SIG/IEEE/HFES/AAAI member   Student member   Non-member   Student non-member
Early registration       499 CHF                         252 CHF          662 CHF      336 CHF
Late registration        604 CHF                         357 CHF          798 CHF      473 CHF
On-site registration     709 CHF                         462 CHF          945 CHF      615 CHF

All workshop and tutorial attendees must also register for the conference. Information on the workshops and tutorials offered at HRI 2011 can be seen here. Below are the registration rates for tutorials and workshops. If you have any questions about the registration, please contact the Registration Chair.


                              All members
Full-day workshop/tutorial    106 CHF
Half-day workshop/tutorial     53 CHF
Tutorials & Workshops
https://humanrobotinteraction.org/2011/2011/01/tutorials-workshops/ (Sat, 01 Jan 2011)

Workshop sessions will take place on Sunday March 6th in Building BC (map). Registration opens at 8:30.

Tutorials

Brain-mediated Human-Robot Interaction

Jose del R. Millan, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
E-mail: jose.millan {at} epfl.ch

Ricardo Chavarriaga, (Contact Organizer), Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
E-mail: ricardo.chavarriaga {at} epfl.ch

The use of brain-generated signals for human-robot interaction has gained increasing attention in recent years. Indeed, brain-controlled robots can potentially be employed to substitute for lost motor capabilities (e.g. brain-controlled prosthetics for amputees or patients with spinal cord injuries), to help in the restoration of such functions (e.g. as a tool for stroke rehabilitation), as well as in non-clinical applications like telepresence or entertainment. This half-day tutorial gives an introduction to the field of brain-computer interfaces and presents several design principles required to successfully employ them for robot control. [More information]
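
To make the control-loop structure concrete, here is a hedged, illustrative Python sketch of the kind of pipeline such tutorials discuss: band-power features from two motor-cortex EEG channels are turned into a discrete steering command for a robot. It is not the tutorial's actual material; the channel names, frequency band, threshold, and command mapping are all assumptions that a real BCI would learn during calibration.

```python
# Hedged, illustrative sketch -- not the tutorial's pipeline. A minimal
# motor-imagery-style loop: mu-band (8-12 Hz) power over two EEG channels
# (assumed to sit over left/right motor cortex, "C3"/"C4") is turned into a
# discrete steering command for a robot. Band, threshold, and the mapping
# from lateralization to turn direction are assumptions a real BCI calibrates.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def mu_band_power(window):
    """Average spectral power of a 1-D EEG window in the 8-12 Hz band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].mean())

def decode_command(c3_window, c4_window, threshold=0.2):
    """Crude left/right decision from lateralized mu-power suppression."""
    c3, c4 = mu_band_power(c3_window), mu_band_power(c4_window)
    asymmetry = (c4 - c3) / (c4 + c3 + 1e-12)
    if abs(asymmetry) < threshold:     # low confidence: leave the robot on its course
        return "keep_course"
    # C3 suppressed relative to C4 is (roughly) right-hand imagery, and vice versa.
    return "turn_right" if asymmetry > 0 else "turn_left"

# One second of fake EEG, just to exercise the loop end to end.
rng = np.random.default_rng(0)
print(decode_command(rng.standard_normal(FS), rng.standard_normal(FS)))
```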

Location: Building BC, room BC02

Times: Start: 9:00 am
Morning Break: 11:00am-11:30am
End: 1:00 pm

Workshops

Robots with Children: Practices for Human-Robot Symbiosis

Naomi Miyake, University of Tokyo, Japan
E-mail: nmiyake {at} p.u-tokyo.ac.jp

Hiroshi Ishiguro, Osaka University, Japan
E-mail : ishiguro {at} sys.es.osaka-u.ac.jp

Kerstin Dautenhahn, University of Hertfordshire, UK
E-mail: K.Dautenhahn {at} herts.ac.uk

Tatsuya Nomura, (Contact Organizer), Ryukoku University, Japan
E-mail: nomura {at} rins.ryukoku.ac.jp

When considering the symbiosis of humans and robots, its benefits and risks should be taken into account especially for people in more vulnerable positions in society, in particular children. At the same time, several robotics applications have already been developed for children, including applications in education and welfare. At this stage, it is important that more researchers from interdisciplinary research fields, including robotics, computer science, psychology, sociology, and pedagogy, share an opportunity to discuss the potential of “robots with children”. This half-day workshop aims to provide a forum where researchers from these interdisciplinary fields discuss how the symbiosis of robots and children should and can be realized, from the perspectives of engineering, psychology, education, and welfare. [More information]

Location: Building BC, room BC02

Times: Start: 2:00pm
Afternoon Break: 4:00pm-4:30pm
End: 6:00 pm

Social Robotic Telepresence

Prof. Silvia Coradeschi, Örebro University, Sweden
E-mail: silvia.coradeschi {at} oru.se

Dr. Amy Loutfi, (Contact Organizer), Örebro University, Sweden
E-mail: amy.loutfi {at} oru.se

Annica Kristoffersson, Örebro University, Sweden
E-mail: annica.kristoffersson {at} oru.se

Dr. Gabriella Cortellessa, ISTC-CNR, Italy
E-mail: gabriella.cortellessa {at} istc.cnr.it

Prof. Kerstin Severinson Eklundh, Royal Institute of Technology (KTH), Sweden.
E-mail: kse {at} csc.kth.se

Robotic telepresence, also known as telerobotics, is a subfield of telepresence whose aim is to increase presence via embodiment in a robotic platform. In particular, robotic telepresence can be an effective tool to enhance social interaction for certain groups of users such as the elderly. The aim of this workshop is to address various aspects important for social robotic telepresence, which include but are not limited to: (1) the mechanical design, (2) the user interface design, (3) the interaction between the remotely embodied person and the locally embodied person, and (4) the perception of social robotic telepresence systems. Furthermore, we are interested in discovering the added value of spatial presence in the context of social telepresence, and comparisons between robotic and non-robotic systems are of interest. We welcome contributions concerning results reached in the above-mentioned areas of interest, user evaluations and methodologies, as well as reports from the deployment of social robotic solutions in real-world contexts. [More information]

Location: Building BC, room BC03

Times: Start: 9:00 am
Morning Break: 11:00am-11:30am
Lunch: 1pm-2pm
Afternoon Break: 4pm-4:30pm
End: 6:00 pm

The role of expectations in intuitive human-robot interaction

Prof. Dr. Verena Hafner, Humboldt-Universität zu Berlin, Germany
E-mail: hafner {at} informatik.hu-berlin.de

Dr. rer. nat. Manja Lohse, (Contact Organizer), Bielefeld University, Germany
E-mail: mlohse {at} techfak.uni-bielefeld.de

Prof. Joachim Meyer, Ben-Gurion University of the Negev, Israel
E-mail: Joachim {at} bgu.ac.il

Prof. Yukie Nagai, Osaka University, Japan
E-mail: yukie {at} ams.eng.osaka-u.ac.jp

Dr.-Ing. Britta Wrede, Bielefeld University, Germany
E-mail: bwrede {at} techfak.uni-bielefeld.de

Human interaction is highly intuitive: we infer the reactions of our interaction partners mainly from what we have learned in years of experience, and we often assume that other people have the same knowledge about certain situations, abilities, and expectations as we do. In human-robot interaction (HRI) we cannot take for granted that this is equally true, since HRI is asymmetrical. In other words, robots have different abilities, knowledge, and expectations than humans. They need to react appropriately to human expectations and behaviour. In this respect, scientific advances have been made to date for applications in entertainment and service robotics that largely depend on intuitive interaction. However, HRI today is often still unnatural, slow, and unsatisfactory for the human interlocutor. Both the sensorimotor interaction with the environment and the interlocutor, and the social aspects of the interaction, still need to be researched and improved. Therefore, this full-day workshop aims to bring together researchers from different scientific fields to discuss these crosscutting issues and to exchange views on the preconditions and principles of intuitive interaction. [More information]

Location: Building BC, room BC01

Times: Start: 9:00 am
Morning Break: 11:00am-11:30am
Lunch: 1pm-2pm
Afternoon Break: 4pm-4:30pm
End: 6:00 pm

HRI Pioneers Workshop 2011

Thomas Kollar, (Contact Organizer), Massachusetts Institute of Technology, USA

Astrid Weiss, PhD, University of Salzburg, Austria

Jason Monast, University of Denver, USA

Anja Austermann, PhD, SOKENDAI, Japan

David Lu, Washington University of Saint Louis, USA

Mitesh Patel, University of Technology Sydney, Australia

Elena Gribovskaya, EPFL, Switzerland

Chandan Datta, University of Auckland, New Zealand

Richard Kelley, University of Nevada, USA

Hirotaka Osawa, PhD, Japan Science and Technology Agency, Japan

Lanny Lin, Brigham Young University, USA

The field of human-robot interaction is new but growing rapidly. While there are now several established researchers in the field, many current human-robot interaction practitioners are students or recent graduates. This workshop, to be held in conjunction with the HRI 2011 conference, aims to bring together this group of researchers to present their current research to an audience of their peers in a setting that is less formal and more interactive than the main HRI conference; to talk about the important issues in their field; and to hear about what their colleagues are doing. Participants are encouraged to actively engage and form relationships with others by discussing fundamental topics in HRI and by engaging in hands-on group activities. [More information]

Location: Building BC, room BC04 * Please note the room change.

Times: Start: 9:00 am
Morning Break: 11:00am-11:30am
Lunch: 1pm-2pm
Afternoon Break: 4pm-4:30pm
End: 6:00 pm

HRI Pioneers Workshop 2011
https://humanrobotinteraction.org/2011/2010/11/hri-pioneers-workshop-2011/ (Mon, 22 Nov 2010)

The field of human-robot interaction is new but growing rapidly. While there are now several established researchers in the field, many current human-robot interaction practitioners are students or recent graduates. This workshop, to be held in conjunction with the HRI 2011 conference, aims to bring together this group of researchers to present their current research to an audience of their peers in a setting that is less formal and more interactive than the main HRI conference; to talk about the important issues in their field; and to hear about what their colleagues are doing. Participants are encouraged to actively engage and form relationships with others by discussing fundamental topics in HRI and by engaging in hands-on group activities.

Below are important dates and the list of organizers. Please visit the HRI Pioneers Workshop website for more information.

Important Dates

  • 29 December 2010 – Applications due
  • 29 January 2011 – Notification of acceptance
  • 7 February 2011 – Submission of final manuscript
  • 6 March 2011 – HRI 2011 Young Pioneers Workshop
  • 7-9 March 2011 – HRI 2011 Conference

Organizers

  • Thomas Kollar, (General Chair) — Massachusetts Institute of Technology, USA
  • Astrid Weiss, PhD (Co-Chair) — University of Salzburg, Austria
  • Jason Monast (Finance Chair) — University of Denver, USA
  • Anja Austermann, PhD (Website Chair) — SOKENDAI, Japan
  • David Lu (Proceedings Chair) — Washington University of Saint Louis, USA
  • Mitesh Patel (Communication Chair) — University of Technology Sydney, Australia
  • Elena Gribovskaya (Local Organisation Chair) — EPFL, Switzerland
  • Chandan Datta (Practical Session Chair) — University of Auckland, New Zealand
  • Richard Kelley (Program Committee Chair) — University of Nevada, USA
  • Hirotaka Osawa, PhD — Japan Science and Technology Agency, Japan
  • Lanny Lin (Panel Chair) — Brigham Young University, USA