ALIZ-E project


Children's perception of a Robotic Companion in a mildly constrained setting.

Research Area: Uncategorized Year: 2011
Type of Publication: In Proceedings
Authors:
  • Marco Nalin
  • Linda Bergamini
  • Alessio Giusti
  • Ilaria Baroni
  • Alberto Sanna
Book title: IEEE/ACM Human-Robot Interaction 2011 Conference (Robots with Children Workshop)
BibTex:
Abstract:
This paper presents the results of a study, conducted by the Scientific Institute San Raffaele in Milan, involving 35 children between 8 and 11 years old. The purpose of this study was to assess the children's perception of a robotic companion. The interaction was organized in small groups (3-4 children per session), lasted a fairly short time (15 minutes), and was structured in the form of a game in which the children had to discover how to activate all of the robot's "capabilities" (four in total, one of which involved physical contact with the robot). The robot was controlled through a Wizard of Oz interface, through which an operator activated the specific behaviors. The study demonstrated that all of the children favorably accepted the presence of the robot and that they were willing to spend more time with it. Furthermore, the study indicated that children tend to humanize the robot, assigning it functions, behaviors, and emotions that are typical of human beings. Another interesting result is that all of the children claimed (through a dedicated questionnaire) that the robot would be able to support them if they were feeling down or worried about something.


Random Research Highlight

As a step towards bridging the gap between linguistic and non-verbal communication during multi-modal interaction, we worked on non-linguistic utterances (think of the clicks and beeps known from film characters such as R2D2 or Wall-E), a powerful way to communicate with both children and adults. Through ALIZ-E we now have a good understanding of how people interpret non-linguistic utterances – much in the same way as they interpret language – and we know how to use non-linguistic utterances effectively in human-robot interaction; for example, mixing language and non-linguistic utterances works better than using non-linguistic utterances on their own.