The ALIZ-E project set out in 2010 to build the artificial intelligence (AI) for small social robots and to study how young people would respond to these robots. At the time we knew how to build robots that would interact with people for several minutes, but we did not know how to build robots that remained engaging for a longer period of time, such as hours or perhaps even days. Long-term human-robot interaction has tremendous potential, as the robots can then be used to bond with people, which can in turn be used to provide support and education. As an application domain, the project focused on children with diabetes, whom the robots help by offering training and entertainment.
Building the software and hardware for a social robot is a formidable challenge and requires a team with a very diverse skill set. ALIZ-E has experts in human-robot interaction (Plymouth University and the Netherlands Organisation for Applied Scientific Research), natural language processing (the Deutsches Forschungszentrum für Künstliche Intelligenz and the National Research Council), robot hardware and software (Aldebaran Robotics), machine learning (Imperial College London), emotion (University of Hertfordshire) and artificial perception (Vrije Universiteit Brussel). In addition, we teamed up with a hospital that was willing to evaluate prototypes of our social robot (Fondazione Centro San Raffaele).
To test all the components of the software, and to see how children responded to the robots, the team has run dozens of studies during which hundreds of children between 7 and 11 years of age played with and learned from the ALIZ-E robot. The most promising AI components were integrated on a robot which was rolled out to hospitals and summer camps, where children diagnosed with diabetes got to interact with the robot. Children played quizzes and games with the robot, and even danced with it, all the while increasing their understanding of what diabetes is and how to better manage their condition. Because of the robot's presence in the paediatric ward, hospital visits were no longer something to be anxious about. Better still, children felt more confident knowing they had a robot friend supporting them.
The work has led to a raft of scientific insights on how children relate to social robots, and how robots need to be designed to maximise their impact when used for educational or therapeutic purposes. For example, we now know that when a robot gives personalised responses, using your name instead of just "you", and adapts its personality to that of the child, children are more forgiving of mistakes made by the robot, and will retain more of what the robot has taught them. Along the way we discovered many surprising facts, from the quirky (we now know how R2-D2-like sounds can be used to communicate emotions to children) to the profound (the physical presence of a robot is key to learning: delivering the same information without a robot present does not result in learning).
But most important to us has been the response from the children and the adults, from parents to paediatricians: all have been tremendously supportive of our efforts. When we rolled out a robot that was less than perfect (and building a perfect robot is challenging: after all these years, our robot still falls over at random times), we were met with understanding. When the robot worked, we were met with unbridled enthusiasm. Children, parents, nurses and doctors all actively engaged in the design of the interaction, and willingly gave thousands of hours to help us test the robot. Over the course of 4.5 years they helped us build a robot which, we hope, makes a real difference.
The ALIZ-E team
The ALIZ-E scientific and technological goals
- Prolonged human-robot interaction spanning days, rather than only the here and now
- Robotic companions for child-robot interaction, which differs from adult-robot interaction and offers more promising applications
- Robust "any-depth" interaction. Robustness against low-quality perception and interpretation
- Out of the lab and into the real world: the robot will be evaluated in a paediatrics department
- Long-term memory and self-sustained long-term interaction. Key to long-term interaction is having a personalised adaptive memory storing experiences and interaction episodes
- Analysis and synthesis of emotion and affect in human-robot interaction
- Pervasive machine learning and adaptation. Learning experiences will be unstructured. Learning will rely on an array of different approaches
- Cloud computing as a computational resource for autonomous systems
The ALIZ-E scientific and technological contributions
- Robots use a distributed model of long-term memory, which enables the robot to adapt to individual people
- Robots rely on adaptive and sustainable non-verbal interaction, taking an embodied perspective to affective interaction
- User and task modelling is adaptive, allowing the robot to adapt its behaviour to different user profiles and employ user-specific strategies to achieve a goal
- Verbal interaction aimed at long-term interaction is strongly coupled with non-verbal interaction
- Evaluation involved young users outside a lab environment
- The integration of cognitive components was based on Urbi middleware and cloud computing solutions
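The contributions on long-term memory and adaptive user modelling can be illustrated with a minimal sketch. This is not the project's actual code: the class and method names (`LongTermMemory`, `record_episode`, `greet`) are hypothetical, and it only shows the general idea of a profile store that remembers a child's name and past interaction episodes so the robot can personalise its responses.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a personalised long-term memory, in the spirit of
# ALIZ-E's user modelling; all names here are invented for this example.

@dataclass
class UserProfile:
    name: str
    episodes: list = field(default_factory=list)  # summaries of past interactions

class LongTermMemory:
    def __init__(self):
        self._profiles = {}

    def profile(self, user_id, name):
        # Return the stored profile, creating one on first meeting.
        return self._profiles.setdefault(user_id, UserProfile(name))

    def record_episode(self, user_id, summary):
        self._profiles[user_id].episodes.append(summary)

def greet(memory, user_id, name):
    p = memory.profile(user_id, name)
    if p.episodes:
        # Personalised: address the child by name and recall a past episode.
        return f"Hi {p.name}! Last time we {p.episodes[-1]}. Shall we continue?"
    return f"Hello {p.name}, nice to meet you!"

memory = LongTermMemory()
print(greet(memory, "child-1", "Anna"))  # first meeting: generic greeting
memory.record_episode("child-1", "played the diabetes quiz")
print(greet(memory, "child-1", "Anna"))  # later: greeting recalls the quiz
```

In a real system the episode summaries would be produced by the dialogue components and persisted between sessions; the point here is simply that a small per-user memory is what turns a scripted interaction into one that feels personal over days.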