Facial Automaton for Conveying Emotions as a Social Rehabilitation Tool for People with Autism

I. INTRODUCTION

The human face is the main organ of expression, capable of transmitting emotions that are almost instantly recognised by fellow beings. In this paper, we describe the development of a lifelike facial display based on the principles of biomimetic engineering. A number of paradigms that can be used for developing believable emotional displays, borrowing from elements of anthropomorphic mechanics, control, and materials science, are outlined.


FACE (Facial Automaton for Conveying Emotions) [Pioggia 2004, Pioggia 2005], on the other hand, follows a biomimetic approach. In FACE, biological behaviour is mimicked by means of dedicated smart soft materials and structures, intelligent control strategies, algorithms, and artificial neural networks. It is part of an innovative android-based treatment which focuses on core aspects of the autistic disorder, namely social attention and the recognition of emotional expressions. FACE is a socially believable artifact able to interact with the external environment, interpreting and conveying emotions through non-verbal communication.

This approach provides a structured environment that people with autism can consider to be “social”, helping them to accept the human interlocutor and to learn through imitation. On the basis of a dedicated therapeutic protocol, FACE is able to engage in social interaction by modifying its behaviour in response to the patient’s behaviour. Following an imitation-based learning strategy, we hope to verify that such a system can help children with autism to learn, interpret and use emotional information. If such learned skills can be extended to a social context, the whole FACE system will serve as an invaluable therapeutic tool for ASD, which we call FACE-T (T as in “therapy”). The FACE-T system consists of FACE itself, a sensorized life-shirt, and the therapeutic protocol.


Fig. 1. The latest prototype of FACE.

Human-like robots which embody emotional state expression, empathy and non-verbal communication have also been proposed for autism therapy [25]. They can be thought of as a form of robot-based affective computing. Our group has pioneered this approach through the use of the lifelike android FACE (Facial Automaton for Conveying Emotions), which presents emotional information through non-verbal communication. Our hypothesis is that adaptive therapy using a robot endowed with the ability to sense, adapt and respond to a patient’s postulated emotional and mental states will enable autistic subjects to learn empathy and gradually enhance their social competence. In particular, the therapy could help autistic subjects to interpret the emotional states of an interlocutor through familiarity and contextual information presented in a stepwise and controlled manner. The FACE robot is capable, in its present embodiment, of mimicking a limited set of facial expressions, which are more easily accepted by autistic patients because of their simple and stereotypical nature. Figures (a), (b) and (c) show snapshots of an experimental session.

In figure (a) the subject is shown completely focusing his attention on FACE. Figure (b) shows a spontaneous approach by the subject for eye contact with FACE, while figure (c) shows the subject’s non-verbal requesting through a conventional gesture (a wink).


Fig. (a). Experimental session: focus of attention on FACE.


Fig. (b). Experimental session: spontaneous approaching for eye contact with FACE (S4).


Fig. (c). Experimental session: non-verbal requesting through a conventional gesture (S4).

Each of the three therapeutic stages is based on a framework that includes several phases: subject-robot familiarization, subject recognition of the robot’s expressions, shared attention, and name calling.

In the first stage, the subject is led through a session with the help of a trained therapist, who concentrates on inducing spontaneous behaviour and reactions, imitation, and emotion recognition while the robot generates a pre-programmed series of expressions.

In the second stage, the therapeutic set-up involves FACE and a therapist who can intervene and decide which expression the robot should show while the patient interrogates and explores it.

In the third stage, the adaptive therapeutic set-up involves FACE, and the therapist operates both as supervisor and observer.

II. MATERIALS AND METHODS

The FACE-T set-up in which the android-guided therapy takes place includes a room equipped with motorized cameras, directional microphones, and other acquisition systems.

A. FACE Robot Hardware

FACE is an android used as an emotion-conveying system. It consists of a female face made of Flubber™, a skin-like silicone-based rubber patented by Hanson Robotics. Android faces produced by D. Hanson have been used in other robots, with their own software architectures, such as the Ibn Sina robot [29], Javier Movellan’s robot at UCSD [30], and the INDIGO project [31]. The FACE servo motors are all integrated in the android skull, except for the 5 neck servos that allow pitch, roll and yaw movement of the head. The android has a CCD camera in its right eye, used for face tracking of the subject through an OpenCV-based face tracking algorithm [32].
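As a concrete illustration of how such OpenCV-based tracking can be set up, the following Python sketch detects the largest face in each camera frame and derives a pan/tilt correction for the neck servos. This is a minimal sketch, not the published algorithm of [32]: the Haar cascade detector and the send_neck_command() call are illustrative assumptions.

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade (an assumption:
    # the original system may use a different detector).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # the eye-mounted CCD camera

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Track the largest detected face.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            # The offset of the face centre from the image centre drives the
            # yaw (pan) and pitch (tilt) corrections for the neck servos.
            dx = (x + w / 2.0) - frame.shape[1] / 2.0
            dy = (y + h / 2.0) - frame.shape[0] / 2.0
            # send_neck_command(yaw=-0.01 * dx, pitch=-0.01 * dy)  # hypothetical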

B. FACE Control

The entire FACE-T system behaviour is controlled by custom-made software responsible for monitoring the environment, the subject, and the robot. A number of subsystems control the different features of the system, with the goal of combining reactive and deliberative behaviours. This architecture guarantees signal synchronization.


Fig 1: Connection scheme of the FACE-T platform.

The different roblets that communicate with the FACE control unit, where the brain and the body map are hosted (blue), are shown in orange. Supervision and therapist control are provided through the FACE configurator roblet (green).

Each of these files represents a still facial expression; face movements are achieved by interpolating between known positions. This is a standard approach in 3D animation that has greatly influenced the design of our control system, and it enables forward and backward compatibility with well-known graphics programs. This first abstraction layer, designed to decouple the software from the specific hardware, is used by another layer whose purpose is to receive requests for facial expression adaptation and combine them appropriately. The control software is inherently concurrent, and the different behavioural modules can send requests for facial expression adaptation without having to deal with possible conflicts.
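A minimal Python sketch of this keyframe approach follows: each expression is a vector of normalized servo positions, movement is linear interpolation between keyframes, and concurrent adaptation requests are merged by weighted averaging. The vector size, the expression values and the merge rule are assumptions for illustration, not the actual FACE control code.

    import numpy as np

    N_SERVOS = 32                    # degrees of freedom of the facial skin
    NEUTRAL = np.zeros(N_SERVOS)     # neutral expression keyframe
    SMILE = np.full(N_SERVOS, 0.6)   # placeholder target keyframe

    def interpolate(start, target, steps):
        """Yield intermediate servo frames from start to target."""
        for t in np.linspace(0.0, 1.0, steps):
            yield (1.0 - t) * start + t * target

    def combine_requests(requests):
        """Merge concurrent (expression, weight) adaptation requests into
        one servo frame, so behavioural modules never handle conflicts."""
        total = sum(w for _, w in requests)
        if total == 0.0:
            return NEUTRAL
        return sum(w * expr for expr, w in requests) / total

    target = combine_requests([(SMILE, 0.8), (NEUTRAL, 0.2)])
    for servo_frame in interpolate(NEUTRAL, target, steps=50):
        pass  # send_servo_frame(servo_frame)  # hypothetical hardware call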

C. Sensorized Shirt

The sensorized shirt is based on e-textiles and has been developed in collaboration with Smartex Srl, Prato, Italy. It gathers, computes and transmits heart rate (HR), heart rate variability (HRV), RR intervals, skin conductance, skin temperature and respiratory rate, all of which are known to be bodily correlates of emotional states [6]. Three key components make up the sensing shirt: fabric electrodes based on interconnecting conductive fibers, a piezoresistive network, and a wearable wireless communication unit [34]. Electrodes and connections are interwoven within the textile by means of natural and synthetic conductive yarns.
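By way of example, two of these correlates can be derived directly from the stream of inter-beat (RR) intervals. The following sketch computes mean heart rate and RMSSD, a standard time-domain HRV index; the sample values are invented for illustration.

    import numpy as np

    rr = np.array([0.82, 0.80, 0.85, 0.79, 0.83, 0.81])  # RR intervals, seconds

    hr = 60.0 / rr.mean()                       # mean heart rate, beats/min
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMSSD, seconds

    print(f"HR = {hr:.1f} bpm, RMSSD = {rmssd * 1000:.1f} ms")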

D. Eye Tracking

There is a growing body of research that makes use of eye-tracking technology to study attention disorders and visual processing in ASD. Atypical gaze patterns have already been described for individuals with ASD when presented with social scenes and faces [35] [36]. Vivanti et al. [35] found reduced attention to the face, but not to the actions, of a demonstrator to be imitated in a group of children with autism. Gaze tracking is thus a critical and useful indicator of a subject’s interest and emotional involvement during a therapeutic session with FACE.
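One simple way to quantify such involvement during a session is the fraction of gaze samples that fall inside the robot’s face region. The function below is a hypothetical illustration; the coordinate system and the face bounding box are assumptions, not part of the FACE-T platform.

    def gaze_on_face_ratio(gaze_samples, face_box):
        """gaze_samples: list of (x, y) gaze coordinates in the scene image;
        face_box: (x_min, y_min, x_max, y_max) bounding the robot's face."""
        x0, y0, x1, y1 = face_box
        hits = sum(1 for x, y in gaze_samples
                   if x0 <= x <= x1 and y0 <= y <= y1)
        return hits / len(gaze_samples) if gaze_samples else 0.0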


Figure 2: The headband of HATCAM, showing the mirror and camera.

III. RESULTS

A. PHASE 1

In order to obtain a preliminary evaluation of the behaviour of a child with autism when exposed to a home-built version of FACE (shown in figure 3) with a restricted set of emotional expressions, we set up a preliminary experiment in which the reactions of two children, one normally developing and the other with autism, were monitored. During the session with FACE, after a preliminary explorative phase, the autistic child (7 years old) attributed an emotion (sadness) to the robot and did not show any sign of fear, as confirmed by the therapist; furthermore, heartbeat monitoring did not show any significant change. This experiment suggested that autistic children can develop positive “social” interactions with an expressive system, possibly because the range of actuated emotional states is limited, reproducible and easy to process.


Figure 3: The earliest version of FACE, with a restricted set of facial expressions.

B. PHASE 2

Following the initial evaluation in the first phase, the robot was improved aesthetically as well as functionally by employing improved elastomers and increasing the degrees of freedom in the movement of its facial skin. To evaluate the behaviour of autistic children in therapist-guided sessions with FACE, the reactions of 4 subjects (3 male and 1 female), between 7 and 20 years old, were monitored and compared, as reported in [27][28]. The subjects were diagnosed with autism using two specific diagnostic instruments: the ADI-R (Autism Diagnostic Interview-Revised) and the ADOS-G (Autism Diagnostic Observation Schedule-Generic).


Figure 4: The second version of FACE, designed and built by the Academy of Arts, Carrara, Italy, in collaboration with the University of Pisa.

C. PHASE 3

The current version of FACE is more sophisticated and more believable than the previous two versions (figure 5). It is endowed with face tracking, such that it automatically follows the subject’s head movements, and an auto-blinking routine, allowing attention sharing to be conducted in a more natural fashion. Furthermore, the 32 degrees of freedom of the facial skin enable the implementation of a wide range of expressions and movements. Clinical trials are ongoing.


Figure 5: The current FACE robot, developed by Hanson Robotics (hansonrobotics.com) in collaboration with the University of Pisa.

IV. FUTURE DEVELOPMENTS

As underlined in section I, our goal is adaptive therapy, tailored to each patient’s needs and learning skills. Our experimental paradigm is based on modulation of the robot’s social behaviour according to the self-adaptive control algorithm schematized in fig. 6. The robot’s mood and expressions can be built up in complexity to incorporate scenarios and sessions which include hidden meanings and innuendoes, the interpretation of which is among the most well-known social limitations of ASDs.

A. Setting Up Mood States

The robot’s “mood state” is its baseline behaviour, which influences the automatic selection of expressions during the therapy. The operator or the therapist can also manually select expressions through two dedicated graphical user interfaces (GUIs). In order to maintain a stable and strong robot-subject empathic link, it is important to prevent what we call the “Joker Effect”: the empathic misalignment typical of sociopaths, who are not able to regulate their behaviour to the social context. The Joker Effect, named after the Batman character, consists in the generation of expressions that are unsuitable for the therapeutic context and could induce fear or discomfort in the subject.


Figure 6: The adaptive control algorithm scheme.

FACE will autonomously choose its expressions from a predetermined set according to its “mood state”.

In order to prevent the Joker Effect, selected expressions are not fully actuated: they can be modulated in intensity from 0 to 1 or suppressed entirely. The final expression intensity is modulated according to the subject’s state, as postulated from the physiological signals, by merging the motor positions of the selected expression with those of the neutral expression.
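A minimal sketch of this modulation, assuming expressions are vectors of motor positions and that some upstream module maps the physiological signals to an intensity in [0, 1], might look as follows (the mapping itself and the 32-servo layout are assumptions):

    import numpy as np

    def modulated_expression(target, neutral, intensity):
        """Blend target and neutral motor positions. intensity = 0 suppresses
        the expression entirely; intensity = 1 actuates it fully."""
        intensity = float(np.clip(intensity, 0.0, 1.0))
        return intensity * target + (1.0 - intensity) * neutral

    neutral = np.zeros(32)
    smile = np.full(32, 0.7)  # placeholder target motor positions
    # A subject postulated to be uneasy gets a softened expression.
    servo_frame = modulated_expression(smile, neutral, intensity=0.4)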

For instance, the robot’s eye camera will be used to track the subject’s face and to turn the robot’s neck and eyes so as to keep the subject in view, as a real person would do. Furthermore, a 3D simulator of FACE is being developed in order to study and design facial expressions.

The FACE editor will be extended with a vision module to acquire facial expressions from a 3D camera.


Figure 7: The FACE editor in the 3D simulator mode.

V. CONCLUSION

The aim of FACE-T is to act as a human-machine interface for non-verbal communication. The learning process in FACE is based on imitating predefined stereotypical behaviours, which can be represented in terms of Facial Animation Parameters (FAPs), followed by continuous interaction with its environment. At present, FACE is applied to enhance social and emotive abilities in children with autism. The experimental sessions allowed us to collect preliminary data on the therapeutic treatment of patients with disorders in the autistic spectrum.

However, what is behind FACE? There is the application of smart soft matter, algorithms and robotics; there is the attempt to understand the complexity of biological behaviour; and there are people with autism.
