Speaker Spotlight: Professor Hatice Gunes

Imagine having a wellbeing coach available 24/7 who can reliably improve your mood. You never feel judged, the coach is always on top form, and sessions are tailored to your personality to keep you mentally well.

Hatice Gunes, Professor of Affective Intelligence & Robotics and Leader of the Affective Intelligence & Robotics Lab in the Department of Computer Science and Technology, University of Cambridge, hopes to make this vision a reality – in robot form.

Professor Gunes and her team are working to create emotionally intelligent robots that can help make us more resilient to life’s challenges. Two of these wellbeing robots, Pepper and Nao, are set to make an appearance at the Cambridge Festival on the 9th of April between 10am and 12.30pm.

Cambridge Festival: What does the robot look like?  Does it look like a computer, or does it look more like R2D2 from Star Wars for instance?

Hatice Gunes: Robots come in different forms and sizes – a robot could look very mechanical, like an animal such as a dog (zoomorphic robots), or even like a human (humanoid robots). Even for human-looking robots there is a spectrum – see, for example, the collection of robot images in the ABOT dataset here (click on view 100).

CF: Why use a robot rather than a real person? 

HG: The usage of, and need for, a robot very much depends on the application and the context. Ideally, robots would be used alongside human experts to improve some aspects of human lives rather than replacing the experts. For example, they could be used for robot-enhanced therapy for children with autism spectrum disorders alongside the experts, or in care homes to physically and mentally engage the elderly for therapeutic purposes, in addition to the regular support they would get. They could also be used as tele-operated entities that give humans physical access to a location or space when they cannot be there in person (eg during the pandemic – see here or here for further examples).

CF: How do responses to a robot differ from responses to a human being?

HG: This again depends on the application scenario and the context they are used in. We cannot claim the responses will always be different: as we have shown in our mindfulness-related work, both modes can be equally effective depending on the scenario and the application.

On the other hand, previous work in the child-robot interaction field (CHRI) has shown that children can be less reluctant to divulge secret information to a robot than to a human – eg a study that used both humans and robots to interview children about bullying showed that children were significantly more likely to tell the robot interviewer than the human interviewer that fellow students were teased about their looks, see here.

CF: Why did you decide to create a humanoid robot, rather than an app or a device?

HG: My team does not focus on creating the physical form and shape of robots; instead, we use commercially available ones and focus on building new interfaces, algorithms and models to be deployed on those robots for different applications.

The choice will depend on the needs of the potential users and the application scenario. For example, virtual conversational agents, in the form of mobile applications and chatbots, have helped improve the accessibility of mindfulness training. However, most of these agents are not interactive, and even when they are, they rely on text-based, non-adaptive communication with very little human feedback, resulting in individuals losing interest in and dropping out of such mental health interventions. A robot, by contrast, is embodied and has a physical presence, and this really helps with engaging people.

CF: How emotionally intelligent are these robots?  And what are the limitations?

HG: Designing robots with socio-emotional skills is vital for sustained long-term usage. These skills are important for robots to be able to provide not only physical but also social support to human users, and to engage in and sustain long-term interactions with them in a variety of application domains that require human-robot interaction, including healthcare, education, entertainment, manufacturing and many others. But creating these skills in robots is very challenging.

The availability of commercial robotic platforms and developments in collaborative academic research give us a positive outlook. However, the capabilities of current social robots are quite limited. Some major limitations relate to the physical capabilities of these robots: battery life is short (they heat up easily and cannot run for more than a certain number of hours), and the on-board hardware is not sufficient to run complex computer vision and machine learning algorithms in real time, so additional computers or cloud services are usually needed. Also, commercial robots do not adapt – they always provide the same response or reaction to a particular human action, which can negatively impact uptake once the novelty effect wears off.

CF: What has your research into wellbeing robots and how people interact with them revealed so far? Has it been successful?

HG: Our study on mindfulness training delivered over five weeks by both a human coach and a teleoperated robot coach showed that both evoked positive responses from participants across all the sessions. The human coach was rated significantly higher in terms of Animacy, Likeability and Perceived Intelligence. We also found that participants’ personality traits, such as Conscientiousness and Neuroticism, influence their perception of the robot coach – which suggests that personalising and adapting to individual attributes and needs is an important step moving forward in this research.

Therefore, we have started focusing on how to equip social robots with lifelong and continual learning capabilities, so that they learn as they go, and adapt and personalise.

This is something that the current commercial / prototype humanoid robots cannot do.

CF: What is the next stage for your work in this area? How far away do you think we are from robots being used in classic therapy?

HG: See above – we are focusing on the adaptation and personalisation aspects and on making the robot coach fully autonomous. Most studies use teleoperated or scripted robots, so there is still a lot to be done in this area.

Finally, I would argue for using robots alongside human therapists in classic therapy – see my earlier comments.