University of Arizona is developing AI that can read human emotions

AI has built its reputation on outperforming humans at manual tasks; now it's learning to be a better team player.
22 January 2020

Universities are adding the human touch to AI. Source: Shutterstock

Virtual assistants (VAs) are now ubiquitous: Siri alone has up to 41.4 million monthly active users in the US, according to Verto Analytics.

The true figure is likely higher, as emerging VAs like Cortana and Alexa are also dominating the market. Consumers use these artificially intelligent helpers to communicate and carry out daily tasks, including ordering groceries, listening to podcasts, scheduling appointments, and, for some, job hunting.

While that's just one example of how interaction with machines is becoming a greater part of our daily lives, both at work and at home, it's important that we build technology that invites us to engage with it. In other words, as technology replaces certain human interactions, we need AI that's easy to 'get on with.'

With that in mind, academics are now exploring how AI can be made more 'sociable'. Researchers at the University of Arizona were awarded US$7.5 million to develop an AI agent that can understand human interactions, pick up on social cues, and help teams achieve their goals using the information it gathers. 

The project addresses a gap in the current market: popular VAs are excellent at retrieving information and executing tasks but unable to pick up on social cues. Siri, for instance, may not distinguish the intonation of a user who is yelling from one who is speaking politely. 

Adarsh Pyarelal, a research scientist on the project, said the new project aims to understand human interactions and build the 'intelligence' to infer and decipher the social cues we routinely rely on in communication, something commercial VAs cannot yet do.

“The goal […] is to develop artificial intelligence with a ‘theory of mind,’ and create AI that is a good teammate to humans.

“The thing that makes a human a good teammate is having a good sense of what other people on their team are thinking, so they can predict what their teammates are going to do with some level of certainty,” Pyarelal stated. 

The project, named Theory of Mind-Based Cognitive Architecture for Teams (ToMCAT), will train AI agents in a Minecraft video game simulation, where they will be paired with human players. 

The AI agents will collect information about human players and their interactions while completing custom-designed missions. 

Using webcams and microphones, the ToMCAT agents will analyze the facial expressions and movements of human players. To capture physiological signals, human players wear a head cap that records brain activity and are connected to an electrocardiogram machine that monitors heart rate. 

After gathering sufficient information through observation, the ToMCAT agent will join forces with the human players, even suggesting ways for the team to achieve its goals.  

In brief, the learning cycle of the AI agents begins with observation, moves to the acquisition of knowledge, and ends with active participation in achieving goals alongside human players. 
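That three-phase cycle can be illustrated with a minimal, hypothetical sketch. The class and method names below are purely illustrative and are not from the actual ToMCAT system; the "theory of mind" here is reduced to a simple frequency model of a teammate's observed actions.

```python
# Hypothetical sketch of an observe -> learn -> participate cycle.
# Names and logic are illustrative only, not the ToMCAT implementation.

from collections import Counter

class SocialAgent:
    def __init__(self):
        # Tally of actions the agent has seen its human teammate take.
        self.observations = Counter()

    def observe(self, teammate_action: str) -> None:
        """Phase 1: passively record what the human teammate does."""
        self.observations[teammate_action] += 1

    def predict_teammate(self) -> str:
        """Phase 2: infer the teammate's most likely next action from
        the observation history (a toy stand-in for 'theory of mind')."""
        action, _ = self.observations.most_common(1)[0]
        return action

    def suggest(self) -> str:
        """Phase 3: participate by suggesting a complementary action."""
        complements = {"mine": "guard", "build": "gather", "explore": "map"}
        return complements.get(self.predict_teammate(), "assist")

agent = SocialAgent()
for action in ["mine", "explore", "mine", "build", "mine"]:
    agent.observe(action)

print(agent.predict_teammate())  # most frequently observed action: "mine"
print(agent.suggest())           # complementary suggestion: "guard"
```

A real agent would replace the frequency counter with models of facial expression, speech, and physiological signals, but the loop structure, observe first, model the teammate, then act, is the same.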

The gradual learning curve enables AI agents to observe human interactions and understand social cues. 

The researchers are hoping that an AI model with some degree of social awareness may drive better decision making in high-stress scenarios. 

In a work environment, a socially savvy AI agent may assist and encourage team collaboration. By leveraging its analysis of human relationships and interaction, the AI agent could be useful in assigning teams based on compatibility and working style or suggesting solutions for teams in highly intense discussions. 

Interestingly, the added 'human touch' in AI models may tilt the balance between human and robot managers. Employees currently prefer human managers for interpersonal interaction: 45 percent believe human managers are better at understanding emotions and showing empathy. 

This may no longer be the case as AI agents are trained to observe and understand human emotions, making them socially aware and, possibly, able to 'empathize.'

Though the development of socially savvy AI models is still at a nascent stage, it is too early to say whether AI will become attuned to human emotions to the extent that it transforms human relationships.