A recent study conducted by Imperial College London found that humans demonstrate empathy toward Artificial Intelligence (AI) virtual agents and tend to protect those that are excluded during playful interactions.
The researchers used a virtual game called "Cyberball," in which participants toss a virtual ball to each other on a screen. The goal was to observe how 244 individuals, aged 18 to 62, would react to seeing an AI agent being excluded from play by another human player.
In some versions of the game, the human player shared the ball fairly with the AI agent. In others, the AI agent was deliberately excluded, receiving the ball less frequently. The results showed that most participants sought to correct this unfairness by passing the ball more frequently to the excluded AI agent. Notably, older participants were even more likely to notice and act against exclusion.
Implications for Virtual Agent Design
The research suggests that humans have a natural inclination to treat AI agents as social beings. Jianan Zhou, lead author of the study and a member of Imperial's Dyson School of Design Engineering, commented: "This study offers unique insights into how humans interact with AI, with interesting implications for the design of these systems and for our understanding of psychology."
As virtual agents become more common in collaborative tasks and daily interactions, understanding this dynamic becomes crucial. Researchers warn that while this trend can be beneficial in collaborative work environments, it can be concerning when virtual agents begin to replace human interactions in social or mental health contexts.
Dr. Nejra van Zalk, co-senior author of the study, added: "Our results raise important questions about how people perceive and interact with these agents. Avoiding overly human-like agent designs could help people better distinguish between virtual and real interactions."
Next Steps in Research
Recognizing that the virtual game scenario may not fully represent real-life human interactions, the researchers plan to conduct additional experiments. These will involve face-to-face conversations with virtual agents in different contexts, allowing them to assess whether the results extend to other forms of interaction.
Conclusion
This study highlights the need to consider human perceptions when designing AI agents. By understanding that people can treat these agents as social beings, developers have the opportunity to design experiences more deliberately and responsibly, balancing technological efficiency with ethical and psychological considerations.