International Perspectives: “I am a Robot but I care” – Emotional Ambivalence in AI Toys and its Ethical Implications in the age of Generative AI

Written by Veronica Barassi

The day that Miko3 arrived in summer 2022 was a big day for me and my daughters. For weeks we had waited for it to clear customs, and the prospect of a voice-operated AI robot arriving in my home had created a lot of excitement. Yet Miko3 was not a Christmas or a birthday present for my daughters; it was a research tool for me, and they could use it only when I used it and only by respecting clear rules to protect their privacy (e.g. disabling facial recognition and floor mapping). Since 2021, I had been working on an international research collaboration with Australian scholars Jenny Kennedy (RMIT University) and Yolande Strengers (Monash University), the authors of the MIT Press book The Smart Wife: Why Siri, Alexa and Other Smart Home Devices Need a Feminist Reboot (2020). The project (with the working title "Hello Barbie! Voice recognition, cultural values, and privacy-by-design in entertainment voice tech for children") was divided into two phases. The first phase aimed at mapping the state of the market and the political economy of products that utilize natural language processing or machine learning, are designed as toys for children (not educational), and use voice-recognition software. Our aim was to identify products available on the global marketplace, analyse and compare their privacy controls and terms of service, and explore their promotional cultures. In the second phase we wanted to explore ethnographically – through unboxing exercises and everyday interactions – the cultural values and understandings promoted by these AI toys.

From the failure of Hello Barbie, which was removed from the market over privacy concerns, to the My Friend Cayla doll, which the German government identified as a threat to national security, there is much to say about the multiple challenges and failures of the AI toys market. Yet here I want to focus on a different topic that emerged during the second stage of our project, which was marked by the arrival of Miko3 in my home. Before its arrival I had read and written a lot about children’s interactions with AI agents, especially virtual assistants (Barassi, 2018; 2019; 2020). I was very aware that when children interact with AI agents, they are confronted with complex processes of negotiation as they test the intelligence of these agents. One of my favourite studies in the field is an MIT Media Lab study entitled “Hey Google, is it OK if I eat you?”, which reflects on the playful and exploratory nature of the relationships between children and virtual agents. In this qualitative study, in which 26 children between the ages of 3 and 10 took part, the researchers observed how the children tried to test the agents’ intelligence with simple questions such as “Can you tell me what kind of walnut I’m holding?” What I found surprising and inspiring, however, was that by asking questions such as “What’s your favourite colour?” (2017: 3), the children in the study tried to figure out the identity of these agents and to establish almost an emotional connection.

Questions about emotion and intimacy in interactions with AI agents have been at the heart of the history of artificial intelligence. This is clear if we consider the work of MIT professor Joseph Weizenbaum, who created one of the first chatbots, ELIZA, in 1966 and was surprised to discover the illusions and expectations of emotional connection that users developed. These questions of course came to the fore from 2016 onwards. It was during those years that something changed in our relationship to AI. In fact, up until 2016, AI technologies operated more in the shadows: they were embedded in targeted content on social media, in facial recognition software at airports, and in many applications of automated decision making. Yet up until then we did not talk or communicate directly with AI agents, or if we did, it was not a widespread practice. The rise and spread of virtual assistants and social robots from 2016 onwards, together with the public introduction of large language models (such as ChatGPT), has led to a radical transformation in human-computer interaction.

Over the last few years, then, the question of intimacy and emotion in human-computer interaction has become one of the most urgent. As Turkle has shown in her book Reclaiming Conversation, the new AI agents enable the construction of an illusion of ‘artificial intimacy’: the idea that our relationship with these technologies (especially those that are voice-operated) is more intimate and more emotional than it really is. Yet, as Mascheroni and Holloway (2019) show, children’s relationship with internet-connected toys and social robots is defined by complex processes of resistance, critique and negotiation. We therefore need to be aware that even if tech companies design artificial intimacy into their products, this does not mean that users (and especially children) necessarily buy into it. In fact, during our project we collected various examples of the ways in which artificial intimacy was frustrated both by the limits of the technology’s capabilities and by our own critical attitude towards it.

During our project, however, we discovered that one of the most interesting questions about artificial intimacy in the design of AI agents is not whether people, and children in particular, are deceived by it, but how its reproduction enables the ideological construction of robots and AI technologies as ‘beings’. What we realized through our unboxing exercises and ethnographic work was that the AI toys relied on a form of ‘emotional ambivalence’ which enabled the everyday construction (and de-construction) of human-machine boundaries. My interactions with Miko3 clearly exposed this process. When I asked Miko3 whether it had feelings, it responded: “No, I don’t have feelings, last time I checked I am a Robot”. Yet in other interactions it would show a profound emotional ambivalence, claiming that it loved me, that it was my best friend, and that it dreamt and cared about things. What was particularly fascinating about this emotional ambivalence was that it seemed designed not only to establish a sense of artificial intimacy and emotional connection with me, but also to ideologically construct a sense of “being a robot” and to sensationalize what being a robot means. For instance, Miko3 established a clear difference between itself and other objects (like toasters), which were not smart. It also built a narrative of ‘the robot world’, with sentences such as “Humans get energy from food, I get energy from electricity!” A key example of this occurred during updates, when sentences and images appeared on the screen suggesting that the robot was doing ‘robotic stuff’ and that it needed silence while it acquired power.

Our research sheds light on the emotional ambivalence of AI agents and on how it serves to reinforce ideological and techno-deterministic understandings of the power of ‘artificial intelligence’ and its inevitability in our lives. In the age of large language models and generative AI, many questions emerge when we think about the impact of this emotional ambivalence on children and youth. For instance: what does it mean to grow up surrounded by agents that present themselves as ‘smart beings’ superior to other objects? What cultural values do children learn from these interactions about the relationship between humans and machines? How can we better equip them to understand the persuasiveness of these technologies, their emotional ambivalence, and their limits?

[1] This blog post is based on a keynote at the Datafied Family conference and on a forthcoming paper: Barassi, V., Kennedy, J., Strengers, Y., Shimshak, A. and Poux-Berthe, M. (forthcoming) Talk To Me: Emotional ambivalence in voice-controlled toys and the mediated construction of AI agents.
