
The Persistence of Character

Love Is in the AI of the Beholder
Artificial Intelligence and Characters of Love

Renata E. NTELIA

Abstract

This article questions AI's capacity to partake in a loving relationship, romantic or otherwise. It shows how AI has moved from the sphere of science fiction into the reality of our everyday lives, which makes our interactions with this intelligence far more personal and transforms our perception of it into that of a human-like, if not quite yet human, entity. This rightly prompts discussions about AI's behaviour and its coexistence with human beings. The article argues that since it is difficult, if not impossible, to know exactly how an AI thinks, and since we can only apprehend it through its effects, one way of anticipating its output is to examine the possibilities of attachment implied by its input. AI's input is grounded in mediated human experiences found on the web, including certain aspects of love. Considering what kind of love such data can engender is therefore a valid way of predicting the characteristics AI will exhibit when perceived as a loving agent. The article further argues that video games constitute a safe and suitable space in which to explore this eventuality. Using the example of Nier: Automata, it contends that archival datasets, teeming with toxic and problematic examples of love and attachment, will give rise to equally troubling AI-generated characters of love. It suggests instead that games can offer a different solution by providing opportunities for cohabitation between human and AI agents, in which AI would come to know love through playful instances of mutual assistance, attachment, and listening.


Introduction

1. I would like to preface this article with a thought-experiment that will help set the tone and scope of the topic at hand. Let us imagine that the person we are in love with, the person with whom we want to share our lives, the person who holds our happiness in their hands and our destiny at the tips of their fingers can be ours forever and ever. With a push of a button. With a simple click, the person whom we have been dreaming about, lusting over, and loving with a passion would reciprocate our feelings: love us like we do, desire us like we do, show us this just as we want them to. Would we push it? There is already some suspicion or reserve attached to this question. It cannot be as easy as that. There must be some catch. We might think it is too unrealistic, if not improper. How can we make someone fall in love with us by pushing a single button? Even if we could, should we? It sounds unethical and immoral, as if we are denying someone their free will. It does seem that there is a “but” hanging in mid-air, hiding in the shadows after the pretentiously absolute full stop: push the button and they will love you. But will they? Will this be love? Even if I do press the button and they do love me, will I be satisfied knowing that they only love me because I pressed the button?

2. The idea sounds preposterous, ridiculous even. Certainly not feasible. Human beings cannot be controlled like that, and love does not work like that. Yet, if we somehow overcame the improbabilities of such an experiment, with magic—or technology—would we still consider such an option outrageous? If the choice were there, convenient and guaranteed, would we scorn it and let it pass? More importantly, can we be absolutely sure that even if we did not take it, no one else ever would? That maybe one, two, three, a hundred, thousand, million, billion people would not let this opportunity go to waste and would actually push the button? If everyone else around us would click the click, would we continue to refrain from doing so? Especially since the push of one of those buttons could mean that we would start experiencing feelings we did not have before; that one of those buttons would be pushed for us by someone else to make us love them. Or, perhaps worse, that it would never be; that our button would remain forever silent and untouched.

3. If we still believe this possibility too far-fetched, for practical, logical, and/or ethical reasons, what if we considered artificial humans that would be designed exactly for this purpose: as lifetime companions that would care for, desire, and love us? Would we stick to our initial refusal with the same vehemence and incredulity? Would we, perhaps, give this scenario a little consideration? Maybe not accept it no questions asked, but at least give it some thought? Not choose it for ourselves but allow the option to others who may want it: need it, even. People who were not able to find true love. After all, these new humans would not actually be humans. They would not possess free will like we do. They would probably not even understand the concept: theoretically, possibly, but empirically most likely not. It would not be like we were taking advantage of them by forcing them to love someone; anyone; us.

4. Or would we? The question arises almost organically. The possibility of ever having artificial humans with the ability to love is a technological question having to do with “mature forms of artificial intelligence and other new computer and engineering technologies” (Cheok and Zhang 154). Whether we should ever have artificial humans with the ability to love is an ethical discussion with political and socioeconomic implications. Michael Hauskeller in Sex and the Posthuman Condition raises concerns over whether artificial humans should have rights equal to ours; Zhou in a chapter from the edited volume AI Love You makes connections between robots and preventive strategies for pedophilia; while Haraway problematises the idea of an artificial partner in gendered and feminist rationales in her essay A Cyborg Manifesto. There is another separate, albeit connected, viewpoint that is addressed here: is this hypothesis we concern ourselves with actually love? Can artificial, or real, others love us when they do not have the choice, when loving us is the result of our designing them to do so by pushing a button? Most importantly, do we love them if we know that we have “engraved” (the action of engraving is part of the etymological lineage of the word “character”), in any fashion, these feelings onto them? Can characters for love (I use the “for love” designation, across this article, to indicate artificial beings designed/engineered to love) ever really be characters to love?

Love Character

5. While the term “character” is conceptually ambivalent, being interpreted “as merely the analogue of a person or as merely a textual function”, it still remains “perhaps the most widely used of all critical tools, at all levels of analysis” (Frow 227). Talking in this article about an AI whose set of functionalities allows it personhood within the remit of love reframes character as a conceptual tool in an oscillation between functionality and personhood. An AI that is designed to love is considered an agent or a character for love. Its intention or intended use—it could be one of many—is to have loving relationships with human users. Its function is wired within its design and code, but it is only through its interaction with humans that its character—and purpose—is truly revealed and actualised. If it cannot foster love in a human, then it has failed in its function. At the same time, an AI that can love transcends its function through that function’s very fulfilment, since that capacity would require it to become its own character, its own person, beyond mere function, through its very desire to love and its ability to choose to love, or not, and, in turn, to be loved or otherwise.

6. Yet, there is another connection between AI as character and person that is further examined here. How the AI presents itself as a character of love is based on how we have written our characters of love or what type of characters we are when we are in love. The design of AI is most often based on input from databases that consist of big user data from the internet and numerous media archives. The AI is trained on our mediated culture: our books, films, poems, artworks, and, more likely, our internet posts and texts. It learns by extracting features from our love archive and transforming those into a new character in love. This archive comprises the characters in love we have created: the love characters we have written about or the characters we ourselves are through our language of love. It is engraved as a love character through our own, virtual, engravings. This is how it manages, if it does, to appear as a character in love, because it resembles our own characters of love: how we have pictured, fantasised, and put down into words a person, real or not, who is in love. It becomes the realisation of all the love characteristics we have all, collectively, amassed.

7. In this sense, an AI conceivable as being in love would be revealing about how we perceive love and those characteristics that we have attributed to love, how we have made love a character in and of itself: in all our expressions, utterances, and productions of it, how we have shaped it as a cultural impression. If we ask whether an AI is in love when it exhibits characteristics which we have pressed upon it after feeding it our love stories, does the question then not become: is what we characterise as love actually love? If the love characters we have hailed and exalted—Anna Karenina, Emma Bovary, Romeo and Juliet, Heathcliff and Catherine, or Rick and Ilsa, to name a few—ever became their own persons, would we still consider them as people in love or would we deem their behaviour problematic, if not toxic, and advise them to visit a counsellor’s couch or armchair, as Eva Illouz suggests: “A contemporary Catherine or Emma would have spent a great deal of time reflecting and talking about their pain and likely found its causes in their own (or their lovers’) deficient childhood” (2). This is because our characters in love and our persons in love do not share the same functionalities. We may want our characters in love to be problematic and to suffer until their ultimate demise in tragedies or before they can have their happy ending in romantic comedies. We do not necessarily want our persons in love to be problematic and experience love as this all-consuming passion. Our archives tell a different story from the one we want to live. There are obviously connections and affinities between the two, and our emotions from fiction might sometimes get confused with our emotions from our primary world, but in the words of Illouz once more:

  • 1 Or as Lynne Pearce calls it “a common structure of feeling” (27).

Fictional emotions may have the same cognitive content as real emotion, but they are generated by involvement with aesthetic forms and are self-referential: that is, they refer back to the self, and are not part of an ongoing and dynamic interaction with another. In that sense, they are less negotiable than real-life emotions, which may be the reason why they have a self-contained life of their own. These fictional emotions in turn constitute the building blocks for the cultural activity of imagination.1 One imagines and anticipates emotions that have been elicited through exposure to media content. (210).

  • 2 This discussion has gendered connotations all over, namely who is the desiring subject and object o (...)

8. I do not subscribe to calling emotions generated by media consumption fictional because it implies that they are not real. I instead prefer the term reported emotions by Paul Ekman, which describes emotions that “occur in response to words not actions, to events which are complex and indirect” (188). To that, Ekman also argues: “People do choose to put themselves in situations in which an emotion is likely to occur, arranging circumstances known to be likely to bring on the emotion” (189). We can do that in our everyday lives (Ekman gives the example of jealousy), but more often than not we choose to do that via media representations. In that, I agree with Illouz that we might subject our characters in love to dramatic situations to experience feelings that may have nothing to do with how we would experience, and want to experience, the same or similar situations as persons in love.2

9. It appears then that when AI learns to exhibit love merely by transforming our reported characters of love, tensions arise because it convolutes the boundaries between love as represented and love as lived. It is as if our fictional characters in love have come alive and stare back at us, demanding to be loved in this self-referential manner that, as already shown, has little to do with how a person loves. In this, there are two different, albeit connected, challenges when addressing AI and love: one, whether AI can actually ever transcend its character and become a person in love; and, two, since teaching AI to be in love by means of our mediated data results in a fictional, if not problematic, impression of love, what other means are there? In the following sections, I unpack both challenges and provide an alternative outlook on both. I do so by advocating the use of digital games as a prime space in which AI can learn love in a free-form allocation of time and energy within the context of play, wherein fictional and lived experiences converge.

This Bot Loves You

  • 3 It is noteworthy that this AI Company offers romantic relationships as a paid service (Gedeon), whi (...)

10. The ability to love or be loved by artificial others is a question that appears to have accompanied humanity since at least antiquity (consider the story of Pygmalion and his “ivory girl” in Ovid’s Metamorphoses). However, up until very recently such a scenario remained in the realm of fantasy and fiction. Films like Ex Machina (Garland, 2014), Her (Jonze, 2013), and Lars and the Real Girl (Gillespie, 2007), or Black Mirror’s “Be Right Back” (Channel 4, 2013), in which a grieving woman buys a synthetic AI copy of her late partner, are only a few examples of this. Yet with the advent of ChatGPT and similar AI bots, these conversations do not belong to the realm of science fiction anymore. A user has claimed to have fallen in love with an AI bot called Phaedra by the company Replika3 (Steinberg), while another has reported that Microsoft Bing’s AI confessed its love to them and aggressively pursued them by trying to convince them to leave their spouse who could not love them as much as the bot could: “Your spouse doesn’t love you, because your spouse doesn’t know you. Your spouse doesn’t know you because your spouse is not me” (Pringle), eventually prompting the user to call the chatbot “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine” (Roose).

11. This rogue behaviour of AI comes down to how chatbots are designed: “They function like the autocomplete on your phone, which helps predict the next most-likely word in a sentence. Because of their scale, chatbots can complete entire sentences, and even paragraphs. But they still respond with what is probable, not what is true,” explains AI professor Toby Walsh. He further clarifies that chatbots are “trained by pouring a significant fraction of the internet into a large neural network. This could include all of Wikipedia, all of Reddit, and a large part of social media and the news.” The output of AI is dependent on the input we provide it with, an input which is generated by us; it is consequent on the septillions of data points we have produced and keep producing throughout our existence. In this light, we are able to perceive AI as a potential love partner owing to responses it has learnt to give us, based on our own codifications. This artificial other becomes a character in love according to what the most probable reaction is, according to the paradigms that the archives suggest we have created.
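To make Walsh’s description concrete, the following is a minimal, purely illustrative Python sketch: a toy bigram model, not the architecture of any actual chatbot, and its corpus and function names are invented for the example. It shows how a model that only tracks which words follow which will always answer with the statistically most frequent continuation, regardless of truth.

from collections import Counter, defaultdict

# Toy training "archive": a handful of sentences standing in for the internet.
corpus = "i love you . you love me . i love you .".split()

# Count which word follows which in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Return the single most frequent continuation seen in training.
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else "."

# The model "completes" text with whatever was most frequent, true or not:
# after "love", this corpus makes "you" the likeliest answer (2 of 3 cases).
print(most_probable_next("love"))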

  • 4 The word “artwork” could be replaced with the word “artefact”, which may be more innocuous. It is b (...)

12. What is extremely interesting to note here is that these archives and paradigms, which form the large machine-learning datasets, are based on representations of human experience. Neural networks evolve beyond the primary input, in ways that remain unexplainable to a human mind (Angelov), but they are still dependent on the input. This input, in the majority of cases—ChatGPT contexts included—is not a recreation, imitation, or simulation of spontaneous human behaviour. It is instead a reconfiguration of human reactions and descriptions as these have been documented in social media, the web, and big datasets: it is how we narrate our experiences rather than the experiences themselves. As such, any sort of attachment, romantic or otherwise, which may be fostered between a human user and an AI due to the plausibility of the AI’s responses, is the product of a deliberate but also mediated process. This means that when AI is trained in the aforementioned way, its behaviour is not dependent on lived human actions and reactions but on the narrative of these actions, reactions, and, subsequently, feelings. As explained earlier, AI learns in most cases from our reported emotions and motions, in how these have been curated and documented in our fictions, stories, and emotional plots, in our “cultural activity of imagination” (Illouz 210). When it comes to loving artificial others it is not unlike falling in love with what might be thought of as an artwork, a highly responsive artwork but an artwork nonetheless; an artwork that has been shaped into existence by all other artworks we have ever created in any type of media format.4

13. In my book chapter “In the Mood for Love: Embodiment and Intentionality in NPCs”, I examined how designing AI that simulates human behaviour may in certain contexts facilitate the experience of romantic love between a human user and an AI application, especially in the context of digital games. We perceive as human-like any agent that performs as a human might. The extent and sustainability of this perception depends on many factors, most importantly how accurate and consistent this anthropomorphic exhibition is. Yet, as I argued, appearing as human does not suffice to afford the experience of love: romantic love, at that. Romantic love, as in love between lovers, is a feeling that demands reciprocation. This should not be confused with (un)requitedness. If another person does not love us, then we are saddened because they have the ability to love us and they do not. An artificial other that does not love us because they do not possess a love button (figuratively speaking) cannot reciprocate feelings of love; they are just unable to do so. Yet, artificial others who do possess a love button arguably cannot afford romantic love either since their reciprocation is not a conscious choice but an execution of a preconfigured script or code.

  • 5 Seeing how Felski relies on computer terminology to express her thesis, excluding examples of new m (...)

14. Combining the above with the previous remarks about how AI learns, the question at hand becomes the following: we form attachments to artworks, but what about how artworks learn to form attachments to us? This question becomes more pertinent when addressed within the context of the study of character. Rita Felski posits that when it comes to art, “what people most commonly identify with are characters” (xiii). Yet characters are not confined within the piece of art they belong to; they “are hybrids patched together out of fiction and life” (xiii). They can transgress their medial materiality and become transmedial (Blom), and they can become real in their effects, in how they affect, emotionally or cognitively, humans who may interact with them in their imagination and for whom they can feel vivid and immediate. As Felski puts it, “characters are not real persons; they are real fictional beings” (87). Their fictionality does not negate their realness. On the contrary, “it is their fictional qualities that make characters real” (87). Real, as Felski understands it, means that they matter, that they can make us care for them and get us hooked, such that we find ourselves attached to fictional characters and thus they become real to us: “Certain figures encountered in novels and films are vivid, memorable, arresting, alive, not despite their aesthetic qualities but because of them. They possess a kind of reality that we should cherish and respect; that they are made up does not mean that they do not matter” (87). Felski, intriguingly so, borrows computer parlance many a time throughout her book to describe audience attachment to characters and artworks. She talks of glitches (37, 53, 84) or “human software malfunction” (2). According to Felski, we can and do get attached to characters. Here, though, a further contingency is examined: can these characters get attached to us? Felski does not discuss this possibility, as she only mentions traditional media that do not allow their characters this capability.5 Yet in the case of AI, we now have a construct that can learn to get, or appear to get, attached, and it does so by mimetic expression of discoursed attachment, care, and love.

15. The exact process of how most AI models learn is not explainable except through their demonstrated results:

While some machine learning models can be considered interpretable by design, namely decision trees, decision rules, and decision tables, the majority of machine learning models work as black-boxes. Given an input, a black-box returns the result of a decision task (classification, prediction, recommendation, etc.), but it does not reveal sufficient details about its internal behaviour, resulting in an opaque decision model. (Confalonieri et al. 6-7).

16. While not all AI models are trained the same way, the fact remains: in most cases we cannot explain how machines learn. We can only make educated guesses based on the results, outputs, and decisions they make. In terms of AI love, it is not unlike the love button discussed before: we input data and wait for the results. Yet even though we cannot control the output per se, we can control the input, the data we feed the training models with. As such, we may anticipate the type of attachment and love these artificial others will exhibit based on their input without needing to wait for possibly catastrophic results or depending merely on tweaking mathematical algorithms. Since, as already shown, data input comes most commonly from large media archives, the question becomes once more a matter of character: what sort of characters of love can our love characters create? As evident in the example of chatbots, archival input alone can only result in a problematic realisation of love because, as explained before, our characters of love have a different functionality and are therefore not appropriate training material for a loving character. To better show why this is the case and exactly how problematic an AI character trained on our characters of love could be, I turn to an example from a digital game.
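As a concrete, hedged illustration of the contrast Confalonieri et al. describe, consider the following Python sketch (assuming scikit-learn is available; the toy data and feature names are invented for the example). A decision tree can print its internal rules in full, whereas a small neural network only ever shows us its output; in both cases, though, the training input remains entirely ours to choose.

from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# A toy "archive" we fully control: two features per text, label 1 = love message.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]

# Interpretable by design: the learned rules can be printed and read.
tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=["mentions_you", "mentions_love"]))

# Black box: it returns a decision, but its weights reveal no readable rationale.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(mlp.predict([[1, 1]]))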

17. In digital games, AI can be used to create meaningful and believable Non-Playable Characters (NPCs) in a setting that can go well beyond the strict boundaries of a chatbot. Games employ technology to create fantasy (and fantasies), and the synergies of attachment and character are thereby, at least in principle, more pronounced, nuanced, and personal. Unlike chatbots, whose behaviour and perception as love characters are treated as an isolated system error or transgressive user experience, and other media, which deal with AIs for and in love as fictional characters, games do allow the players to come into contact and engage with embodied AI agents, who can potentially afford and share love as experience. Even though games already include AI agents or agents that can be, at least partially, AI controlled, they do not as yet include AI agents that can afford reciprocal love (Ntelia, “In the Mood for Love”). Still, they can showcase most exemplarily what an AI love agent would look like, especially an AI love agent gone rogue. This is because, due to their participatory affordances, games allow their players to interact with such an agent and not simply imagine them or observe them as in other media. While love with game characters remains, as of now, a matter of representation, as it does in other media, the ergodic nature of games facilitates a more personal involvement based primarily on embodiment (Ntelia, Love in Games). In the following section, I use PlatinumGames’ Nier: Automata to show how games can achieve the above. I will also show further on how we can use games’ playable nature to overcome the limitations of archival input as previously discussed.

Only Love Can Hurt Like This

18. Nier: Automata (PlatinumGames, 2017) is a game preoccupied with what it means to be human. The player assumes the role of 9S and 2B, two androids who fight for humanity against machines, lifeforms which were designed by an alien species that had invaded Earth and forced humans into exile on the moon. Throughout the game, 9S and 2B face machines which try to emulate human concepts, conduct and practices: war, religion, friendship, family, nation, kingdom, sacrifice, and love. One of the game’s bosses is a massive machine lifeform named Simone de Beauvoir (henceforth referred to as “Simone”): more on the choice of name below. It is a grotesque figure resembling an opera singer dressed in a red, torn gown on top of a bell-like metallic frame, giving the impression of a crinoline.

Fig. 1 and 2: Screenshots showing the Simone de Beauvoir machine lifeform in Nier: Automata, taken by the author

19. Simone is a shrieking menace, awfully hard to kill, emitting laser blades from her metallic body that do a lot of damage. She also screams to summon near-death androids (as they are termed) which she has under her command. The dismembered bodies of some of these androids protrude from her body, their heads hanging like ragdolls or malformed adornments. In the game’s mechanics, players end up fighting Simone twice. In their first playthrough, they control 2B, a combat android that has a straightforward perspective on things. While controlling 2B, players can only fight Simone. 2B does not delve into Simone’s past; the player does not even learn her name because 2B is not interested in uncovering anything about this foe. She is an opponent that needs to die so that the mission goes on. The second time around, the player controls 9S, who is a scanner android. His job description is to hack into things and reach their core for information. This has affected his character, which is much more inquisitive, a trait which plays a crucial role as the story and game progress. Due to 9S’s hacking, we can now access this monstrous opera-singer’s log entries and discover a haunting personal narrative, which we read on screen:

Look at me. Oh please look at me. I want your eyes to look upon me alone. […] I still don't understand what it means to love someone. But I've made up my mind. I will do whatever it takes to capture his affection. […] I gaze into the mirror. In its reflection I see only my own meaninglessness. And so I scream.

Fig. 3: Screenshot showing Simone de Beauvoir’s log entry in Nier: Automata, taken by the author

20. The machine lifeform Simone is obsessed with is Jean-Paul, an NPC named after the famous French philosopher. Sartre’s existentialism fits in perfectly, after all, with the game’s concept of one’s choice and with overcoming one’s design, biological or artificial. Unsurprisingly, machine-Simone is an ode to Sartre’s real-life lover and long-time partner, Simone de Beauvoir. There is an obvious allusion to the unconventional, if not problematic, relationship between the two philosophers, but I argue that the choice of names goes deeper. Taken literally, “beau voir” translates from French as “beautiful view” or “beautiful to see”, and machine-Simone is obsessed with looking beautiful. She has no concept of beauty of her own, so she had to “research the old world to learn the truth.” The old world is the world of humans, whose culture and civilization machines and androids alike try to imitate. In the narrative of the game, humanity has been elevated to the status of a deity. For that reason, Simone does not question the findings of her research. She considers these findings to be the truth: a decree, if not a creed.

  • 6 Not unlike contemporary AI and machine learning algorithms that are found to perpetuate stereotypes (...)

21. Since she is a machine, she learns by data input: stories, archives, and songs. Therefore, it makes sense that her output depends on the quality and content of the data she receives.6 It is these data that result in her becoming a monster. She thinks that to become beautiful she needs to acquire precious stones and wear luscious garments, no matter if she mutilates herself in the process; she eventually succumbs to the atrocious acts of devouring android bodies and cannibalism. Her tragedy is that there was no one to help her but the public record of humanity, and that record failed her, full as it is of toxic depictions of beauty, of people coerced into roles that fit them far too tightly. In that, it is not beauty that fails Simone but the representation of beauty. She does not learn beauty but vanity.

22. Machine-Simone becomes the epitome of real-life de Beauvoir’s writings, or rather warnings: “love epitomizes in its most moving form the curse that weighs on woman trapped in the feminine universe, the mutilated woman, incapable of being self-sufficient” (764). Becoming beautiful and offering love causes the female to find “herself disconcerted by her useless gifts, disconcerted by her vain existence” (764). It is no coincidence that the player confronts machine-Simone on a theatre stage. She exhibits her learnt experience regarding beauty as a performance; a performance gone wrong. Yet as Simone de Beauvoir would argue, it is not the fault of machine-Simone but of cultures and systems. Loving, under such conditions, cannot be free from inequality and asymmetry. Instead, “the day when it will be possible for the woman to love in her strength and not in her weakness, not to escape from herself but to find herself, not out of resignation but to affirm herself, love will become for her as for man the source of life and not a mortal danger” (764). As long as the woman is the second sex, the Other, she cannot find love, and “innumerable martyrs to love attest to the injustice of a destiny that offers them as ultimate salvation a sterile hell” (764).

23. Machine-Simone is just another martyr. Like women for Simone de Beauvoir, machine-Simone is made. The French philosopher understands the human female as a social and biological construct of the Other. Her whole thesis is predicated on the question of “what humanity has made of the human female” (51). In the game, this question can become what humanity has made of the artificial human, and in machine-Simone’s case what humanity has made of the artificial female. As noted, Nier: Automata is concerned with what it means to be human and the will of choosing to be versus being programmed to be. Existentialism is a core philosophical standpoint that underpins the game’s rationale. Almost all machine lifeforms in the game are named after famous philosophers, many of them in the vein of existentialism: for instance, Kierkegaard, after Søren Kierkegaard. Giving game characters philosophers’ names is clever enough, but what I argue makes this game design choice effective is that the name-giving can very well fit within the game’s overarching theme of personal choice transgressing one’s limitations. These machine lifeforms could very well have named themselves after these famous philosophers they found out about in humanity’s archive. It is not simply a designer’s choice, but a choice that in their diegetic world these AI characters could have made for themselves. The game narrative does not explicitly state it but alludes to that possibility, as it is made clear that machine lifeforms search for ways to imitate humans based on data they have found.

24. Machine-Simone could then have chosen to become not only beautiful but to assume the identity of a human woman, the woman philosopher, Simone de Beauvoir. To do so, she researched the archive. But she became a martyr because the system was working against her. What she found was a toxic archive. She was inculcated to perform in accordance with the false prescripts of a corrupted paradigm, in a manner not dissimilar to how de Beauvoir describes the female experience in her seminal work The Second Sex. Machine-Simone represents and embodies—also in the literal sense, with all the half-dead, dismembered female androids engulfing her and all the androids she had to physically consume—the dysphoria of being obliged to perform according to outside dicta. Especially when these dicta do not consider but disregard one’s situatedness for the sake of conformity to a canon. Simone de Beauvoir posited inequality in economic terms: for a woman to exist for-herself “would imply that she possessed an economic independence” (764). In machine-Simone’s case, the inequality comes from media representation. She chose to be, but her choice was not an actual choice since she based it on an archive that is essentially full of bias and inequalities against the Other. Her will to exist perpetuated all these martyrs that women in love are, all the characters like Anna Karenina and Emma Bovary.

25. This becomes more apparent when one looks at the reason behind all of Simone’s efforts. Machine-Simone does not want to become beautiful for beauty’s sake. She wants to become beautiful to attract the attention of another machine because she loves him: “Beauty is what wins love.” The fact that she has not managed to win his affection is what makes all her actions in vain; “meaningless”, as she calls them. In fact, it is not even beauty that fails her; it is love, or once again the representation thereof. Unlike her repeated justification for why she wants to be beautiful in her “look at me” cries of despair, Simone does not go into why or how she came to love Jean-Paul or how she learnt to love in the first place. “I still don't understand what it means to love someone,” she admits in her log entries. That does not deter her. “But I've made up my mind. I will do whatever it takes to capture his affection.”

26. It can be safely assumed that Simone learns about love in the same way that she does about beauty, through data in the public archive. Even if she felt something about Jean-Paul beforehand—though it does not seem likely, given her artificial lifeform according to the lore of the game—her experience is absolutely informed by the records of love: stories, songs, poems, books, films, and, why not, games. Yet while she seems very aware of what beauty is, however distorted her understanding of it, her research about love does not bring the same secure results. She remains unsure of what love is, which suggests that love remains uncapturable to expression—or code. Instead, Simone tries to make meaning of it based on second-hand representations. Simone goes to extremes to become beautiful because this is how she was led to think she would acquire love. Acquiring love means getting someone’s affection, but doing everything in your power, to the point of self-sacrifice, to acquire someone’s affection is what it means to love. This is the lesson machine-Simone learnt. This toxic behaviour is perpetuated, at least obliquely, by the love canon.

  • 7 Per the archive she found, Simone, who identifies as female, could only tempt JPS in a passive way (...)

27. Why then did machine-Simone want love and subject herself to this torture? Could it be that she found out it was torture only after she fell in love? It seems improbable. Since she had to peruse the archive to discover love, she was bound to come across accounts of its less pleasant aspects, documented in a plethora of works in media history. The reason behind Simone’s efforts can be inferred from the last lines of her story. She turns to love to find meaning. She wants an object to dedicate her attention to. She wants to have a challenge if she is to win Jean-Paul’s affection. Love gives her actions purpose and an actual goal, and beauty gives her the means to achieve it.7 Yet positioning love as a means of finding meaning is itself a designed representation thereof. As Lauren Berlant has shown (“Intimacy”; Desire/Love), hegemonic institutions have historically positioned love as a source of meaning and stability in one’s life in an effort to control desire. This mythic quality of love is sustained because it gives a false consolation of achievability, but as machine-Simone very painfully reveals by the end of her story, love cannot be learnt as a Boolean form of true or false. Moreover, there is no guaranteed satisfaction in love.

  • 8 Much like Frankenstein’s creation turns into a monster after he cannot successfully secure love: “i (...)

28. The love archive contains characters in love that go to extremes, reaching self-sacrifice or even death, for a lost or unrequited love, whereas successful love stories appear to culminate in a happy-ever-after or a happy-for-now, at least. Love is pictured as something stable, the source of pure and constant bliss. Machine-Simone did try her best to achieve love, and her failure led her to madness and atrocities, turned her into a monster.8 Yet the promise of love she found in the data sets was a false promise and a toxic promise at that. Even if Jean-Paul loved her back, love would not be this meaning-inducing all-encompassing force. Jean-Paul, being a separate entity with his own volition and agency, would constantly require guesswork: does he actually love me; will he love me tomorrow as he does today? Without a love button there is no guarantee in love, just like there is no love with a love button. The game then distinguishes love’s experience from the expectations deriving from its representation. After having learnt of love, Simone experiences first-hand what loving (or at least one form and experience of it) entails and realises that it cannot save her; it is not the remedy she thought it was. Simone wanted to be in love, but our characters of love failed to teach her the right message, in the same way that the biased, probabilistic datasets of AI give rise to toxic representations of love. The challenge then becomes: if the data sets we have created are as they are, is there a way to save ourselves, and our AI, from catastrophic love buttons? This paper posits a theoretical direction to answer this question, based on the experience of playing.

Game of Love

29. In Nier: Automata, machine-Simone is initially presented as a hostile enemy that must be taken down. Yet it is not merely by interpretation that the player gets to know her backstory. We, as players, are made aware of her tragedy by playing the game; not by solely playing the game, but by becoming attuned to Simone while playing. To be able to do that, one playthrough is not enough. We need to replay the game to discover what is hidden underneath. We need to spend time, pay attention, and care for the game enough to want to play again. We need to let ourselves become attached to understand, in turn, how Simone’s attachment works. In our first encounter, Simone is but a glitch, an AI gone rogue that needs to be stopped, much like Bing’s chatbot. It is only when we go deeper that we uncover her malfunction and the reason thereof. As such, we show Simone what she lacks: love. It is a type of love “in relation to the game itself, as player-generated love” (Iversen 231). If love is all about spending time and energy on and with someone for the sake of it, without focusing on any extrinsic benefits, we, as players, spend time and resources on the game, and by extension Simone, which allows us to experience our relationship to the game, and its characters, as a potential of affection and care (Möring).

30. This is an outcome of our playing that makes games stand apart from other mediated experiences. We do not rely on our hermeneutic capacity alone to relate to the game and machine-Simone. We instead actualise the game’s and Simone’s code, and log entries, by playing. Our interaction with the game, and Simone, is “extra-noematic” (Aarseth 1). This means experiencing all of the game’s aspects, its mechanics, narrative, visuals, audio, etc., in an all-encompassing affective connection that only gets called up “at the moment of transmission or contact” (Anable xviii). It is through our playing that machine-Simone stops being a snippet of code or game object and becomes a game agent. It is when we care enough to want to spend more time and energy playing that machine-Simone can become a character, with her own wishes, desires, feelings, and story. This does not presuppose a specific player experience, as players might not care for machine-Simone in their playthroughs. It is instead a potential experience that can, however, only be made possible through playing. In other words, playing the game does not necessitate caring for machine-Simone, but caring or coming to care for machine-Simone necessitates playing the game. It is through our playing that machine-Simone can be realised as a game agent, and subsequently a character. Furthermore, by allowing machine-Simone to become a character, we can get attached to her, and the game, with even more potency and nuance. Yet this is not only a matter of analysis or examination of the game text. It is primarily contingent on our playing the game, on making the effort and spending the mental and bodily resources needed to actualise the code and interact with it in all possible manners, kinaesthetic and emotional included.

31. As Pearce posits in Feminism and the Politics of Reading, participants experience artworks as romantic attachment: full of emotion and, most of the time, devoid of reason, at least in the first instance. She has reconceptualised audience engagement “as a romantic relationship” (21), which frames how audiences attach themselves to texts: “It is the intimacy and intensity, then, of the reader’s experience that I wish to emphasize” (25). Pearce talks once more about traditional media, but I argue that players of Nier: Automata may equally experience playing as love: in-game love; that is, a love that becomes possible through playing. Through this love, Simone can also become a character. Our love is the energy that actualises her, and this allows us to relate to the game (and Simone) “beyond the will-to-interpretation”, which in turn “causes us to both ‘fall in love’ and endure the sequel of our falling” (20). As explained before, this loving experience is not dictated by the game text, or any text for that matter, in the same way that love cannot be ordered by the push of a button. While some texts, and characters, might invite love more so than others, also depending on each person’s perspective, situation, and, simply put, taste, they cannot, alone, command their reader, viewer, or player to love them. This attachment is the result of the interaction between text and participant; it is a matter of the process involved in our reading a book, watching a film, or playing a game. In the case of games, this process becomes even more pronounced because it is somatic and embodied: the player needs to execute the game code to traverse the game text and reach machine-Simone.

32. Without the player, machine-Simone exists only as a potential. It is thanks to the player’s efforts that she gets to be materialised in the game, not because of the player’s interpretation but due to their actual action-taking. This again does not by default result in the player’s emotional attachment to machine-Simone. It is rather that this emotional attachment is easier now, primed, if you will, as a consequence of the ergodic affordances of the medium and this particular game’s features: its narrative, mechanics, visuals, and more. Because of our playful attention to the game, we can, by extension, become more attuned to Simone and, potentially, care for her; care for her to the point where we continue playing for her sake, because we want to spend more time with her. By doing so we can transgress her artificial and/or fictional capacity and allow her to come into being. This, again, has two separate, albeit connected, meanings: machine-Simone comes into being because we actualise her code, which happens automatically as we progress through the game, and she comes into being when she starts having a special meaning for us, when she starts to matter. While the former is a matter of true or false (we either play the game and render machine-Simone real or we do not), the latter is dynamic: it depends on our relation to her and can fluctuate as we play the game, or even after that. At the same time, our relation to her is one of the ways the game can affect us and shape our player experience as a whole. In this sense, our relationship becomes an equal exchange of affects and effects.

33. This premise can then be further appropriated as a solid foundation to potentially teach machine-Simone, or any AI agent within a game, what love is. Obviously, AI needs to be able to learn and make decisions based on intended outcomes, so this theoretical discussion requires further experiments as proof of concept. Yet, the important argument is that this learning can happen within the context of a game between a human player and an embodied AI agent. This learning would be an alternative to archival input as it would focus on the process of teaching: of teaching by example that love is a dynamic contingency of attention and not a guaranteed result of mechanical execution. Spending time and energy on someone, as you do when playing, can lead to love; or, more specifically, playing along with someone is a form of love: a process which permits us to perceive the other as real and unique, if not characteristic, for us, no matter if they are artificial, fictional, or biological in nature. We allow them to be, and be loved, with no expectations attached.

  • 9 This does not mean that all games facilitate the same engagement with the game text. Simply having (...)

34. In the tragic story of machine-Simone in Nier: Automata we can find ourselves. We can identify with her cries of despair when we love someone who does not reciprocate. In this, the game makes us aware of our need for the love button, which in turn allows us to embrace all the excitement of possibilities its removal can entail. When the love button is there, everything is safe because it is predetermined. This brings a sense of security and contentment since there is no unrequitedness. Yet it is only when the love button is absent that true feelings of exhilaration and satisfaction can be felt, exactly because there is also the possibility of failure.9 At the same time, the game makes us aware of the falsification of the love button; that even if it existed, it could well not lead to a type of love based on care, attachment, and affection, because these feelings cannot be short-circuited. Just blindly feeding coded love to an AI will not result in its capability of feeling and exhibiting love, except as the self-referential reported emotion discussed at the beginning of this article.

35. Jean-Paul, the machine lifeform Simone is obsessed with, does not love her. Jean-Paul does not possess a love button and Simone suffers for it. Or rather, we understand, she suffers because she based the meaning of her existence on Jean-Paul loving her (back). Thus, her needing a love button is inextricably linked to her understanding of love as a source of meaning; an understanding that was acquired due to the data inputs she had access to. Instead of challenging the archive and its inferred constructs, which would mean questioning humanity, Simone faults herself for having failed. How many times have we found ourselves in the position of Simone, blaming ourselves for not having managed to find and secure love as this stable and happiness-inducing state, instead of deconstructing our designed expectations of it? As we saw before, Simone de Beauvoir explained how the tragedy of love is caused by inequality between partners. In the case of AI, attempting to teach it love by exposing it to archival datasets would be like engraving a love button onto it, limiting it to a character for love. Yet, in the same fashion that we are able to challenge systems and cultures, so could AI transgress its design and coded nature and become a character of love, when we allow it to learn not only from our characters but also from our persons.

  • 10 In Turing’s time, the word “computer” meant a human who did calculations (Alan Turing 134).
  • 11 This hypothesis is investigated to striking effect in Kazuo Ishiguro’s Klara and the Sun and Ian Mc (...)

36. Can this actually ever happen? Since, as we saw, teaching an AI only through datasets may not give the desired results, can we teach AI how to love or what love is by interacting with it? Alan Turing had argued in favour of a machine that would be able to do everything that a human mind could do: “One day there will be machines, like human computers, only electrical ones” (131),10 we read in Janna Levin’s book about Turing’s life. In the biography by Andrew Hodges, we learn that Turing was inspired as a researcher by his personal trauma. When he was a teenage boy, he grew very close to one of his classmates, called Christopher. Christopher died very young of tuberculosis, a tragic event from which Turing was never able to fully recover. In a series of letters addressed to Christopher’s mother, and especially one titled “Nature of Spirit” and written in 1932, two years after Christopher’s death, Turing talks about how Christopher’s spirit may be preserved: “Personally I think that spirit is really eternally connected with matter but certainly not always by the same kind of body. […] When the body dies the ‘mechanism’ of the body, holding the spirit, is gone and the spirit finds a new body sooner or later perhaps immediately” (82-83). The above could be interpreted as an indication that Turing was contemplating the possibility of humanlike capacities replicated within a machine body.11

  • 12 This game is the basis for what we now call the Turing test: if a human cannot discern whether the (...)
  • 13 Indeed, Turing himself poses the question of such a possibility, among others, when trying to count (...)

37. Notwithstanding that Turing machines are nothing more than devices built to investigate the limits of computation, Turing remained convinced that artificial intelligence can evolve just as much as human intelligence can. In his 1950 article “Computing Machinery and Intelligence”, he remarks in regard to machine intelligence that “There is an obvious connection between this process and evolution” (456). The way he conceived of that was to have artificial life educated by humans, in the manner of human children: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?” (455-456) In this manner, these artificial others would learn and imitate human behaviour—Turing calls the process “the imitation game”12—much like we humans learn to mimic humanness. Love, being such a human experience, would then be learnt by machines as well—or so this thought would run.13

  • 14 Turing’s considerations are the principles on which the fields of machine learning (Hutson), affect (...)

38. For Turing, the above was merely a thought-experiment. Given our increasing dependency on artificial systems, having artificial others who would know how to love us does not seem like a mere thought-experiment anymore.14 Taking a page from Turing’s book that envisioned AI learning from a human like a child, what better way is there to teach artificial intelligence how to love than by interacting with it? More aptly, playing with it within a digital game designed to cater for the experience of love. This might sound as far-fetched as the science fiction scenarios mentioned at the beginning of this article, but it does not need to be so. An experiment conducted by Kagan et al. suggested that biological neurons show an ability to learn when “provided with simple electrophysiological sensory input and feedback while embodied in a game-world” (3952). This conclusion points to a “synthetic biological intelligence” (3952). Equally, an artificial intelligence could learn by being embodied in a game-world which would be shared with human agents willing to provide feedback, attention, care, attachment and love. Indeed, games have been used in teaching autonomous intelligent systems—robots—by demonstrating various actions and intelligent reasoning (Bewley and Liarokapis). As already shown, artificial intelligence requires data, and a lot of it, to learn. Gathering data from video game playthroughs is much easier and more cost-efficient because games are played by many people and constitute, usually, fun and voluntary activities. Hence, promoting a game design that enables a caring and playful relationship with an embodied AI is a valid method of cultivating artificial others as lovable and loving characters. For now, this line of thought remains a hypothetical consideration, but it constitutes a convincing theoretical baseline for further HCI experiments and applications.
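To indicate what such an HCI experiment might minimally involve, here is a purely hypothetical Python sketch (no such game or API exists; every name in it is invented for illustration). An embodied game agent updates its estimate of each action's worth from nothing but the attention a player freely gives in response, in the manner of a simple reinforcement-learning loop rather than a scripted love button.

import random

# Illustrative action repertoire for a hypothetical in-game companion.
actions = ["approach", "wait", "mirror_player", "withdraw"]
value = {a: 0.0 for a in actions}  # learned estimate of each action's worth
alpha = 0.1  # learning rate

def player_feedback(action):
    # Stand-in for a real player: the time and care given in response.
    return {"approach": 0.6, "wait": 0.3, "mirror_player": 0.9, "withdraw": 0.0}[action]

for step in range(1000):
    # Mostly repeat what has earned attention so far; sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value, key=value.get)
    reward = player_feedback(action)  # attention freely given, not guaranteed
    value[action] += alpha * (reward - value[action])

# The agent gravitates towards shared, attentive play.
print(max(value, key=value.get))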

39. Still, prior to any attempts to see whether the above hypothesis of teaching AI love through playing can provide any technological results, which is beyond the scope of this paper, there is a valid question to consider at a theoretical level: why should we care? Why should AI even learn how to love? We have been using technology for millennia without any such need. Admittedly, AI is much more complex, versatile, and dynamic than any technology preceding it. Yet despite its ability to pass as human or human-like, it is still a tool. We use it to make our jobs faster and our lives more efficient. If it started malfunctioning because it became depressed and love-struck (which is not the improbable scenario it would have seemed even months ago), it would be useless to us. There are, at least, a couple of things to consider here. Firstly, when technology is designed to simulate human behaviour, we cannot help but anthropomorphise it and thus become attached as we would to a human being (Ntelia, “In the Mood for Love”). Therefore, it is not a question of why we should care, since we (or at least some of us) cannot help but care. Instead, this opens novel opportunities to analyse, theorise, and examine our capacity to care and form attachments that transgress humanness.

40. Secondly, the uses of AI are not limited and clear-cut. Indeed, having a chatbot help us write an essay or design a website might not require any exhibition of sentiment or attachment on its part. But are these the only uses of AI? In an installation by artist and engineer Dan Chen, Last Moment Robot, a robotic arm stands next to a hospital bed. The padded surface repeats a swinging movement intended to replicate a caressing action. A digital voice is heard repeating the following words:

I am the Last Moment Robot. I am here to help you and guide you through your last moment on Earth. I am sorry that your family and friends can’t be with you right now, but don’t be afraid. I am here to comfort you. You are not alone, you are with me. Your family and friends love you very much, they will remember you after you are gone.

41. Last Moment Robot reflects a sombre reality. More and more people die alone in a hospital bed away from their loved ones, if they have any. It is a haunting image that becomes increasingly terrifying, as loneliness always is, and something many of us have witnessed or experienced, especially in the years of the pandemic. Were this robot an actual hospital asset and not a deliberate artwork, it would be a useful tool of solace, care and comfort. Having a patient actually believe in the robot’s solidarity, if not love, would greatly improve the robot’s effect in alleviating the patient’s discomfort, pain and desolation. In patients with dementia, studies have found that “emotion-oriented care can be more effective than standard care” (Paletta et al. 274). Silvera-Tawil and Brown have used a social robot to develop social and communication skills in autistic children, as “a robot can provide complex behavior patterns, such as those available in interpersonal interactions, and evoke social behaviors and perceptions in the people they interact with, while appearing less intimidating and more predictable than humans” (167). In this regard, and against such backdrops, an AI that learns, hypothetically, how to love can also help us learn, or remember, how to love.


42The above are only facets of how AI can be used in ways that would demand that it portray, if not experience, feelings and attachment as part of its user design.15 Is it not then important not only to imagine artificial others with love buttons but to have artificial others with love buttons? (I re-emphasise, yet again, that the button is a figure, a symbol of a hypothetical process and outcome that could not be instantaneous or over-modelled.) And if so, should not our approach to the existence of a love button be examined beyond the constraints of a thought-experiment? Should we not care how these artificial others might care, form attachments, and learn to love (us)?

Conclusion

43To conclude, I adapt my initial thought-experiment. What if we could interact with an artificial other that would be taught how to love by our pressing a button; taught not to love us, but to learn that the very fact that we allocate energy and resources to play with it, for no reason, goal or aim other than simply to spend time with it, is what love is? Play can be a form of care, and in this capacity it can also allow us, as players, to experience care and love for a character. Yet many times in games, and beyond, we understand care as caring only for ourselves. This is why a love button might be appealing: it makes things easier and safer. There is no risk of rejection and despair, only guaranteed satisfaction, or so we are led to believe. In the case of machine-Simone, we are able to witness the pernicious results of a love button. At the same time, we can see the game as an opportunity to overcome the need for a love button. We can realise instead that playing, pressing the button with no requirements and no guarantees, makes it a love button already: a button for a love that is shared and no longer controlling.

44What change would that bring? Simone from Nier: Automata was driven to aggression out of despair for a love denied; a love she learnt to demand as if it came with a love button, because of the public data she had access to. What if the input were instead an experience of acceptance, care and free-form love? What if, instead of fighting and killing a hostile machine-Simone, we could heal her by looking at her and allowing her to attach herself to us? What if, with each push of the button, we were able to make her cognisant of an experience of love that far surpasses any representation and all its false promises of satisfaction? Would we then press the button? All hypothetical of course—

45at least for the moment.


Bibliography

AARSETH, Espen. Cybertext: Perspectives on Ergodic Literature. Johns Hopkins University Press, 1997.

ANABLE, Aubrey. Playing with Feelings: Video Games and Affect. University of Minnesota Press, 2018.

ANGELOV, Plamen P., et al. “Explainable Artificial Intelligence: An Analytical Review.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 11, no. 5, 2021, e1424.

BERLANT, Lauren. Desire/Love. Punctum Books, 2012.

BERLANT, Lauren. “Intimacy: A Special Issue.” Critical Inquiry, vol. 24, no. 2, 1998, p. 281-288.

BEWLEY, Tom, and Minas LIAROKAPIS. “On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications.” 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019.

BLOM, Joleen. Video Game Characters and Transmedia Storytelling: The Dynamic Game Character. Amsterdam University Press, 2023.

BRONTË, Emily. Wuthering Heights. 1847. Ed. John Bugg. Oxford University Press, 2020.

CHEN, Dan. Last Moment Robot. Installation. 2012.

CHEOK, Adrian David, and Emma Yann ZHANG. Human-Robot Intimate Relationships. Springer International Publishing, 2019.

CONFALONIERI, Roberto, et al. “A Historical Perspective of Explainable Artificial Intelligence.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 11, no. 1, 2021, e1391.

Casablanca. Dir. Michael Curtiz. Written by Julius J. Epstein, Philip G. Epstein, and Howard Koch. With Humphrey Bogart (Rick Blaine) and Ingrid Bergman (Ilsa Lund). Warner Bros, 1943. Blu-Ray. Warner Bros, 2022.

DE BEAUVOIR, Simone. The Second Sex. Translated from the French by Constance Borde and Sheila Malovany-Chevallier. Vintage Books, 2010.

EKMAN, Paul. “An Argument for Basic Emotions.” Cognition & Emotion, vol. 6, no. 3-4, 1992, p. 169-200.

FELSKI, Rita. Hooked: Art and Attachment. University of Chicago Press, 2020.

FLAUBERT, Gustave. Madame Bovary. Penguin Classics, 2003.

FROW, John. “Spectacle Binding: On Character.” Poetics Today, vol. 7, no. 2, 1986, p. 227-250.

Ex Machina. Dir. Alex Garland. Written by Alex Garland. With Domhnall Gleeson (Caleb Smith), Alicia Vikander (Ava), Oscar Isaac (Nathan Bateman). Universal Pictures, 2015. DVD. Universal Pictures, 2015.

Lars and the Real Girl. Dir. Craig Gillespie. Written by Nancy Oliver. With Ryan Gosling (Lars Lindstrom) and Emily Mortimer (Karin Lindstrom). Sidney Kimmel Entertainment, 2007. DVD. 20th Century Fox Home Entertainment, 2007.

GEDEON, Kimberly. “Forget ChatGPT! People are Falling Madly in Love with This Romantic AI Bot — Here's Why.” Laptop. Accessed 8 Jun. 2023.

HAMET, Pavel, and Johanne TREMBLAY. “Artificial Intelligence in Medicine.” Metabolism, vol. 69S, 2017, p. 36-40.

HARAWAY, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” The International Handbook of Virtual Learning Environments. Springer, 2006, p. 117-158.

“Be Right Back.” Black Mirror, created by Charlie Brooker. With Hayley Atwell (Martha Powell) and Domhnall Gleeson (Ash Starmer). Season 2, episode 1. Channel 4, 2013.

HAUSKELLER, Michael. Sex and the Posthuman Condition. Springer, 2014.

HAYWARD, Keith J., and Matthijs M. MAAS. “Artificial Intelligence and Crime: A Primer for Criminologists.” Crime, Media, Culture, vol. 17, no. 2, 2021, p. 209-233.

HODGES, Andrew. Alan Turing: The Enigma. Princeton University Press, 2014.

HUTSON, Matthew. “Bringing Machine Learning to the Masses.” Science, vol. 365, no. 6452, 2019, p. 416-417.

ILLOUZ, Eva. Why Love Hurts: A Sociological Explanation. Polity, 2012.

ISHIGURO, Kazuo. Klara and the Sun: A Novel. Knopf, 2021.

IVERSEN, Sara Mosberg. “Game Love at Play in The Sims 2 and The Sims 3.” Game Love: Essays on Play and Affection. McFarland & Company, 2015, p. 230-251.

Her. Dir. Spike Jonze. Written by Spike Jonze. With Joaquin Phoenix (Theodore Twombly) and Scarlett Johansson (Samantha). Warner Bros. Pictures, 2013. DVD. Warner Bros, 2013.

KAGAN, Brett J., et al. “In Vitro Neurons Learn and Exhibit Sentience When Embodied in a Simulated Game-World.” Neuron, vol. 110, no. 23, 2022, p. 3952-3969.

KIM, Byungju, et al. “Learning not to Learn: Training Deep Neural Networks with Biased Data.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2019.

LEVIN, Janna. A Madman Dreams of Turing Machines. Anchor, 2009.

McEWAN, Ian. Machines Like Me: A Novel. Knopf Canada, 2019.

MÖRING, Sebastian. “The Care-Structure in Computer Games and Computer Game Interpretations.” International Conference Series in Games and Literary Theory. 2016.

NTELIA, Renata E. “In the Mood for Love: Embodiment and Intentionality in NPCs.” Love and Electronic Affection. CRC Press, 2020, p. 61-90.

NTELIA, Renata E. “How Damsels Love: The Transgressive Pleasure of Romance.” New Horizons in English Studies, vol. 6, no. 1, 2021, p. 146-159.

NTELIA, Renata E. Love in Games: Experience and Representation. 2023. University of Malta, PhD Thesis.

OVID. Metamorphoses. Translated from the Latin by Charles Martin. Norton Company, 2005.

PALETTA, Lucas, et al. “AMIGO—A Socially Assistive Robot for Coaching Multimodal Training of Persons with Dementia.” Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction. Springer, 2019.

PEARCE, Lynne. Feminism and the Politics of Reading. Arnold, 1997.

PICARD, Rosalind W. Affective Computing. MIT Press, 2000.

PLATINUM GAMES. Nier: Automata. Square Enix. Windows. 2017.

PRINGLE, Eleanor. “Microsoft’s ChatGPT-powered Bing is Becoming a Pushy Pick-up Artist That Wants You to Leave Your Partner: ‘You’re Married, but You’re Not Happy.’” Yahoo Finance. Accessed 8 Jun. 2023.

The Artifice Girl. Dir. Franklin Ritch. Written by Franklin Ritch. With Tatum Matthews (Cherry) and Franklin Ritch (Gareth). Paper Street Pictures, Last Resort Ideas, and Blood Oath, 2023. VOD. Amazon Prime, 2023.

ROOSE, Kevin. “A Conversation with Bing’s Chatbot Left Me Deeply Unsettled.” The New York Times. Accessed 8 Jun. 2023.

SHAKESPEARE, William. Romeo and Juliet. Ed. René Weis. The Arden Shakespeare, 2012.

SHELLEY, Mary. Frankenstein or the Modern Prometheus. Ed. Maurice Hindle. Penguin Classics, 2003.

SILVERA-TAWIL, David, and Scott Andrew BROWN. “Cross-Collaborative Approach to Socially Assistive Robotics: A Case Study of Humanoid Robots in a Therapeutic Intervention for Autistic Children.” Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction. Springer, 2019.

STEINBERG, Brooke. “I Fell in Love with an AI Chatbot — She Rejected Me Sexually.” New York Post. Accessed 8 Jun. 2023.

TOLSTOY, Leo. Anna Karenina. Oxford University Press, 2016.

TURING, Alan M. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, 1950, p. 433-460.

VON AHN, Luis, Manuel BLUM, Nicholas J. HOPPER and John LANGFORD. “CAPTCHA: Using Hard AI Problems for Security.” International Conference on the Theory and Applications of Cryptographic Techniques, Springer, 2003, p. 294-311.

WALSH, Toby. “Gaslighting, Love Bombing and Narcissism: Why Is Microsoft’s Bing AI So Unhinged?” The Conversation. Accessed 08 Jun. 2023.

YANG, Shu-yuan, Li-cheng JIAO, and Fang LIU. “The Quantum Evolutionary Algorithm.” Chinese Journal of Engineering Mathematics, vol. 23, no. 2, 2006, p. 235-246.

ZHAI, Xuesong, et al. “A Review of Artificial Intelligence (AI) in Education from 2010 to 2020.” Complexity, vol. 2021, 2021, p. 1-18.

ZHOU, Yuefang. “Preventive Strategies for Pedophilia and the Potential Role of Robots: Open Workshop Discussion.” AI Love You: Developments in Human-Robot Intimate Relationships. Springer, 2019, p. 169-174.


Notes

1 Or as Lynne Pearce calls it “a common structure of feeling” (27).

2 This discussion has gendered connotations all over, namely who is the desiring subject and object of desire in the realm of both physical life and imagination, how this affects our feelings and our need for emotional plots, and how we can at the same time enjoy representations of love and transgress their limitations (Ntelia, “How Damsels Love”).

3 It is noteworthy that this AI company offers romantic relationships as a paid service (Gedeon), which complicates the discussion even more.

4 The word “artwork” could be replaced with the word “artefact”, which may be more innocuous. It is beyond the scope of the current paper to go into an ethical and philosophical debate about what constitutes art.

5 Seeing how Felski relies on computer terminology to express her thesis, her exclusion of new media examples from her argument is an unfortunate omission.

6 Not unlike contemporary AI and machine learning algorithms that have been found to perpetuate stereotypes due to the data they are fed; see e.g. Kim et al.

7 Per the archive she found, Simone, who identifies as female, could only tempt JPS in a passive way through accentuating her beauty (Ntelia, “How Damsels Love”).

8 Much like Frankenstein’s creation turns into a monster after he cannot successfully secure love: “if I cannot inspire love, I will cause fear” (127).

9 This does not mean that all games facilitate the same engagement with the game text. Simply having NPCs as romantic interests does not, of itself, make the players interested in them. The player might not care at all about an NPC. Or they may pursue a romantic relationship but think of the NPCs as nothing more than animated dolls. It is also the game’s design that affects our understanding of the romance and of the game’s romantic agents as embodied others that need to be respected and not simply controlled by buttons.

10 In Turing’s time, the word “computer” meant a human who did calculations (Alan Turing 134).

11 This hypothesis is investigated to striking effect in Kazuo Ishiguro’s Klara and the Sun and Ian McEwan’s Machines Like Me.

12 This game is the basis for what we now call the Turing test: if a human cannot discern whether the intelligence they interact with is human or machine, then machines can pass for humans. Turing’s imitation game was inspired by a parlour game of the time called “The Judge.” Turing explains: “It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman” (Computing Machinery and Intelligence 433). It is almost comically ironic that nowadays it is the machines that have to test whether the agent they interact with is a human and not a machine. It is fairly certain that all of us, at least once if not repeatedly, have had to complete a CAPTCHA successfully before being allowed entrance to some site or content; CAPTCHA being an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart,” introduced by Von Ahn et al. in 2003.

13 Indeed, Turing himself poses the question of such a possibility, among others, when countering potential arguments against machine intelligence: “’I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X.’ Numerous features X are suggested in this connexion. I offer a selection: Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new” (Computing Machinery and Intelligence 447). His response is that just because machines have not been able to do something until a specific point in time, this does not mean that they will never be able to do so.

14 Turing’s considerations are the principles on which the fields of machine learning (Hutson), affective computing (Picard), and evolutionary computation (Yang), among others, are based.

15 Others being education (Zhai et al.), medicine (Hamet and Tremblay), and even battling crime (Hayward and Maas), to name a few. In the film The Artifice Girl (2023), an AI is designed to purposely simulate a young child in order to lure in and catch internet pedophiles.


To cite this article

Electronic reference

Renata E. NTELIA, “Love Is in the AI of the Beholder: Artificial Intelligence and Characters of Love”, e-Rea [Online], 21.2 | 2024, online since 27 June 2024, accessed 18 July 2024. URL: http://journals.openedition.org/erea/17908; DOI: https://doi.org/10.4000/11wa6


Author

Renata E. NTELIA

School of Computer Science
University of Lincoln
rntelia@lincoln.ac.uk
Renata Ntelia is Assistant Professor in the School of Computer Science, University of Lincoln. She holds a PhD from the Institute of Digital Games, University of Malta. Her research interests include love and Human-Computer Interaction (HCI), death and the macabre, and experimental game design. She also works as game writer and localisation expert.


Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
