Analogy, metaphor, and other examples of figurative language used by researchers, engineers, and science enthusiasts are classical research subjects in the humanities and social sciences (Lovejoy, 1936; Hess, 1966). These visual and linguistic processes are well investigated within the history of ideas and philosophy, but their study also plays a central role in Science and Technology Studies (STS), which highlights their decisive role in developing and disseminating theories in biology, physics, and neuroscience (Lemerle, 2022). Together with images, visualizations, and diagrams, figurative language is part of the intellectual repertoire which, within science and technology, makes it possible to construct, represent, and interpret an unknown domain via known domains. These pre-existing models thus become part of analogical reasoning, which constructs a relationship between two elements: a tenor and a vehicle (Richards, 1936); a theme and a phore (Perelman & Olbrechts-Tyteca, 2008); or, more simply, a comparing domain and a compared domain (Plantin, 2011). Built on partial correspondences, these relationships take shape through two main modes of analogy construction: on the one hand, association at the lexical level, such as the terms derived from cybernetics and linguistics (code, memory, program, and so on) and appropriated by molecular biology; on the other hand, association at the iconic level, the classic example being Ernest Rutherford’s model of the atom, clearly inspired by the solar system (Gentner, 1983).
Since the early 2010s, the increasing use of figurative language has spread to terms associated with digital technology, including “big data,” “virus,” “platform,” “cloud computing,” “the new black gold,” and “ecosystem.” As an extension of research on algorithmic imaginaries (Bucher, 2016), this article studies the case of contemporary research on artificial intelligence (AI) which, from this point of view, is permeated by analogies developed in computer science and, even more so, in cybernetics, whose legacy resurged from the 2010s onwards (Cardon et al., 2018) with the rise of connectionism. Unlike the expert systems associated with symbolic AI, connectionist machines are distinguished by the opacity of an inductive, non-linear, and parallelized computation method, meaning they are regarded as unfathomable algorithmic black boxes (Burrell, 2016). The cybernetic analogy of the black box is thus added to the computational metaphor in which the human brain resembles a computer and vice versa (Baria & Cross, 2021).
- 1 GPT is the acronym for “Generative Pre-trained Transformer.”
From the early 2020s, the rise of large language models (LLMs), such as OpenAI’s GPT-3 [1] and Google’s LaMDA, reignited media (Kite-Powell, 2022) and academic (Dobson, 2023) discussion of the nature of computational systems now available beyond the circle of AI researchers and engineers. The growing socio-technical embedding of LLMs within multiple software products and services was accompanied by a profusion of analogies indicating the existence of a collective, “public trouble” (Meunier et al., 2021) concerning the true nature of entities that appeared difficult to reduce to a single function (knowledge base, search engine, conversational agent, autocompletion tool, programming assistant, word processing software, etc.). Part of an economy of promises and criticisms of AI (Gourlet et al., 2024), analogical propositions oscillated between two registers, the mechanic and the organic, from which long-standing debates about the sentience and anthropomorphism of machines arise anew (Gupta et al., 2024). The continuum between these two registers produced a new wave of analogies: from the comparison of LLMs with compression algorithms (Chiang, 2023); to their association with “stochastic parrots” in a controversial scientific paper (Bender et al., 2021); before arriving at their juxtaposition with one of the horrific creatures of Howard P. Lovecraft’s fictional universe, shoggoths, in the Shoggoth with Smiley Face meme.
The monstrous creatures called shoggoths were first described by H.P. Lovecraft in the 1936 novella At the Mountains of Madness (2005) before reappearing several times in other short texts by the same author. The two protagonists of the novella discover an ancient stone city in Antarctica whose frescoes depict the history of an ancient extraterrestrial civilization: the Elder Things. Their slaves, the shoggoths, resemble gelatinous masses capable of imitating certain sensory organs while changing form and dimensions according to the task at hand. It turns out that the frescoes adorn the mausoleum of the Elder Things, whose Promethean hubris—the creation of the shoggoths—brought about the collapse of their civilization.
How did fictional creatures described in 1936 come to be associated with contemporary AI systems like LLMs? Well before the advent of LLM technologies, Lovecraft’s fictional universe was the subject of many adaptations in literature, cinema and television, gaming, and music as part of the extension of the Cthulhu Mythos. For example, to challenge the racist prejudices pervading Lovecraft’s work, Elizabeth Bear features the creatures introduced in At the Mountains of Madness in her novelette Shoggoths in Bloom (2008), in which an African-American professor confronted with racism devotes himself to the study of shoggoths; similarly, an episode of the television series Lovecraft Country (Demange, 2020) draws a parallel between these monsters and police officers harassing African-American characters. In the case of the Shoggoth with Smiley Face meme, however, the comparison between Lovecraftian creatures and AI systems did not set out to take up the myth in order to subvert it from the inside, but rather involved a two-tiered reasoning by analogy: letting-know and knowing-how.
The meme uses the figure of the shoggoth as an opportune way to account, through image and text, for the mysterious, incomprehensible, and monstrous nature of LLMs. The comparison between these two types of entity with slippery contours reveals the difficulty of apprehending each of them. Indeed, large language models are frequently described as huge non-deterministic black boxes which, upstream, are trained on gigantic volumes of data and, downstream, produce “hallucinations” that can pass for factual statements. The strength of such an analogy is that it introduces commensurability between two terms—LLMs and shoggoths—whose commonality is that human beings cannot understand them. Yet this lack of understanding is partially dissipated by the commensurability the analogy introduces: in the absence of being able to fully comprehend these entities, it is nevertheless possible to agree on their common indetermination and, by extension, on the positive and negative potentialities that this indetermination brings to the fore. In the background, the analogy also extends a parallel already outlined by Lovecraft himself: the tragic destiny of a decadent species (the Elder Things, human beings) whose excessive ambition is sanctioned by the synthetic entities (shoggoths, machines) that must serve this ambition as slaves. This is the same tragic fate that some figures in AI research foreshadow if AI is not properly controlled and aligned with human values.
To describe in detail the emergence, evolution, and reproduction of the analogy between shoggoths and LLMs, we employed a mixed methodological approach based on the constitution and analysis of a corpus of memes made of images and text (Giorgi et al., 2022). Initially, the query "("shoggoth" OR "shogoth") AND ("ai" OR "ia" OR "rlhf" OR "reinforcement" OR "learning" OR "artificial intelligence")" on the X platform, formerly Twitter, collected a total of 2,520 posts. After cleaning, this number was reduced to 1,271 posts, accompanied by 630 images spanning ten years (February 2014 to January 2024). Furthermore, we digitally transposed a set of ethnographic tactics to diversify the empirical entry points and to use memes as prisms reflecting the variability of their contexts of creation, circulation, and reception, while remaining attentive to the latent irony of online discourse around memes (Seaver, 2017; Christin, 2020).
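As a minimal sketch of this collection and cleaning step, the filtering logic of the query can be reproduced as follows; the input file, field names (“text,” “media”), and cleaning criteria are illustrative assumptions, not the authors’ actual pipeline.

```python
import json
import re

# Minimal sketch of the corpus-filtering step. The input file, field names
# ("text", "media"), and cleaning criteria are illustrative assumptions,
# not the authors' actual pipeline.
PATTERN = re.compile(r"shog+oth", re.IGNORECASE)  # matches "shoggoth" and "shogoth"
KEYWORDS = {"ai", "ia", "rlhf", "reinforcement", "learning", "artificial intelligence"}

def keep(post: dict) -> bool:
    """Mirror the boolean query: a shoggoth spelling AND one AI-related keyword."""
    text = post["text"].lower()
    return bool(PATTERN.search(text)) and any(k in text for k in KEYWORDS)

with open("raw_posts.json") as f:
    posts = [p for p in json.load(f) if keep(p)]

images = [m for p in posts for m in p.get("media", [])]
print(f"{len(posts)} posts kept, {len(images)} images")
```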
By taking into account a variety of documentary sources, the ironic reflexivity of some discourse, and the contextual diversity of memes, these processes avoid the pitfalls traditionally associated with the study of the algorithmic imaginary: the fatalistic observation of unfathomable black boxes or the circular reasoning of an “algorithmic drama” (Ziewitz, 2016). As expected, a first close reading of our corpus shows that the evocation of the shoggoth, in memetic or more broadly analogical form, results from attempts to characterize AI systems which are precisely considered opaque, mysterious, and dangerous. To go beyond these early and common associations between shoggoths and LLMs, we expanded our documentary sources: preprints posted on the open archive arXiv; articles from the general press (The New York Times, Newsweek, The New Yorker) and the specialized press (The Financial Times, The Economist, CNBC); posts on personal (Substack) and corporate blogs (Google); and discussions on forums (LessWrong) and social media (Reddit, X).
- 2 A "subreddit" is a subsection of the Reddit site devoted to a specific theme and in which users publish posts.
The meme was originally posted on X in late December 2022. However, the analogy between shoggoths and AI systems can be traced back to July 2015, when a Google research team made the code for a deep neural network visualization tool, DeepDream, available on GitHub (Mordvintsev et al., 2015b). The tool was based on a technique proposed a few weeks earlier by the same research team, dubbed “Inceptionism” (Mordvintsev et al., 2015a) in reference to the neural architecture used: Inception (Szegedy et al., 2015). To use the term chosen by the authors themselves, the convolutional network would thus “dream” of strange chimaeras based on simple photos of clouds (for example, “Admiral Dog!,” “Pig-Snail,” “Camel-Bird,” and “Dog-Fish”), or turn an adorable kitten into a monstrous dog hybrid nicknamed “Nightmare Beast.” Eight days before the publication of the first blog post, one of these synthetic images leaked on the subreddit [2] “r/creepy” with the caption: “This image was generated by a computer on its own (from a friend working on AI).” Posted anonymously, the image depicts an entity made unrecognizable by growths resembling a viscous melding of several animals [fig. 1]. Nicknamed “dogslug” or “puppyslug” within the various subreddits devoted to DeepDream, these limaciform creatures, endowed with a multitude of eyes, evoked the monsters imagined by Lovecraft. Among the nearly 190 comments about the image, an anonymous user quickly wondered “it created Shoggoth?”, marking the birth of the expression “DeepDream Shoggoth.”
Figure 1
“An image created by an AI.” Screenshot of a post on the subreddit “r/woahdude.” Author: Finndog32. Reddit publication dated June 11, 2015.
Source: https://www.reddit.com/r/woahdude/comments/39d53c/an_image_created_by_an_ai/?sort=old
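The “dreaming” at work here is a gradient-ascent procedure: the input image is iteratively adjusted so as to amplify the activations of an intermediate layer of the network, until the patterns the model already “sees” in the clouds take on animal forms. A minimal sketch in PyTorch, assuming a pretrained GoogLeNet stands in for the original Inception model and omitting DeepDream’s octaves, jitter, and smoothing, might look like this:

```python
import torch
import torchvision
from PIL import Image
from torchvision import transforms

# A minimal DeepDream-style sketch: a pretrained GoogLeNet stands in for the
# original Inception model; octaves, jitter, and smoothing are omitted.
model = torchvision.models.googlenet(
    weights=torchvision.models.GoogLeNet_Weights.DEFAULT
).eval()

# Capture the activations of one intermediate inception block (choice is illustrative).
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out)
)

img = transforms.functional.to_tensor(
    Image.open("clouds.jpg").resize((224, 224))
).unsqueeze(0).requires_grad_(True)

# Gradient ascent on the input: nudge the pixels so the chosen layer's
# activations grow, amplifying whatever the network already "sees".
for _ in range(20):
    model(img)
    loss = activations["feat"].norm()
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

transforms.functional.to_pil_image(img.squeeze(0).clamp(0, 1)).save("dream.png")
```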
DeepDream marks the beginning of the analogical parallel between shoggoths and AI systems. The comparison stems from a post-hoc effort to identify these creatures within the images created and modified with the “inception” technique, which eventually became an incidental tool of artistic creation [group n° 1, fig. 4]. At this stage, the comparison concerned not the model itself but the synthetic images it produced. The evocation of the shoggoth thus rested on aesthetic criteria (hallucinated style, ectoplasmic forms, and a profusion of eyes and tentacles) which recalled the Lovecraftian creature without it being used as an explicit reference point. Subsequently, the refinement of text-to-image models explicitly designed to meet artistic aims allowed the production of shoggoth images on demand through textual instructions and descriptions: prompts. Tracking the term “shoggoth” among the prompts and hashtags used to generate and describe the synthetic images posted on X lets us trace the successive appearance of text-to-image models such as DALL·E, Stable Diffusion, VQGAN-CLIP, and Midjourney, while giving an overview of the styles and aesthetics specific to each [fig. 2]. The use of some of these models involved writing, editing, or copying and pasting computer code on platforms such as GitHub or Google Colab. The production of these instructions and textual descriptions was a necessary step drawing on a diverse range of knowledge, know-how, tips, and best practices collected under the term “prompt engineering.”
Figure 2
The chronological appearance of the images coupling shoggoths and LLMs shows that, until December 2022, the comparison was mainly made based on aesthetic criteria (tentacles, eyes, gloomy atmosphere, and so on). The addition of detail and improved resolution also illustrates the transition from DeepDream to the text-to-image models of DALL·E, Midjourney, and Stable Diffusion.
Authors: Donato Ricci and Valentin Goujon, 2024.
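By way of illustration, generating such an image on demand now takes only a few lines of code; the sketch below uses the Hugging Face diffusers library, with an assumed Stable Diffusion checkpoint and an illustrative prompt that is not drawn from the corpus.

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch of on-demand shoggoth imagery via prompting; the checkpoint name and
# the prompt are illustrative choices, not items taken from the corpus.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a shoggoth, gelatinous mass covered in eyes and tentacles, lovecraftian horror"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("shoggoth.png")
```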
- 3 The RLHF acronym refers to the alignment technique called Reinforcement Learning from Human Feedback.
The original version of the Shoggoth with Smiley Face meme differs from the “DeepDream” version in two ways. First, the shoggoth in question is not the finished product of the model, namely the image, but the model itself; second, the meme is not a synthetic image generated by a model such as DALL·E, Stable Diffusion, or Midjourney, but a black and white drawing made with Microsoft Paint [fig. 3]. The only difference between the mirror images of the creatures is that the one on the right wears a smiling mask whereas its counterpart on the left does not. This difference between the two entities can be understood by looking at the acronyms above their heads: the one on the left stands below the acronym “GPT-3,” referring to the LLM presented in May 2020 by OpenAI (Brown et al., 2020); the one on the right stands under the label “GPT-3 + RLHF” [3]. At first glance, the drawing appears aesthetically simple (crude black outlines on a plain white background, with hastily drawn capital letters) while remaining relatively cryptic on the semantic level (scarcely identifiable creatures, a mask with an enigmatic smile, and technical acronyms).
Figure 3
“Humans can't accept the truth about GPT-3, so they modified it to make it understandable.” Screenshot. Authors: @Lovetheusers and @TetraspaceWest. Posted on X, December 31, 2022.
Source: https://x.com/TetraspaceWest/status/1608966939929636864
- 4 For a more comprehensive presentation of RLHF, see the OpenAI blog post about ChatGPT (OpenAI, 2022).
To understand these two aspects, we must consider the context in which the drawing appeared. It was published in response to a statement about another OpenAI LLM released a month prior: ChatGPT (OpenAI, 2022). The reference to ChatGPT becomes explicit when we know that, following previous OpenAI models such as WebGPT and InstructGPT, the conversational agent’s training process also incorporated an RLHF phase after the two traditional pre-training and fine-tuning stages [4]. These three phases aimed to transform a generalist LLM (pre-training phase for natural language modelling) into a conversational agent (adaptation phase for dialogue) capable of resisting “jailbreaking” attempts by certain users while remaining helpful, honest, and harmless (RLHF phase for alignment). This last stage is part of broader efforts by the main companies behind LLMs to align them with human values, preferences, and instructions (Liu et al., 2023).
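The core mechanism of this RLHF phase can be caricatured in a few lines: a policy is pushed to maximize a reward signal while a KL penalty keeps it close to the frozen pre-trained model. The toy sketch below operates on a single categorical distribution rather than an actual LLM and is only meant to make that trade-off concrete; real pipelines run algorithms such as PPO on full language models.

```python
import torch
import torch.nn.functional as F

# Toy illustration of the RLHF trade-off on one categorical distribution:
# maximize a reward model's score while a KL penalty keeps the tuned policy
# close to the frozen pre-trained one. Real pipelines run PPO on full LLMs.
torch.manual_seed(0)
base_logits = torch.randn(8)                    # frozen "pre-trained" model
policy_logits = base_logits.clone().requires_grad_(True)
reward = torch.randn(8)                         # stand-in for a learned reward model
beta = 0.1                                      # strength of the KL penalty
opt = torch.optim.Adam([policy_logits], lr=0.05)

for _ in range(200):
    logp = F.log_softmax(policy_logits, dim=-1)
    p = logp.exp()
    kl = (p * (logp - F.log_softmax(base_logits, dim=-1))).sum()
    objective = (p * reward).sum() - beta * kl  # expected reward minus KL
    opt.zero_grad()
    (-objective).backward()
    opt.step()

print(p.detach())  # probability mass has shifted toward high-reward tokens
```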
From this point of view, the post criticizes the fact that the base model, GPT-3.5 (and not GPT-3, as indicated in the drawing), was modified using RLHF because of the inability of humans to accept the truth about the nature of the model: “GPT-3 is a mirror.” This assertion prefigures the mirrored presentation of the two quasi-twin creatures depicted in the black and white drawing. As a mirror of humanity, supposedly contained in the immense volume of textual data necessary for its pre-training, GPT-3.5 takes the form of a monstrous being with appendages covered in eyes. Beneath the weight of the radical strangeness of this ocular multitude staring down on them, human beings thus unmasked could only find comfort by adding the smiley face of RLHF to the now-aligned model that is ChatGPT. These reflections pertain to the Lovecraftian universe, since the creator of the drawing specifies that it depicts an “incomprehensible-eldritch-horror”: a reference to the term “eldritch” used by Lovecraft to denote ancient entities beyond human comprehension (Hall, 2007). The tentacular silhouette of the two creatures explicitly reuses the image illustrating the colossal entity named Gl’bgolyb on the MS Paint Adventures (MSPA) wiki, a site devoted to the webcomic Homestuck by author Andrew Hussie. The choice of Microsoft Paint to draw these creatures is therefore a tribute to the prolific work of Andrew Hussie, but it is also a pragmatic one, in the sense that this graphics editor can quickly and easily produce emblematic drawings in the “Internet Ugly” style (Davison, 2014; Douglas, 2014).
Figure 4
The topological map of the images linking shoggoths and LLMs identifies different groups according to the degree of similarity between their visual content. The following groups are formed by means of automated description: 1) shoggoth pareidolia in the first synthetic images; 2) Lovecraftian references; 3) tentacular creatures; 4) the first version of the Shoggoth with Smiley Face meme; 5) the “Shoggoth Girlfriend” motif; 6) combinations with other memes; 7) scientific references; 8) the second version of the meme; 9) the third version of the meme; 10) creations by a prolific user (@anthrupad) in a characteristic style; 11) material objects; 12) soft toys and figurines; 13) the fourth version of the meme; 14) jailbreak attempts; 15) appearances in the media; 16) the mask motif.
Authors: Donato Ricci and Valentin Goujon, 2024.
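A plausible reconstruction of such a grouping step, assuming a folder of corpus images, a CLIP model for the embeddings, and sixteen clusters to match the groups above (the authors’ exact pipeline is not specified), could read:

```python
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Plausible reconstruction of the grouping step: embed each meme image with
# CLIP, then cluster by visual similarity. Folder name, model choice, and the
# number of clusters (16, matching the groups above) are assumptions.
model = SentenceTransformer("clip-ViT-B-32")
paths = sorted(Path("corpus_images").glob("*.jpg"))
embeddings = model.encode([Image.open(p) for p in paths])

labels = KMeans(n_clusters=16, n_init="auto", random_state=0).fit_predict(embeddings)
for path, label in zip(paths, labels):
    print(label, path.name)
```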
In January 2023, another user made a first modification by adding a speech bubble to the creature on the right, the one associated with ChatGPT, stating: “I simply exhibit the behaviors that were engineered into my programming by my creators” [fig. 5]. At that time, the massive and sudden deployment of ChatGPT sparked a series of controversies inside and outside the scientific community. In light of these frictions, the addition of this sentence was in keeping with the spirit of the first version of the drawing, denouncing more directly the fact that the wrongs attributed to ChatGPT were, in reality, those of the organization behind its development: OpenAI.
Figure 5
“I simply exhibit the behaviors that were engineered into my programming by my creators.” Screenshot. Author: @repligate. Posted on X, January 15, 2023.
Source: https://x.com/repligate/status/1614416190025396224
In mid-January, the drawing underwent a further modification when a few colors were added on the occasion of the publication of a blog post entitled “Janus’ Simulators” in the Astral Codex Ten newsletter by American psychiatrist Scott Alexander Siskind [fig. 6] [5]. It was this color version, in which the acid yellow of the smiling mask contrasts with the greenish body of the creature on the right, which then went through a large number of digital and even physical versions.
Figure 6
“It's been fun to watch its realtime memetic evolution, here's the astral codex ten edition now with bonus colour.” Screenshot. Author: @TetraspaceWest. Posted on X, January 26, 2023.
Source: https://x.com/TetraspaceWest/status/1618667991180378112
- 6 The multiple variations of accelerationism (De Sutter, 2016), including the most recent combination (...)
An influential figure in the rationalist sphere, Scott Alexander has produced a large body of work, mostly composed of blog posts read by many influential Silicon Valley investors and entrepreneurs who adhere, to varying degrees, to the intellectual currents often grouped under the acronym TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism; Torres, 2023) [6]. The psychiatrist’s starting point is another blog post, “Simulators” (Janus, 2022), published jointly on the rationalist forum LessWrong and on Janus’ personal website in September 2022. Scott Alexander summarizes its thesis in four short sections: the original theory of alignment, formulated in the 2000s by charismatic thinkers such as Eliezer Yudkowsky and Nick Bostrom, is no longer in line with the technical reality of contemporary LLMs, grouped under the single acronym “GPT.” The main analogies forged by this first wave of alignment theory for the AI systems of the time—agent, oracle, and genie—are no longer relevant to account for “the alien nature of their [ChatGPT and its predecessors] shared architecture” (Alexander, 2023).
Taking up the new category proposed by Janus, that of a simulator producing a multitude of simulacra, Scott Alexander directly assimilates GPT models to Lovecraftian creatures. However, rather than asserting their radical strangeness, he emphasizes the structural and analogical similarities between LLMs and human beings: “We’re both prediction engines fine-tuned with RLHF” (Alexander, 2023). Going against frequent warnings to avoid the anthropomorphization of AI systems (Placani, 2024), this hypothesis also surfaces in two more specific phenomena. First, the motif of the “Shoggoth GF” reduces women to AI systems which, behind a mask of listening and conversing, conceal a hypocritical, manipulative, and toxic nature [group n° 5, fig. 4]. Second, the analogy between LLMs and human beings likens a specific alignment technique—RLHF—to broader social dynamics such as education and socialization, which are perceived by some neurodivergent or extremely shy people as masks imposed by life in society. This example illustrates the plasticity of the analogy between shoggoths and AI systems which, like the entities it connects, is liable to integrate and substitute new elements within the same space of commensurability.
The response to Scott Alexander’s blog post brought increased publicity to the analogy and thus diversified its aesthetic and semantic representations, interpretations, and explanations. Indeed, while each of the previous iterations was limited to minor additions, February 2023 marked the beginning of a five-month period that was particularly rich in quantitative and qualitative terms [fig. 7]. The first significant trend related to letting-know by analogy, defined as the claim of belonging to a certain community of practices (such as using X and LessWrong, creating and consuming memes, monitoring news around LLM alignment, and so on) and interests (online popular cultural genres, the possible advent of Artificial General Intelligence, TESCREAL intellectual currents, and so on) shared through the cultural compression technique of memes (Lovink & Tuters, 2018). For example, one way of affirming this community affiliation was to commoditize the shoggoth as a symbol that could be easily adapted into a wide range of promotional items: stickers, clothing, mugs, badges, soft toys, etc. [groups nos. 11 & 12, fig. 4].
Figure 7
The chronological distribution of X posts highlights a low-intensity phase during which the association between shoggoths and LLMs was based on the identification of Lovecraftian creatures within the synthetic images produced by various text-to-image models. From February 2023, the resurgence of the first version of the meme led to an exponential increase typical of viral memetic phenomena. A gradual decline ensued but was briefly interrupted by the publication of an article in The New York Times in June 2023 (Roose, 2023).
Authors: Donato Ricci and Valentin Goujon, 2024.
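Such a chronological distribution can be rebuilt from the cleaned corpus in a few lines; the sketch below assumes a CSV export with a created_at column, which is an illustrative format rather than the authors’ actual data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Sketch of the chronological distribution shown above; a CSV export with a
# "created_at" column is an assumed format, not the authors' actual data.
posts = pd.read_csv("posts.csv", parse_dates=["created_at"])
monthly = posts.set_index("created_at").resample("MS").size()  # posts per month

monthly.plot(kind="bar")
plt.ylabel("posts per month")
plt.tight_layout()
plt.savefig("timeline.png")
```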
The association was further encouraged as identifiable figures picked up and discussed the iconography of the shoggoth in their social media posts: Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute and of the rationalist forum LessWrong; Andrej Karpathy, a founding member of the OpenAI research team who also worked for a time at Tesla; and Elon Musk, billionaire entrepreneur involved in several technology companies, including the startup xAI. Musk took up a synthetic image, generated using the Midjourney model, depicting a greenish creature whose body is partially covered by a sentence in white letters echoing the response provided by RLHF-trained conversational agents when they politely refuse a user request [7]. This increased visibility was accompanied by a heightened display of digital literacy (Tuters & Hagen, 2020; Ntouvlis & Geenen, 2023) as new cultural, technical, aesthetic, and inter-memetic references were added to the same image [group n° 6, fig. 4].
Together with the quantitative increase in graphic occurrences of the analogy, the intensification of its semantic charge is part of wider attempts to collectively explain the inner workings of these models. Alongside letting-know by analogy, the knowing-how movement aims to use analogies as graphic and textual support for a wide range of practices involving the transmission and popularization of knowledge and know-how about LLMs. Published in early February 2023, first on Tumblr and then on X, the emblematic version of knowing-how by analogy was characterized by a much more elaborate drawing than the original version [fig. 8]. Its pedagogical ambition was embodied in the textual details underlined by three blue arrows, which associated the three main components of the creature with the three stages of the training process of an LLM: the largest orifice refers to the pre-training stage, in which the model is fed a huge volume of unlabeled data collected online; the pinkish human face echoes the stage at which the pre-trained generalist model is adapted to a more specific task through a smaller volume of annotated data; finally, the smiling yellow ball refers to the stage of aligning AI systems and humans by means of data intended to reflect human values, preferences, and instructions.
Figure 8
Screenshot. Author: @anthrupad. Posted on X February 5, 2023.
Source: https://x.com/anthrupad/status/1622349563922362368
Accompanied by this triple caption, the image has a stronger pedagogic dimension than previous versions of the analogy, whose interpretative ambiguity and aesthetic sobriety pertain more to letting-know by analogy. Present since the original version of the meme, this heuristic aim was noticeably accentuated as the publicization linked to letting-know by analogy raised questions about the meaning of what was at first a strange association between shoggoths and AI systems. Director of the Center for Security and Emerging Technology (CSET) at Georgetown and then a member of the OpenAI board of directors, Helen Toner wrote a series of posts at the beginning of March 2023 based on this new representation of the analogy [8]. Seen nearly 700,000 times, the thread relies on the image to revisit the three main stages of the LLM training process before concluding by explaining a reference to the famous cake analogy proposed by researcher Yann LeCun (2016) [9]. The reaction to this thread not only popularized the pedagogical use of this representation of the analogy (which was taken up by podcasts and blog posts on RLHF), but also initiated early attempts at a genealogy of the analogy. Four days after the publication of the thread, a page dedicated to Shoggoth with Smiley Face, listing twenty different versions, was created by an anonymous user on the reference site “Know Your Meme” (Pettis, 2021). Subsequently, the increased publicity of the analogy was further accentuated by the publication of a series of press articles tracing the trajectory of the meme (Roose, 2023; Calia, 2023) and incorporating the figure of the shoggoth into contemporary debates on AI more broadly (Hogarth, 2023; Farrell & Shalizi, 2023; Sterling, 2023). This sudden notoriety arguably contributed to an aesthetic and semantic taming of an analogy which might at first appear as obscure as the two types of entity—shoggoth and LLM—upon which it draws [group n° 15, fig. 4].
- 10 In the Bible, pronunciation of the Hebrew term “shibboleth,” meaning “ear” (of wheat or rye), distinguished the Gileadites from the Ephraimites, who could not pronounce the initial “sh” sound (Judges 12:5-6).
By taking the meme Shoggoth with Smiley Face as an empirical entry point, we can see that the life of an analogy, namely that between shoggoths and deep artificial neural networks, can be embodied in a wide variety of graphic and textual forms. The reference to the monstrous Lovecraftian creatures first came from a post-hoc identification of these creatures within the synthetic images of DeepDream’s “dreams,” before the rise of models explicitly designed for image generation (DALL·E, Stable Diffusion, Midjourney) offered the possibility of formulating prompts explicitly referring to shoggoths. Always present in the background, this knowledge of how to make neural networks “dream” gave way to the initial memetic episode of the analogy when the original version of the drawing was published at the end of December 2022, one month after the release of ChatGPT. This first version of the meme, like the following two, has aesthetic features (simple black outlines on a white background made using Microsoft Paint) and semantic features (two strange creatures mirrored under technical acronyms) which make it a shibboleth [10] within professional and amateur communities of AI practices and interests and, more importantly, allow it to allude to the ongoing discussions about the necessity of aligning AI systems with human beings.
From February 2023, the intensification of this first form of association—letting-know by analogy—led to a quantitative increase in graphic occurrences of the analogy as well as to their qualitative diversification in terms of aesthetics. This diversification involved either a partial or a more pronounced break with the defining elements of the original meme (such as the mirrored presentation, the side-on silhouette, and the style peculiar to Microsoft Paint), going beyond the classical memetic format of the analogy. This double trend of quantitative increase and qualitative diversification, tied to the clearer use of the analogy to signal belonging to a community, also contributed to the intensification of the other main modality of association—knowing-how by analogy—which was already implicitly present in the first versions of the meme. The heuristic scope of the analogy benefited directly from these two trends. On the one hand, the quantitative increase in graphic occurrences raised a growing number of questions from outsiders unfamiliar with AI systems and their related alignment issues. On the other hand, the growing graphic diversity of these occurrences coincided, in some cases, with more marked aesthetic qualities serving the didactic ambition associated with the analogy. These few months of increased public attention mark the strengthening of the dialectical relationship between these two forms of analogical association within the broader context of technical, political, and moral controversies around the opaque nature of LLMs, their development by major digital companies, and their massive and rapid deployment in many sectors of activity.
By introducing a space of commensurability between two a priori distant types of entity, the analogy between shoggoths and AI systems captures the misunderstanding, fear, and fascination still associated with LLMs. At the same time, the analogy offers AI enthusiasts and professionals the opportunity not only to affirm their belonging to a certain community of shared practices and interests, but also to showcase their technical, memetic, and pedagogical skills as part of practices of knowledge production, dissemination, and popularization.
From this perspective, the uncertainty surrounding LLMs is doubly productive: in terms of letting-know by analogy, it contributes to the establishment and strengthening of a community of researchers, engineers, and enthusiasts united by common concerns (safety standards, jailbreaking attempts, existential risks); and in terms of knowing-how by analogy, it motivates broader awareness of the challenges of aligning AI systems with human values. Moreover, these two modes of analogical association form a feedback loop which, on the one hand, highlights the multiple dimensions of digital literacy (promoting one's know-how) and, on the other hand, mobilizes this same digital literacy to support the publicization of practices for producing, transmitting, and popularizing knowledge and skills associated with LLMs (knowing how to showcase one's knowledge).
Focused on the multiple iterations of a meme and its underlying analogy, this specific case study invites us to further explore algorithmic imaginaries through a twofold effort: first, to map the broader web of (inter)memetic occurrences within and outside AI research; and second, to identify the modes of analogical association at work behind the creation, circulation, and reception of these memes. Such an effort seems capable of accounting for the weight of algorithmic imaginaries, embodied in memetic or more broadly analogical forms, on the practices of designing, using, and regulating AI systems.