Dossier

Introduction. Photographs and Algorithms

Estelle Blaschke, Max Bonhomme, Christian Joschke and Antonio Somaini
Translated by Simon Cowper
This article is a translation of:
Introduction. Photographies et algorithmes [fr]

Abstract

This issue of Transbordeur explores the reciprocal relationship between photography and algorithms—artificial intelligence (AI) models in particular—whether for the purposes of analyzing, modifying, or generating images. The issue’s introduction presents an overview of research conducted since the 1960s at the intersection of computing, computer-generated imagery (CGI), and artistic experimentation, delineating an archaeology of the algorithmic image. We look at how photography itself has been shaken to its core by the computational turn, in terms of both its practices and its ontology, and examine the fundamental role that photographic images have played as a reference point in the design of analytical and generative AI models, with “photorealism” as the goal on the horizon.

Editor’s notes

Some images in the digital edition of this article were removed due to unresolved reproduction rights.

Full text

Over the last few years, there have been profound changes in how photographic images are captured, processed, shared, circulated, and viewed. This shift is predicated on the integral presence and agency of artificial intelligence (AI) algorithms and models, which must be taken into account if we are to understand the mechanisms involved. This issue of Transbordeur takes stock of this new situation the photographic image finds itself in, with some articles focusing on contemporary manifestations of the transformations that have taken place and others adopting an archaeological approach. Our idea is to use this dual perspective to analyze the way in which these recent developments are prompting renewed thought about a whole series of issues that cut across the history, practices, institutional frameworks, and theories of the photographic medium. A better understanding of these transformations in the realm of photography will also contribute to current discussions of broader issues relating to the nature of our interactions with the algorithms that increasingly structure our social lives, as well as the politics, ethics, and visual economies of AI.

From a historical point of view, the relationship between photography and AI seems to have developed as one of reciprocation. On the one hand, vast quantities of photographic images—produced with the intention of creating datasets or taken from the internet (websites, social networking platforms, stock image databases)—have, since the 1990s, played a fundamental role in training AI algorithms and models to carry out computer vision (or machine vision) tasks. These algorithms can be used to detect, recognize, and classify the faces, bodies, objects, and places depicted in images, or to generate and modify images on the basis of textual prompts, combinations of prompts and images, and sometimes just images.

On the other hand, a number of AI algorithms and models have profoundly transformed photographic practices and are even helping to modify our idea of what “photography” is, whether it be when algorithms intervene at the precise moment the shot is being taken, when recommendation systems influence the transmission, circulation, and reception of images across platforms and networks, or when models generate photorealistic images that closely resemble photographs even though they are not the product of any form of optical capture.

This twofold dynamic, this form of reciprocation—or Wechselwirkung, to use a term proposed by sociologist Georg Simmel1—can be effectively summed up in terms of a dual concept: photography in AI, AI in photography.

Photography in AI

The use of photographic images to put together image datasets designed to train computer vision and face recognition systems dates back to the mid-1960s, just a few years after Frank Rosenblatt invented the Perceptron (1957) on the basis of research carried out by Warren McCulloch and Walter Pitts.2 This was a supervised machine learning algorithm, structured like an “artificial neuron,” which was used for computer vision tasks (more specifically, the recognition of characters like the letter “C”) with the Mark I Perceptron in 1960 (fig. 1). Other initiatives at that time targeted the automatic processing of images. As Frances Cullen’s article (pp. 68–79) points out, NASA launched research into the development of computational methods for algorithmic image processing in order to improve the photographs of the surface of the moon taken by automated devices during the Surveyor program.
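The logic of this early “artificial neuron” can be conveyed in a short sketch (a toy illustration under our own assumptions: the Mark I was analog hardware, and the four-pixel “images” below are invented). A weighted sum is thresholded, and the weights are nudged whenever the prediction is wrong, which is Rosenblatt’s learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": four binary pixels; label 1 when the first pixel is lit
# (a stand-in for recognizing a simple character such as the letter "C").
X = rng.integers(0, 2, size=(200, 4)).astype(float)
y = (X[:, 0] > 0).astype(float)

w, b = np.zeros(4), 0.0
for epoch in range(10):
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0   # thresholded "artificial neuron"
        w += (yi - pred) * xi                    # Rosenblatt's error-driven update
        b += (yi - pred)

preds = (X @ w + b > 0).astype(float)
print(f"training accuracy: {(preds == y).mean():.2f}")
```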

1. Frank Rosenblatt and the Mark I Perceptron, 1960

Washington DC, National Museum of the US Navy.

In 1964, Woody Bledsoe, Helen Chan, and Charles Bisson embarked on a series of experiments with the “man-machine” system developed at Panoramic Research in Palo Alto, California (fig. 2).3 The name of the system, which was tested on a database of two thousand pictures of faces (mostly ID photos), derived from the need for preliminary human intervention: the main coordinates of a face depicted in a photograph had to be plotted by hand before the picture could be submitted to a computer, which then compared it with other photographs.

2. Main coordinates of a face taken from a photograph for Woody Bledsoe’s “man‑machine” system, n.d.

Photograph by Dan Winters, ca. 2020.

© Dan Winters Photography

Thirty years later, in 1993, still in the United States, the Defense Advanced Research Projects Agency (DARPA) and the Army Research Laboratory (ARL) launched the FERET (FacE REcognition Technology) program, accelerating the use of photographs in the development of face recognition systems.4 The program aimed to develop applications that could be deployed not only in the military arena but also more widely across the public and private domains.

The systematic production of photographic images for use in training face and emotion recognition systems continued through the second half of the 1990s and the 2000s. An example here is the JAFFE (Japanese Female Facial Expression) dataset, presented to the public in 1998, which contained 213 photographs depicting seven basic facial expressions posed by ten Japanese women (fig. 3).5 At the same time, the development of computer vision systems continued in the private sector, particularly in companies running large databases of stock images, where these systems were trained to detect, recognize, and classify objects, places, bodies, and faces in huge datasets of connected images and text (captions, descriptions, technical information).6 In 2011, Google entered the field with the introduction of the “Search by Image” function, which allows users to take a particular image as a starting point and then search for similar images. The addition of Google Lens, launched in 2017, made it possible for computer vision to be integrated into the platform’s search and recommendation functions.

3. Images extracted from the JAFFE (Japanese Female Facial Expression) dataset

Based on research conducted by Michael J. Lyons et al., “Coding Facial Expressions with Gabor Wavelets,” Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition (1998): 200–205. The images are available here: https://zenodo.org/records/3451524

A major turning point in the history of the use of photographic images in datasets to train computer vision systems came at the end of the 2000s, when, rather than producing images, researchers and private companies began taking them, in very large quantities, directly from the internet. The example of ImageNet is emblematic of this. Put together in 2009 by Fei-Fei Li and her team at Stanford University, this dataset played a key role between 2010 and 2014 in the accelerated development of computer vision systems through the “ImageNet Large Scale Visual Recognition Challenge” (ILSVRC) series.7 Incorporating fourteen million photographic images taken from online platforms like Flickr and organized into twenty-one thousand categories and sub-categories, ImageNet is characterized by the systematic association of each image with one or more English-language nouns derived from the WordNet lexical database, created in 1985 at Princeton.8 The indexing of such a large number of images was facilitated by the huge contribution of tens of thousands of click workers recruited from across the globe through the Amazon Mechanical Turk microwork platform. There is a certain irony in the fact that Amazon called these workers “Turks,” a reference to the Automaton Chess Player, which was the talk of the town in the Habsburg Empire at the end of the eighteenth century. Dressed in a Turkish costume with a turban on its head, the automaton was operated by a human chess master hidden under the table on which the game was played (fig. 4).

4. The Mechanical Turk, ca. 1770, automaton constructed by Baron Johann Wolfgang von Kempelen

Illustration from Joseph Friedrich Freiherr von Racknitz, Über den Schachspieler des Herrn von Kempelen, nebst einer Abbildung und Beschreibung seiner Sprachmachine (Leipzig: Johann Gottfried Müllerschen Buchhandlung, 1784), Berlin, Universitätsbibliothek der Humboldt-Universität.

If photography has played a key role in the development of machine vision systems in the vast realm of analytical AI (AI designed as a set of detection, recognition, and classification systems that can be used for surveillance, security checks, and predictive purposes), the same is true for the field of generative AI (AI designed as a group of systems capable of generating new data—texts, images, sounds, and voices—after being trained with huge quantities of other data).9

Introduced in 2014, Generative Adversarial Networks (GANs) were one of the first AI models designed to generate and modify images. By the second half of the 2010s, artists and image-editing apps were making widespread use of GANs. The training of these models was based from the start on datasets made up of photographic images. One example of this is the CIFAR-10 small object photograph dataset, mentioned by Ian Goodfellow and his co-authors in the 2014 article that first presented GANs.10 Their research set out to use GANs to generate photorealistic images that could then be added, as synthetic data, to the photographic images of the initial dataset. Accordingly, in the years that followed, GANs were often trained on large datasets of photographs in a bid to generate photorealistic images: BigGAN, for example, was trained on ImageNet’s photographs, while StyleGAN was trained on large collections of photographic portraits. GANs also learned to carry out different types of photographic operations: style transfer, upscaling, inpainting (removing and replacing objects in an image), and outpainting (extending an image beyond its frame), along with all the operations connected with the production of deep fakes in the form of still or moving images.
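The adversarial principle behind GANs can be sketched in a few lines. The toy below is our own illustration, not Goodfellow et al.’s code: a generator maps noise to one-dimensional “samples,” a discriminator learns to separate them from “real” data, and photographic training data is stood in for by a simple Gaussian distribution.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-D noise to a 1-D "sample". Discriminator: outputs the
# probability that a sample comes from the real data rather than from G.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for a dataset of photographs: a Gaussian centered at 3.0.
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    # 1. Train D to assign 1 to real samples and 0 to generated ones.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train G to fool D (generated samples labeled as real).
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should cluster near 3.0.
print(G(torch.randn(5, 8)).detach().squeeze())
```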

The launch, in 2022, of Latent Diffusion Models—generative AI models, such as Stable Diffusion, DALL‑E 2, and Midjourney, that are all capable of using text prompts or combinations of prompts and images to generate photorealistic still images—marks a new stage in the story of photographic images being included in the datasets used to train AI models. In the case of Stable Diffusion—the only one of these models to have been released as open source, since it comes out of a collaboration between private companies (Stability AI and Runway), a university (the LMU in Munich, through the CompVis group), and nonprofit organizations like LAION—the dataset used to train the model was LAION-5B.11 This massive dataset, first published as open source in 2022, contains five billion text-image pairs taken from the internet via Common Crawl, another nonprofit organization, which roughly once a month records an “archive” of large portions of the publicly accessible web and makes it available in the public domain for research purposes and as a means to develop AI models.12

Most of the LAION-5B images linked to texts—be it through the “alt attributes” associated with each image loaded onto an HTML-encoded site, the captions of stock images or online shopping platforms, or simply the comment texts on websites or social media platforms—are photographs. In addition, according to a study by Christo Buschek and Jer Thorp—members of the Knowing Machines research group headed by Kate Crawford13—a crucial role is played, again within LAION-5B, by a subset called LAION-Aesthetics, containing images that the dataset’s creators consider to have a “high visual quality.”14 However, this “high aesthetic quality” expresses a very specific—geographically, socially, and culturally situated—taste. LAION-Aesthetics was created from two datasets named Simulacra Aesthetic Captions (SAC) and Aesthetic Visual Analysis (AVA). As the creators of the SAC database themselves stressed, the images included in SAC and AVA, and subsequently in LAION-Aesthetics, express the tastes of the users of the platforms from which they were gathered, who are mainly located in the US and other Western countries.15

From Bledsoe, Chan, and Bisson’s “man-machine” system through to LAION-5B, photographic images have thus played a key role in the development of analytical and generative AI systems. Photography, in other words, is involved at a profound level in structuring the way these systems “see,” describe, generate, and modify images.

AI in photography

The presence of AI algorithms and models in photography—the other aspect of the reciprocity mentioned above—takes at least three forms. Firstly, there are the AI algorithms integrated to an ever-greater degree in new camera models, including those in our smartphones, where they perform operations at the very moment a picture is taken. Then there are the algorithms used in the recommendation systems of social media platforms, which govern the transmission, circulation, and reception—and, in many cases, the censorship—of photographic images. Lastly, there is the dissemination across contemporary visual culture of an enormous quantity of photorealistic images which, although they are not produced by any form of optical capture, are part of a process of reconfiguration of what we might regard as an expanded field of “photography.”

For some years now, there has been an accelerated tendency for the photographic moment of the “shot” to be infiltrated by all kinds of AI algorithms that detect objects, perform facial recognition, and identify scene types (portrait, landscape, night scene), automatically adjusting the settings for focus, exposure, brightness, color balance, motion tracking, and image stabilization. Techniques such as HDR (High Dynamic Range) processing make it possible to combine several shots taken in very rapid succession to record a greater range of detail and balance light and shadow.16 Optical effects like bokeh separate the subject from its background, which is rendered with more blur. Night mode intervenes in low-light photography, introducing more clarity into night scenes. Finally, all kinds of filters can be used after the shot has been taken to modify the images recorded.
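As an illustration of the kind of multi-frame merging at work here, the following sketch implements a much-simplified exposure fusion (a toy under our own assumptions, not any manufacturer’s pipeline): each bracketed shot is weighted pixel by pixel according to how well exposed it is, then the weighted frames are blended.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed shots, weighting each pixel by its "well-exposedness"
    (proximity to mid-gray), a much-simplified form of exposure fusion."""
    frames = [f.astype(float) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-8
    return sum(w * f for w, f in zip(weights, frames)) / total

# Three synthetic "shots" of the same gradient scene: under-, mid-, and over-exposed.
scene = np.linspace(0.0, 1.0, 256).reshape(16, 16)
shots = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
merged = fuse_exposures(shots)
print(f"merged range: {merged.min():.2f} to {merged.max():.2f}")
```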

The presence of algorithms in our cameras and smartphones makes it necessary for some of the key concepts in the theory of photography to be reevaluated:17 the dominant ideas—informing theories that flourished during the 1980s and 1990s—of the photographic image as “imprint” and “index,” and of the “decisive moment,” are no longer valid in today’s context. In her article, Barbara Grespi (pp. 95–105) draws on photography’s different philosophical foundations and makes comparisons with recent smartphone technologies that make use of deep learning algorithms. The image produced is indeed no longer a snapshot but a composite that combines various images or image fragments, taken simultaneously or successively, generating a result that is more consistent with the statistics determining what constitutes a “successful” photograph across social media platforms. The photographic image comes into being in much the same way as a photogram in cinema, but unlike the film fragment, it condenses an entire temporal layer rather than revealing an instant. Also particularly instructive is the inclusion of a tool borrowed from photogrammetry, whose function is to produce 3D models on the basis of photographic images taken in series. The temporal layer of the series is then turned into a spatial layer: using a variety of perspectives, the algorithm attempts to triangulate the space in the picture and offers a view in three dimensions. In this way, tools we use on a regular basis corroborate a discourse that, over the last fifteen years, has largely revised the traditional ontology of the photographic image.

AI, in other words, is expanding the field of computational photography, with the result that, since the advent of digital photography, all the phases of photographic production have been in a continual process of transformation, from the moment of taking the shot through to the whole sequence of processing stages that follow.18 In his article on unsharp masking in Photoshop, Till Heilmann proposes an archaeology dating back to the pre-digital history of filters (pp. 44–57). He thus highlights the continuity between the edge enhancement tools used in printing and the filters used in Photoshop from the first version of the software onward. Even before the age of computing, sharpening filters operated on the principle of superimposing a blurred, low-contrast positive on the original negative. The mechanism is no different in the well-known image-processing program, which superimposes variable layers of contrast and sharpness.
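The principle Heilmann describes translates directly into a few lines of code. The sketch below (a minimal illustration, not Photoshop’s actual implementation) adds back the difference between an image and its Gaussian blur, the digital counterpart of sandwiching a blurred positive with the negative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.0):
    """Sharpen by adding back the difference between the image and its blur."""
    blurred = gaussian_filter(image.astype(float), sigma=radius)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Synthetic test image: a soft vertical edge that the filter should steepen.
edge = np.tile(np.linspace(0.0, 1.0, 64) > 0.5, (64, 1)).astype(float)
soft = gaussian_filter(edge, sigma=3.0)
sharp = unsharp_mask(soft, radius=2.0, amount=1.5)

# The maximum horizontal gradient increases, i.e. the edge becomes steeper.
print(np.abs(np.gradient(sharp, axis=1)).max() > np.abs(np.gradient(soft, axis=1)).max())
```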

Fully integrated into social media platforms, AI algorithms and models also determine a photographic image’s social life, by encouraging or preventing its circulation. This involves recommendation systems that manage visual flows on the basis of user profiles, behaviors, preferences, and reactions, both past and predicted, and computer vision systems that apply extremely opaque criteria to detect, remove, or blur images whose content is not compatible with the rules adopted by the platforms. Automating censorship in this way is not without its problems: historical photographs are a case in point, as Katja Müller-Helle explains in her article (pp. 118–29), based on the example of Nick Ut’s Napalm Girl. Other computer vision systems, meanwhile, recognize the objects, places, bodies, and faces appearing in a picture and extract all kinds of information and data from them, or make it possible for platform users to search for similar images.

AI-generated photorealistic images have also helped bring about profound changes in the field of what we call “photography.” It is clear from the way generative AI models have evolved, as briefly outlined above, that photorealism is a quality that has been explicitly targeted, almost like a teleological horizon to be reached as quickly as possible. The versions of the models that have succeeded one another since 2022 have each been characterized by an increasing degree of photorealism, to the point where glitches like the famous six-fingered hands—markers, it was once thought, that made it possible to distinguish shots taken with an optical camera from photorealistic images generated by AI—have been gradually eliminated.

Hence the need for detailed analysis of how generative AI algorithms and models function: this will allow us to better understand their architecture, the composition of the datasets they have been trained with, and the way they use the “latent space” in which images and texts have been encoded and transformed into vectors so that they can be processed by a whole series of mathematical and statistical operations.

Photographs and datasets

There is a great deal we do not know about the origins of the images compiled in these datasets, the manual or automated procedures used to index and sequence them, the words used to describe them, and the numerous biases that creep into the AI algorithms and models. In his article for this issue, Thierry Sugitani (pp. 106–17) presents an overview of the evolution of datasets since the mid-2000s, based on the discussions that accompanied the release of these tools and on a vast body of technical literature. He describes a shift from projects supported by prestigious universities to datasets entirely produced by private companies like Yahoo or Microsoft. He also observes an evolution in methodology: while the first datasets, from the 2000s through to the mid-2010s, were the fruit of active composition, with procedures for purging unnecessary information in order to furnish the algorithms with reliable material, the new datasets accumulate such a volume of images that it is becoming increasingly difficult to monitor their indexing.

When the ImageNet project was starting out, the aim was still to have this set of images dovetail with a descriptive system derived from the WordNet lexical database—initially 5,247 categories organized in a tree structure proceeding, in each branch, from the most general to the most particular—as if to cover the totality of objects that could be depicted in photographs. ImageNet has been the subject of a number of installations by artist and theorist Trevor Paglen, including From “Apple” to “Anomaly”:19 these show vast areas of the ImageNet structure, in which images and words have been systematically paired as a result of the invisibilized labor of tens of thousands of click workers (fig. 5).20
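This general-to-particular structure can be inspected directly. A small sketch, assuming Python with the NLTK library installed (the chosen noun and output path are for illustration):

```python
import nltk
nltk.download("wordnet", quiet=True)  # fetch the WordNet corpus if missing
from nltk.corpus import wordnet as wn

# Walk from a particular noun up to the most general category: the kind of
# general-to-particular path along which ImageNet organizes its images.
synset = wn.synsets("photograph")[0]
for path in synset.hypernym_paths():
    print(" -> ".join(s.name().split(".")[0] for s in path))
```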

5. Trevor Paglen, installation view From “Apple” to “Anomaly” (Pictures and Labels): Selections from the ImageNet Dataset for Object Recognition

Curve Gallery, Barbican Centre, London, 2019–20. Assemblage of 30,000 prints of photographs from the ImageNet database, London, Barbican Centre.

© Trevor Paglen

By contrast, datasets like LAION-5B that have been published since 2022 have no qualms about bringing together images harvested from all kinds of websites, social networks, and online image banks. Their sheer mass does not entirely compensate for the effects of bias, nor for the corrections introduced by automated indexing systems to adjust the ranking of this or that social or racial category, or to remove images considered inappropriate (NSFW, Not Safe for Work). Instead of correcting the existing biases in datasets, generative AI adds new ones into the mix, as Hito Steyerl points out in her article “Mean Images.”21

Another issue that is now set to become pervasive is the question of how to distinguish, in future datasets, between images produced without the help of AI models and those created, at least in part, with their assistance. Given the prevalence of this second type of image on the internet, the prospect of a near future in which AI models are trained with datasets containing a large number of AI-generated images is becoming more and more concrete, bringing with it all the questions that this kind of feedback loop raises. According to the latest studies, a scenario like this, known as “AI autophagy” or “data cannibalism,” would produce a “model collapse.”22
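A toy simulation conveys the intuition behind this feedback loop (our own illustration, far simpler than the models studied by Shumailov et al.): if a “model”, here just a Gaussian distribution, is repeatedly fitted to data and then used to generate the next generation’s data, the spread of the data tends to collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=100)   # generation 0: "real" data

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std()      # "train": fit a Gaussian to the data
    data = rng.normal(mu, sigma, size=100)   # replace the data with synthetic samples
    if gen % 50 == 0:
        print(f"generation {gen:3d}: std = {data.std():.3f}")
# The standard deviation tends to shrink across generations as estimation
# errors compound, a crude analogue of "model collapse."
```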

Photorealism

The images generated by algorithms and AI remediate characteristics and aesthetic programs that are well established in photography. They are thus accompanied by a rhetoric that is part of the vocabulary of the medium, especially evident in the constant use of the term “photorealism.” Be it in computer-generated imagery (CGI), 3D renderings, GANs, or the latest text-to-image models, photorealism has been the driver and point of reference for numerous optimization processes. However, there is considerable variety in the ways it has been interpreted, and in the procedures and technologies that allow this aesthetic quality to be achieved.

CGI was developed from the late 1970s for use in architecture, engineering, and industrial product design, and in connection with cinema and video gaming. Research in this field was mainly guided by the desire to achieve a level of realism that would resemble, and end up rivaling, the way we perceive images produced by optical media (photography and film). In her study of experiments conducted at Cornell University in the 1980s, Maria Eriksson (pp. 32–43) traces the development of techniques for evaluating photorealism, which would give rise to the parameters now used to measure this quality in AI-generated images. These initial experiments not only involved human-computer interaction through algorithm design but also required the manual arrangement and adjustment of scenes and objects. The measurement of light, the staging of an object, the representation of color, surface, and texture, and the continuous comparison between original and copy—practices that are all eminently photographic, even artistic—played a major role in making the simulation valid in perceptual terms. Today, enormous visual encyclopedias of computer-generated images are produced by multiple commercial suppliers (fig. 6).

6. Quixel Megascans, “Bigleaf Hydrangea” category, August 2022

© Megascans Quixel

Jens Schröter (pp. 58–67) extends this reflection on the recent history of photorealistic CGI by asking a simple yet crucial question: What do we mean when we say that an image “looks like a photo”? What are its specific attributes, over and above accuracy of representation, depth of field, and richness of detail? Based on his analysis of lens flare, a special effect used in films and video games, Schröter maintains that the purported deficiencies of the photographic process become an advantage when photorealism is the aim: lens flare simulates the physical camera’s numerous limitations and emphasizes its optical characteristics.

CGI and photorealistic 3D renderings thus simulate the presence of a camera, as well as more subtle aspects like the inclusion of light reflecting on surfaces, work on sharpness and blur, and certain perspective effects—all elements that are part of a logic described by Jacob Birken as “realism immanent in the media.”23 These images do not seek to imitate the physical world; rather, they set out to mimic the media that dominated the visual culture of the last century: photography and cinema. They thus integrate one of photography’s most powerful effects, its transparency, which makes us forget that we are looking at an image—a picture, for example, of a simulated piece of Chinese porcelain—and lets us pretend to look through it (fig. 7).

7. Atsushi Nakabayashi, view of a 3D model of a Chinese teacup, Sketchfab, 2022

© Atsushi Nakabayashi

However, the remediation of visual culture also accentuates the prejudices and invisibilities implicit in past visual representations, as Roland Meyer expounds in his analysis of image-generation models (pp. 20–31). Through his examination of the infrastructural conditions governing models like DALL‑E and Midjourney, Meyer exposes the ramifications of verbal concepts and probabilistic logic in the production of generic content. Here, realism should rather be understood as “platform realism” or “ImageNet realism,” as Eriksson also points out in her essay, in a logic that could be extended to LAION-5B. But if the notion of realism is to be conceived of—as Nelson Goodman has suggested24—as profoundly relative, historically situated, and ideologically determined, the question that arises is what kinds of relations and ideologies are apparent in generative models. As Meyer argues—and this is echoed by Kate Crawford and Peter Szendy25—it is essential to disentangle the capitalist and extractivist foundations of these technologies in order to formulate a critique of AI.

These issues take on even more importance in the context of a “crisis of representation,” of the kind identified by Katherine Hayles apropos of the relationship between experience and language.26 Photographic media have played a key role in establishing the concepts of realism and objectivity since the mid-nineteenth century. It is precisely because of their impact on notions of truth and faith in science and society in general that CGI and deep fakes have become powerful tools of disinformation and manipulation, ultimately challenging these certainties. At the same time, current developments in digital visual culture are providing us with a unique opportunity to reconsider the validity of these concepts in visual depictions, both past and present.

Prompts and latent spaces

The launch, in 2022, of generative AI models like ChatGPT (text-to-text) and Stable Diffusion, DALL‑E 2, and Midjourney (text-to-image) also marks the sudden appearance of a new cultural object, one that had hitherto been confined to the realms of computer science: the prompt. Today, this term refers to words, sentences, and texts, written in “natural” language (that is to say, ordinary, non-machine language) and used to activate generative AI models, drawing on their “latent spaces” as a source of texts or images.

But what is “latent space” exactly? It is an abstract mathematical entity whose significance and cultural, epistemological, and political implications can hardly be overstated today. Latent space is a key element in any machine learning process, and thus essential to any AI model. It is the abstract space in which complex digital objects (such as images, texts, and sounds) are encoded and represented in a simplified, compact form, so that they can be processed using various mathematical operations. A latent space consists of vectors (series of numbers, arranged in a precise order) that represent data points in a multidimensional space with hundreds or even thousands of dimensions. Each vector, with n dimensions, represents a specific data point, with n coordinates. These coordinates capture some of the characteristics of the digital object encoded and represented in the latent space, determining its position relative to other digital objects: for example, the position of a word in relation to other words in a given language, or the relationship of an image to other images or to texts.
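A minimal sketch can make this concrete (the three items and their four-dimensional coordinates below are invented for illustration; real latent spaces have hundreds or thousands of learned dimensions): each item is a vector, and a measure such as cosine similarity expresses how close two items lie.

```python
import numpy as np

# Invented 4-D coordinates for three items; purely illustrative numbers.
latent = {
    "photograph of a cat":   np.array([0.9, 0.1, 0.3, 0.0]),
    "photograph of a dog":   np.array([0.8, 0.2, 0.4, 0.1]),
    "oil painting of a cat": np.array([0.3, 0.9, 0.2, 0.0]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, lower for distant ones."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = latent["photograph of a cat"]
for name, vec in latent.items():
    print(f"{name}: {cosine(query, vec):.3f}")
# The two photographs land closer to each other than photograph and painting do.
```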

During the training of recent generative AI models, the encoding in latent space of vast numbers of interconnected images and texts (text-image pairs) is a key moment in the process of generating images. Take Stable Diffusion, for example: a latent space was used for the encoding, in the form of vectors, of the five billion connected texts and images in the LAION-5B training dataset. It is within this latent space that images and texts are processed, on the basis of the information contained in the prompts. In the end, it is out of this latent space that generated images arise.
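In practice, this process can be invoked in a few lines. The sketch below uses the open-source diffusers library to run a Stable Diffusion checkpoint; the model identifier, prompt, and parameters are illustrative assumptions, the weights must be downloaded, and a GPU is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an openly released checkpoint (identifier illustrative; downloaded on first use).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Each term of the prompt activates a different region of the latent space.
prompt = ("a portrait photograph, 85mm lens, shallow depth of field, "
          "natural window light, black and white film grain")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_portrait.png")
```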

Text prompts (possibly accompanied by one or more images) are the tool that makes it possible to probe and activate the latent spaces of generative AI models. Because of their multidimensionality and their vectorial and mathematical nature, these latent spaces remain completely inaccessible, invisible, unimaginable. Each word contained in a prompt sent to a text-to-image model—whether it is a word describing one of the entities you would like to appear in an image, or one giving information about the image’s shape, color, texture, focus, style, or historical period, as well as all kinds of material and technical specifications—activates a different zone in the latent space and thus helps generate one type of image rather than another.

With prompts, written language becomes a medium for producing images in a way that is totally unprecedented. The images generated by a text-to-image model are neither the visual translation of an iconographic program expressed in a text, nor the becoming-image of language of the kind produced in calligrams or visual poetry, nor yet a visual allegory of a concept that has been formulated verbally. On this point, the interview with Google researcher Jason Baldridge conducted for this issue by Nicolas Malevé and Katrina Sluis (pp. 130–41) is particularly illuminating: it reveals the intimate entanglement of computational linguistics and research on automatic image generation.

By contributing to the generation of these images, prompts function as a new type of “speech act,” showing once more how language can be operational and become a form of action. Prompts can also be regarded as “search commands” (as per Meyer’s interpretation27) or as a form of “operative ekphrasis” (the term suggested by Hannes Bajohr28): an ekphrasis that does not describe existing images but instead generates images by pre-describing them. They are also a form of “remediation,” as theorized by David Bolter and Richard Grusin29—a process of turning the entire history of visual media, with their various material supports, techniques, operations, protagonists, styles, and traditions, and their different historical phases and theoretical discourses, into a broad spectrum of linguistic terms that can be used to sound out latent space.

All of this has a profound impact on the expanded field of photography. The huge list of terms that textually describe its complete set of techniques, supports, devices, operations, protagonists, subjects, and usages, along with all their historical developments, becomes a lexicon that can be mobilized in prompts designed to generate images with a “photographic” quality. With prompts, then, photography, like any other visual medium—painting, drawing, sculpture, cinema, etc.—is reduced to the words that describe it. In the years to come, if text-to-image and text-to-video models really do become a dominant form of image production, the more profound our knowledge of terms related to the history, theory, and practices of visual media, the greater our ability to direct these models in nonstandard, non-repetitive ways via latent space. The same goes for other multimodal generative AI systems, all based on the “text-to . . .” principle: text-to-sound and, soon, text-to-image/sound.

Economics and ideology

In addition to the race, class, and gender biases that researchers and critics have already identified30—and which tech companies have sometimes tried (amidst some controversy31) to correct—and the crucial problem of the energy costs and the ecological price to be paid (which the industry is not necessarily making any effort to address), generative AI also raises labor-related questions. Whether we are talking about the invisible labor of click workers tasked with content moderation or put on the image-tagging production line, the work required of each and every one of us when we fill in CAPTCHAs, or the job of curating datasets and moderating generated content, the platform economy generates new forms of work through and on images.32 Moreover, the opacity surrounding the composition of certain datasets suggests that their operations involve the illegal harvesting of human labor: this is the direction taken in the lawsuits filed by artists in particular, who accuse tech companies of having used their work without permission. Finally—as a further stage in “mechanization taking command”33—AI puts a certain number of jobs in image production (in illustration, commercial photography, retouching, graphic design, advertising) at risk, seemingly jeopardizing the medium-term future of these professions.

This is a focus of interest in Chris Balaschak’s article (pp. 80–91), which uses a case study dating back to the 1970s—Sonia Sheridan’s “Generative Systems”—to question the resurgence of the motif of automation as applied to tasks that have been considered menial at different moments in the history of visual media. We should also ask ourselves, no doubt, what kind of work is involved in the algorithmic generation of images, and what consequences these technologies will have for image workers. Is this another step in the invisibilization of work, a process akin to what Marx describes as commodity fetishism?

Consistent with the ideology that drives Silicon Valley today, generative AIs were initially presented not only as a great leap forward in technological terms but also as a fun creative tool to play with. Witness the messaging put out by companies like OpenAI: in addition to its successes in respect of photorealism, DALL‑E has also been promoted as an artistic game that makes it possible to produce conveyor-belt surrealism, at little (apparent) cost, and without requiring the least bit of technical skill. In presenting generative AI as a kind of creative pastime—rather than as an increase in industrial productivity—we tend to blank out the ecological, political, and economic problems posed by the introduction of these techniques on a massive scale.

If we then turn to the uses of images and examine the actual place this new imagery occupies in our visual culture, we may note that to start with it was a great meme machine. The glitches still very much present in the images generated by the first text-to-image models in 2022–23, as well as in some of the less moderated versions of Stable Diffusion, have produced their share of monsters.34 In this intermediate phase of developing generative AI, it was precisely the imperfections and breaches of photorealism that gave these images their spice, making mileage not so much out of “high fidelity” as out of exaggerated distortion. We might think, for example, of the memes based on the free version of DALL‑E, initially called “DALL‑E Mini” and then “Craiyon,” which responded to the prompt with a mosaic of nine low-definition, poor-looking images, quite unlike the very high-quality images that would become the norm with the Midjourney updates of early 2023.35 This shoddy aesthetic, both schoolboyish and caricatural, contrasts with the extravagance of many AI-generated images when models are used on their “default” settings: by default, the rendering fluctuates between photorealism and graphic illustration; by default, the image features the same treatment of light, a kind of soft incandescence that Meyer has called the “fluffy glamour glow,” as well as a combination of warm and cool tones, gentle transitions, and a preference for curving lines over clearly defined angles. The desire to constantly improve quality most likely led, in the end, to kitsch becoming established as the norm. This is an apt moment to remember the degree to which recording media like photography have historically played the role of an antidote to kitsch: the laconic qualities of a daguerreotype plate set against the overblown emotion of nineteenth-century Salon painting.

It is probably still too early to draw any conclusions with regard to the massive use of “generated” images in commercial, institutional, or journalistic communication. In all likelihood, we can expect to see illustrative uses of AI becoming widespread36—taking over from stock images and generic illustrations in the “corporate Memphis” style.37

On the other hand, it is already clear that AI-generated images are playing a part, more or less directly, in a variety of current political and military conflicts, as is evident from the numerous controversies that permeated the US presidential election campaign in 2024. In May of that year, The Jerusalem Post published images that were obviously generated by AI and had been part of a PowerPoint presentation coming directly from the office of the Israeli prime minister, Benjamin Netanyahu (fig. 8). These images offer a futuristic vision of the Gaza Strip as an El Dorado for tech investors, studded with skyscrapers and electric cars and powered by large solar parks.38 “Gaza 2035” is the name given to this colonial project, the product of a policy of tabula rasa, embodied in these visualizations, these “projective” images, which borrow their aesthetics from real estate promotion. Donald Trump’s statements, in February 2025, about a US takeover of Gaza in order to turn it into a Riviera of the Middle East seem to refer directly to this kind of image. The photorealism of algorithmic images thus plays a role not only in deliberate practices of manipulation, as in the case of deep fakes, but also as a relay point for other communication strategies, taking the form of visual projections, in a context of technological acceleration that can also turn out to be profoundly unfair and harmful.

8. Benyamin Netanyahu’s vision for Gaza 2035

AI-generated image, as displayed on the Israeli Prime Minister’s website and picked up by The Jerusalem Post on May 3, 2024.

Notes

1 Georg Simmel first introduced the concept of Wechselwirkung in his Über sociale Differenzierung. Sociologische und psychologische Untersuchungen (Leipzig: Duncker & Humblot, 1890).

2 Warren McCulloch and Walter Pitts, “A Logical Calculus of Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5, no. 4 (1943): 115–33; Frank Rosenblatt, The Perceptron: A Perceiving and Recognizing Automaton, Report 85-460-1 (Cornell Aeronautical Laboratory, 1957). On the history of computer vision, see James E. Dobson, The Birth of Computer Vision (Minneapolis: University of Minnesota Press, 2023).

3 Woody W. Bledsoe, The Model Method in Facial Recognition, Technical Report PRI 15 (Panoramic Research Inc., Palo Alto, California, 1964); Woody W. Bledsoe and Helen Chan, A Man-Machine Facial Recognition System: Some Preliminary Results, Technical Report PRI 19A (Panoramic Research Inc., Palo Alto, California, 1965); Woody W. Bledsoe, Man-Machine Facial Recognition: Report on a Large-Scale Experiment, Technical Report PRI 22 (Panoramic Research Inc., Palo Alto, California, 1966).

4 On the FERET program, see Patrick Rauss et al., “FERET (Face Recognition Technology) Program,” Proceedings of the 25th AIPR Workshop: Emerging Applications of Computer Vision 2962 (1997): 253–63.

5 On the JAFFE dataset, see Michael J. Lyons, Miyuki Kamachi, and Jiro Gyoba, “Coding Facial Expressions with Gabor Wavelets” (1998), preprint, arXiv, September 13, 2020, https://doi.org/10.48550/arXiv.2009.05938.

6 Estelle Blaschke, Banking on Images: The Bettmann Archive and Corbis (Leipzig: Spector Books, 2016).

7 On ImageNet, see Jia Deng et al., “ImageNet: A Large-Scale Hierarchical Image Database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition Workshops (Miami, 2009), 248–55.

8 On WordNet, see https://wordnet.princeton.edu.

9 There started to be more and more of a distinction made between the two fields of analytical AI and generative AI after generative AI models began to develop in the second half of the 2010s, with the launch of algorithms like Generative Adversarial Networks in 2014, Transformers in 2017, and Large Language Models like GPT-1 in 2018, followed, in 2019, by GPT-2. In 2022, this boom culminated in the launch of ChatGPT and text-to-image diffusion models like Stable Diffusion, DALL‑E 2, and Midjourney. However, the two fields are closely connected and often share the same technologies.

10 Ian Goodfellow et al., “Generative Adversarial Nets,” preprint, arXiv, June 10, 2014, https://doi.org/10.48550/arXiv.1406.2661.

11 On LAION-5B, see https://laion.ai/blog/laion-5b/.

12 On Common Crawl, see https://commoncrawl.org.

13 See Christo Buschek and Jer Thorp, “Models All the Way Down,” Knowing Machines, accessed January 21, 2025, https://knowingmachines.org/models-all-the-way.

14 On LAION-Aesthetics, see laion.ai/blog/laion-aesthetics.

15 See ibid.

16 As Julian Stallabrass points out, “The user’s choice of when to press the shutter marks only a mid-point in a burst of images, taken before and after, that are melded to make the resulting ‘photograph’, using HDR effects to increase tonal range and resolution, and to decrease ‘noise’, or lower entropy.” Julian Stallabrass, “Memories of the Present: Photography and Artificial Intelligence,” New Left Review 148 (2024).

17 Estelle Blaschke, “Diskrete Operationen: Formen präemptiver Bildzensur in der KI-gestützten Fotografie,” Bildwelten des Wissens 16 (2020): 32–41.

18 For one of the most recent studies of this question, see Joanna Zylinska, The Perception Machine: Our Photographic Future Between the Eye and AI (Cambridge, MA: MIT, 2023) and Fred Ritchin, The Synthetic Eye: Photography Transformed in the Age of AI (London: Thames & Hudson, 2025).

19 Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Images in Machine Learning Training Sets,” September 19, 2019, https://excavating.ai.

20 Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven, CT: Yale University, 2021).

21 Hito Steyerl, “Mean Images,” New Left Review 140–41 (2023): 82–97.

22 On the “model collapse,” see Ilia Shumailov et al., “AI Models Collapse When Trained on Recursively Generated Data,” Nature 631 (2024): 755–59. See also Sina Alemohammad et al., “Self-Consuming Generative Models Go MAD,” preprint, arXiv, July 4, 2023, https://doi.org/10.48550/arXiv.2307.01850.

23 See, in particular, Jacob Birken, Vom Pixelrealismus (Berlin: Schlaufen, 2023).

24 Nelson Goodman, Languages of Art: An Approach to a Theory of Symbols (Indianapolis, IN: Hackett, 1976).

25 See Crawford, Atlas of AI, and Peter Szendy, with Emmanuel Alloa and Marta Ponsa, eds., The Supermarket of Images (Paris: Gallimard, 2020).

26 Katherine Hayles, “Inside the Mind of an AI: Materiality and the Crisis of Representation,” New Literary History 54, no. 1 (2022): 635–66.

27 Roland Meyer, “The New Value of the Archive: AI Image Generation and the Visual Economy of ‘Style,’” IMAGE 37 (2023): 100–111.

28 Hannes Bajohr, “Operative Ekphrasis: The Collapse of the Text/Image Distinction in Multimodal AI,” Word & Image 40, no. 2 (2024): 77–90.

29 Jay David Bolter and Richard Arthur Grusin, Remediation: Understanding New Media (Cambridge, MA: MIT, 2000).

30 Crawford and Paglen, “Excavating AI,” and Fabian Offert and Thao Phan, “A Sign That Spells: DALL‑E 2, Invisual Images and The Racial Politics of Feature Space,” preprint, arXiv, October 26, 2022, https://doi.org/10.48550/arXiv.2211.06323.

31 Google put out patches for its Gemini model in a bid to be more representative of different physical types. The company subsequently back-pedaled in response to fierce criticism of the illogical results that were sometimes generated.

32 Antonio A. Casilli, En attendant les robots : Enquête sur le travail du clic (Paris: Seuil, 2019); see also Jeff Guess, “Conversations,” Transbordeur 3 (2019): 36–47.

33 See Sigfried Giedion, Mechanization Takes Command (New York: Oxford University Press, 1948).

34 Max Bonhomme et al., “Une généalogie des images composites,” Transbordeur 7 (2023): 6–17.

35 Roland Meyer, “Es schimmert, es glüht, es funkelt: Zur Ästhetik der KI-Bilder,” 54books, March 20, 2023, https://54books.de/es-schimmert-es-glueht-es-funkelt-zur-aesthetik-der-ki-bilder/.

36 André Gunthert, “Les faux débats de l’IA,” L’image sociale, November 19, 2023, https://imagesociale.fr/11366.

37 Adobe, for example, has already included AI-generated images in its inventory of stock photos.

38 Yuval Barnea, “From Crisis to Prosperity: Netanyahu’s Vision for Gaza 2035 Revealed Online,” The Jerusalem Post, May 3, 2024, https://www.jpost.com/israel-hamas-war/article-799756.


References

Electronic reference

Estelle Blaschke, Max Bonhomme, Christian Joschke and Antonio Somaini, “Introduction. Photographs and Algorithms,” Transbordeur [Online], 9 | 2025, online since 26 February 2025, connection on 13 January 2026. URL: http://journals.openedition.org/transbordeur/2738; DOI: https://doi.org/10.4000/13dwo

About the authors

Estelle Blaschke

Estelle Blaschke is a professor of media studies at the University of Basel and lectures on the history and theory of photography at the University of Art and Design (ÉCAL) in Lausanne. Her research focuses on photography, the visual economy, and digital technologies and cultures. Together with Armin Linke, she was joint director of the research and exhibition project Capital Image / Image Capital, which ran from 2019 to 2024 (Centre Pompidou, Folkwang Museum, MAST Bologna, Deutsche Börse Photography Foundation).

Max Bonhomme

Max Bonhomme is an associate professor (MCF) of design and visual cultures at the University of Strasbourg. His research centers on the history and theory of graphic design, photomontage, popular imagery, and digital imagery. Together with Aline Théret, he is curating the 2025 exhibition Couper, coller, imprimer : Une histoire graphique du photomontage politique (La Contemporaine, Nanterre).

Christian Joschke

Christian Joschke is a professor at the Beaux-Arts de Paris and coeditor-in-chief of Transbordeur : Photographie histoire société. Between 2007 and 2020, he was an associate professor (MCF) at Université Lumière Lyon 2 and Université Paris Nanterre. He was joint organizer of the exhibition Photographie, arme de classe : Photographie sociale et documentaire en France 1928–1936 at the Centre Pompidou (exh. cat., Textuel, 2018) and has published Les Yeux de la nation : Photographie amateur et société dans l’Allemagne de Guillaume II (Les presses du réel, 2013) and La Révolution suspendue : Photographie et presse communiste dans l’Allemagne de Weimar (Macula, 2025).

Antonio Somaini

Antonio Somaini is a professor in film, media and visual culture theory at Université Sorbonne Nouvelle and a senior member of the Institut Universitaire de France (IUF). His current research focuses on the impact of AI on images, visual culture, and contemporary artistic practices. He is the overall curator of the exhibition Le Monde selon l’IA (Jeu de Paume, April–September 2025).

Copyright

CC-BY-NC-ND-4.0

The text only may be used under licence CC BY-NC-ND 4.0. All other elements (illustrations, imported files) may be subject to specific use terms.
