On the Measurement of Realism in Synthetic Images. From Cornell Box Experiments to ImageNet and Inception Photorealism
Abstract
Just as in the (older) field of computer graphics, a key goal in the development of generative image models has been the achievement of photorealism—i.e., the making of images that are perceived as indistinguishable from those produced with a camera lens. This paper locates current techniques for assessing photorealism in synthetic images within a longer history of efforts to evaluate realism in computer-generated content. More precisely, I compare two current standards for assessing photorealism in AI-generated images (the so-called FID metric and Inception score) with the assessment methods used in the Cornell box experiments—a compilation of evaluation techniques that set the standard for how realism was evaluated in computer-generated images from the 1980s onwards. Exploring how ideas about realism and photorealism become translated into algorithmic models, and drawing on the works of Lev Manovich and Hannes Bajohr, I observe a shift from simulative and sequential enactments of photorealism to predictive and connectionist enactments of photorealism, starting in the period between 2014 and 2016. I also reveal how a specific AI training dataset (ImageNet) and neural network (Inception model) have played a central role in shaping the aesthetics of photorealism in generative image making since 2016.
Keywords
photorealism, evaluation techniques, computer graphics, Cornell box, generative image models
In 1984, a research team at Cornell University released a paper describing a new method for validating how realistically the propagation of light through space could be modeled in computer-generated images.1 The method involved testing how well the predictions of a specific illumination algorithm called the radiosity model corresponded with a real-life environment whose lighting situation, geometric properties, and material conditions had been carefully measured and controlled. The basic idea behind the experiment was to measure the illumination conditions of a given physical location, take a photograph of the environment, create a computer-rendered replica of the space, and then measure the accuracy with which the synthetic replica could be made to mimic the photograph and physical scene (fig. 1). To demonstrate their method, the researchers constructed a test cube made of fiberboard panels, roughly 70 × 70 × 70 cm in size. The exchangeable interior walls of the cube—colored red, blue, and white—were all painted with flat latex paints to minimize reflections, and one side of the cube was left open “for viewing and photographic purposes.”2 Outside the box, the researchers staged illuminating lights, a white diffuse surface, and a camera, mounted on a white paper enclosure (fig. 2).
1. Photograph of test cube (left) and computer simulation (right), second iteration of the Cornell box, after Gary Meyer et al., “An Experimental Evaluation of Computer Graphics Imagery,” ACM Transactions on Graphics 5, no. 1 (1986): 48

© Association for Computing Machinery
2. Initial configuration of the Cornell box, illustration taken from Cindy Goral et al., “Modeling the Interaction of Light Between Diffuse Surfaces,” Computer Graphics 18, no. 3 (1984): 221

© Association for Computing Machinery
When the team replicated the experiment a year later, they explained that radiometric measurements were made at twenty-five different locations in the box, using a method “developed in the field of heat transfer.”3 The scene was controlled by keeping the light bulbs running at a particular voltage strength, filtering out infrared energy, and making sure no additional light entered the room. After photographing the cube, the recorded physical measurements were converted into a computable format and compared to those found in the computer simulation. The results were plotted in three-dimensional diagrams, overlaying the test results from the physical cube and computer rendering, where a low discrepancy between the two was taken as an indicator of realism. Side-by-side visual experiments were also performed with humans placed in front of two view cameras: one displaying a computer-simulated version of the test cube, and the other showing a direct camera shot of the physical cube, with a partitioning curtain preventing the subjects from knowing which one was which (figs. 3 and 4). The results revealed that 45 percent of the participants “did no better than they would have by guessing.” Nevertheless, the researchers concluded that the experiment “lends strong support to the perceptual validity of the simulation and display process,” thus suggesting that the rendering algorithm had, indeed, succeeded in creating not just a realistic but a photorealistic image.4
3. Visual comparison device, illustration taken from Gary Meyer et al., “An Experimental Evaluation of Computer Graphics Imagery,” ACM Transactions on Graphics 5, no. 1 (1986): 45

© Association for Computing Machinery
4. A participant comparing real and simulated images in the Cornell box experiment, illustration taken from Gary Meyer et al., “An Experimental Evaluation of Computer Graphics Imagery,” ACM Transactions on Graphics 5, no. 1 (1986): 46

© Association for Computing Machinery
The Cornell box experiments—as they would later become known—illustrate how the achievement of photorealism has been crucial to the field of computer graphics since its origins in the 1950s. The desire to achieve photorealism is also central to the development of contemporary generative image models like DALL-E, Midjourney, and Stable Diffusion, which have all been designed to create images that are perceived as indistinguishable from those produced with a camera lens. In what follows, I explore how evaluation techniques for synthetic images have helped define and maintain ideas concerning photorealism from the 1980s onwards. If, following Nelson Goodman, realism must be conceived of as something deeply relative and historically situated (for instance, a hieroglyph might have looked as realistic to a Fifth Dynasty Egyptian as a Jan van Eyck portrait did to a Renaissance Dutchman),5 then it becomes necessary to explore how different realisms and photorealisms take shape.6
I use the Cornell box experiments—which outline a standard model for how realism was assessed in computer-generated images during the 1990s and 2000s—as a starting point for exploring how photorealism in synthetic images has increasingly become a quantified and measurable thing. I also compare the forms of photorealism that emerge from the Cornell box experiments with contemporary techniques for evaluating photorealism in AI-generated images, assuming that such evaluative practices are instrumental in defining, maintaining, and bringing aesthetic conventions related to photorealism into being. This work reveals how notions of realism grounded in physical space have given way to a realism grounded in evaluations of the photographic and its particular visual aesthetics in terms of focus, detail, sharpness, contrast, color fidelity, etc. In so doing, the photographic has also increasingly come to stand in for, and function as the ground truth of, the real.
The Cornell Box Experiments and Sequential Photorealism
When the Cornell box was introduced in the mid-1980s, computer-generated images had begun to appear everywhere, from the offices of construction engineers and industrial designers to advertising bureaus, animation studios, and gaming developers. As a result of their increasingly widespread use, voices were raised regarding the need to construct benchmark tests for graphics systems and agree on formal attributes to consider in the evaluation of computer-generated images.7 Calls for more rigorous quality assessments came not least from the field of design and architecture, where a lack of “consistency, accuracy, robustness and reliability” in computer renderings was described as an urgent problem.8 Condemning how many computer graphics developers seemed to “feel satisfied if the results are ‘correct’ most of the time,” critics expressed “horror” at the “lack of rigor and formalism in many of the ad hoc approaches adopted by current work,” and addressed the need to theorize and standardize computer renderings, making sure their end results could be trusted.9
The Cornell box experiments spoke directly to these calls to validate, professionalize, and standardize computer renderings and offered a template for the evaluation of quality in computer-generated images.10 Throughout the 1990s and 2000s, each of the three assessment techniques proposed in the Cornell experiments—the analysis of data captured from a physical scene, the use of human subjects in assessing realism, and the application of algorithms to compare photographs and computer renderings—would undergo significant developments, yet maintain a central position in performance tests. With regard to the use of physical scenes as ground truth, a wide range of similar experiments were conducted in the following decades that successively moved beyond the original test cube and came to involve more and more complex environments, such as conference rooms equipped with chairs and desks,11 as well as atriums with staircases and corridors as seen in figure 5, where the top image (a) shows a photograph taken at the University of Aizu in Japan, and the bottom image (b) shows a computer rendering of the same scene.12 With time, a more and more advanced apparatus was also used to document the properties of physical scenes, including incident color meters and gonioreflectometers to capture color and lighting conditions.13
5. Above: photograph of the atrium at the University of Aizu, Japan; below: computer rendering of the same location, illustrations taken from Karol Myszkowski and Tosiyasu L. Kunii, “A Case Study Towards Validation of Global Illumination Algorithms: Progressive Hierarchical Radiosity with Clustering,” Visual Computer 16, no. 5 (2000): 284

© Karol Myszkowski and Tosiyasu L. Kunii
These intricate attempts to stage, measure, and record “the real” illustrate how the task of assessing and achieving realism in early computer graphics involved nothing less than documenting the entire physical world. In order for complex scenes—including everything from sand, skin, and plastic to wind, fog, and natural light—to be realistically modeled, the material properties of each of those elements first had to be measured and translated into a rendering algorithm. Accordingly, Ann-Sophie Lehmann notes that conference proceedings in the field of computer-generated imagery in the 1980s and 1990s “read like an unsorted encyclopedia of the visual properties of nearly everything, from specific bird’s feathers, silk textiles, wet hair, damaged car-lacquer, Chinese ink brush strokes, to corroded bronze or the fuzzy surface of leaves.”14 As Lehmann puts it, the computer graphics community had embarked on a quest “to render all materials, to collect them, to understand them, to simulate their aging and weathering, to reconstruct and recombine them.”15
The physical experiments enabled by the Cornell box were crucial in validating that such renderings were true to the laws of optics and physics. Yet it was also believed that realism had to be evaluated on a more aesthetic and perceptual level, which meant that human subjects were included in the evaluation process. Here, too, the Cornell box experiments would serve as a central reference point and source of inspiration in the coming decades.16 Today, human evaluations are still frequently described as the most efficient way of gauging photorealism in synthetic images,17 and a series of methods for refining and increasing the objectivity of such assessments have been proposed. For instance, efforts have been made to synthesize human evaluations of (photo)realism into metrics such as the Mean Opinion Score, originally developed in telecommunications engineering to measure the perceived quality of video, audio, or audiovisual content.18
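To give a sense of what such a quantification involves, the following minimal Python sketch (a hypothetical illustration, not a reconstruction of any study cited here) turns a set of human realism ratings on a five-point scale into a Mean Opinion Score with a rough confidence interval.

```python
import numpy as np

def mean_opinion_score(ratings):
    """Average human ratings (e.g., 1 = 'clearly synthetic' to
    5 = 'indistinguishable from a photograph') into a single score,
    with an approximate 95% confidence interval."""
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    half_width = 1.96 * r.std(ddof=1) / np.sqrt(len(r))
    return mos, (mos - half_width, mos + half_width)

# Hypothetical ratings given by twelve viewers to one synthetic image.
score, interval = mean_opinion_score([4, 5, 3, 4, 4, 5, 2, 4, 3, 4, 5, 4])
print(f"MOS = {score:.2f}, 95% CI = ({interval[0]:.2f}, {interval[1]:.2f})")
```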
The third and final assessment technique proposed by the Cornell box experiments—using algorithmic methods to compare photographs and computer renderings—is arguably the evaluation procedure that has transformed the most since the mid-1980s. In the years following 1984, the first metrics to become standard involved counting the errors, pixel by pixel, between a computer rendering and a reference photograph in order to estimate photorealistic qualities. This was done using the so-called Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). Commonly applied to assess the effects of image compression and processing, the two metrics work by comparing an original image with a distorted version of it: the MSE averages the squared differences between the two images across all pixels, and the PSNR relates the maximum possible pixel value to that error on a logarithmic scale.19 A high PSNR is taken to indicate low image distortion and therefore better image quality. When used to evaluate photorealism, a synthetic image is compared with a photograph.
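To make the two metrics concrete, here is a minimal Python sketch (using NumPy only; the toy images are hypothetical and not drawn from the studies discussed above) that computes the MSE between a reference image and a distorted copy of it, and derives the PSNR from that error.

```python
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Mean Squared Error: the average of squared per-pixel differences."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in decibels; higher values mean less distortion."""
    error = mse(reference, test)
    if error == 0:
        return float("inf")  # the two images are identical
    return 10.0 * np.log10((max_value ** 2) / error)

# Toy comparison between an 8-bit "photograph" and a noisy "rendering" of it.
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
render = np.clip(photo + rng.normal(0, 5, size=photo.shape), 0, 255).astype(np.uint8)
print(f"MSE:  {mse(photo, render):.2f}")
print(f"PSNR: {psnr(photo, render):.2f} dB")
```

When the same scheme is used to gauge photorealism, the reference array would hold the photograph of the physical scene and the test array its computer-rendered replica.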
PSNR and MSE come with a series of assumptions, including the idea that all image distortions can be treated as equal, disregarding how some distortions may be more visible and relevant than others.20 They also assume that pixel deviances always reduce image quality and thus fail to recognize that some can improve it (for example, this can be the case in image upsampling).21 In the mid-1990s, the pixel-error approach would therefore be questioned by a new wave of researchers proposing that image quality should instead be evaluated at a more structural level.22 Modeled on theories about the workings of the human sensory system and its ways of noticing overarching anomalies and irregularities in images, a series of new evaluation metrics were introduced. An early example is Scott Daly’s Visible Differences Predictor from 1993, which produced a “difference map” that pointed to regions in synthetic images that were identified as dissimilar to those of a ground-truth image.23 Later, perceptual techniques such as the Multi-Scale Structural Similarity (MS-SSIM) and the Structural Similarity Index (SSIM), first presented in 2003 and 2004 respectively,24 became standards. These models compare the luminance, contrast, and structure of two images and produce a final similarity score that enables quantitative comparisons between different rendering models.
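The structural logic of these metrics can be sketched in a few lines of Python. The function below is a simplified, whole-image version of the SSIM comparison, written as an illustrative assumption rather than a reproduction of the reference implementation; the published metric applies the same formula within local sliding windows and, in MS-SSIM, across several image scales before averaging.

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Compare two grayscale images in terms of luminance (means),
    contrast (variances), and structure (covariance). Returns a score
    between -1 and 1, where 1 indicates identical images."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # small constants that stabilize the ratios
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```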
To summarize, the Cornell box experiments thus outline how photorealism has largely been evaluated in computer-generated images, as different versions and combinations of its experimental setup—involving physical tests, human evaluations, and algorithmic comparisons between photographs and computer renderings—played a key role in the development of computer rendering models well into the 2010s.25 In the assessment methods discussed so far, a computer rendering was understood as realistic when it was possible to algorithmically determine that its visual properties closely corresponded to a specific physical scene, and photorealistic when a human or algorithm could identify a high level of similarity between a photograph and its computer-rendered replica. This represented the realism and photorealism of the one-on-one comparison, where the visual qualities of synthetic images were measured against singular reference images or physical scenes, using sequential evaluation algorithms.
The Inception Score, FID Metric, and Predictive Photorealism
In 2016—roughly two years after Ian Goodfellow et al. released their first iteration of a Generative Adversarial Network, jump-starting a new wave of synthetic image making rooted in deep learning26—Tim Salimans and colleagues proposed a new method for automating the evaluation of photorealism in AI-generated images.27 Their method was called the Inception score and addressed uncertainties in human evaluation metrics, which were thought to greatly depend on the setup of the assessment task, the “motivations” of annotators, and the fact that annotators’ opinions may change—for instance, as a result of being given feedback about their mistakes.28 The Inception score applied a pre-trained Inception model—a convolutional neural network originally trained for image-classification tasks—to a set of synthetic images. For each synthetic image, the Inception model was called upon to calculate the certainty with which it could identify different objects (ties, cats, potted plants, etc.) in the image. The Inception score then assumed that images with meaningful objects (i.e., objects that look convincing and are arranged realistically) should have a low entropy in their label distribution—i.e., the Inception model should be confident about the specific objects it finds. Furthermore, it assumed that a meaningful set of synthetic images should display a high diversity, or entropy, in the labels found across the set, indicating visually varied image outputs. Put differently, Salimans et al. described their Inception score as a way of measuring the “objectness” of a generated image—as a way, that is, of assessing if a synthetic image contains elements that an object recognition model would recognize as real and actual things.29 The assumption here was that the Inception model’s ways of finding and classifying objects in images closely correlate with human ways of reading images, and therefore with realism and photorealism. Unlike earlier approaches, this notion of photorealism was fundamentally modeled on another machine-learning model’s capacity to identify and label objects in images, paired with the assumption that “good” generative image models will produce images that are diverse—an evaluation parameter that made little sense when synthetic images were fully handcrafted during the 1980s and 1990s.
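Read as a computation, the Inception score condenses these two entropy assumptions into a single number: the average Kullback-Leibler divergence between each image's predicted label distribution and the marginal label distribution over the whole set, exponentiated. The sketch below is a simplified assumption of that calculation; it presumes the classifier's softmax outputs are already available as an array and omits practical details of the published procedure, such as splitting the image set into subsets and averaging the results.

```python
import numpy as np

def inception_score(pred_probs: np.ndarray, eps: float = 1e-12) -> float:
    """pred_probs: shape (num_images, num_classes), the softmax outputs of a
    pre-trained classifier (such as an Inception model) for a set of generated
    images. High scores require confident per-image predictions (low
    conditional entropy) and varied predictions across the whole set
    (high marginal entropy)."""
    p_y_given_x = np.clip(pred_probs, eps, 1.0)
    p_y = p_y_given_x.mean(axis=0, keepdims=True)  # marginal label distribution
    kl = np.sum(p_y_given_x * (np.log(p_y_given_x) - np.log(p_y)), axis=1)
    return float(np.exp(kl.mean()))
```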
Taking assessment techniques inspired by the Cornell box as a point of comparison, and following Lev Manovich, we can understand this shift as a move from “simulation” to “prediction,” as the making and evaluation of synthetic images no longer had to involve translating external realities into computational formulas but was rather a matter of creating and evaluating visual content based on statistical estimations.30 Taking our cue from Hannes Bajohr, we can also think of this shift as a move from “sequential” to “connectionist” image making and image assessments, as synthetic images are no longer only created and assessed by algorithms that follow explicit commands and execute orders in a deterministic, transparent, and consecutive fashion (as in earlier computer graphics), but are instead made and evaluated as a result of a deep-learning model’s capacity to implicitly analyze, and learn from, training data by processing information in parallel, interconnected, and opaque ways that are often difficult to fully explain or monitor—even for those designing the models.31
Today’s AI-powered text-to-image models establish a radical break with the visualization techniques developed in the computer graphics community from the 1950s onwards. AI-driven generative image models make no use of earlier computer graphics techniques such as 3D modeling, texture mapping, and shading methods. The path to achieving photorealism in AI-generated images does not, as Lukas R. A. Wilde describes it, lead “through simulated 3D space, but through a multi-dimensional latent space of linguistic categories.”32 AI-generated images are not modeled after humanly translated—and algorithmically specified—insights about the physics and geometrical qualities of objects, textures, or natural phenomena. The “thing” being modeled here is not the direct properties of three-dimensional objects in space but the binary information found in flat, pixelated images in training datasets—alongside language prompts that steer, shape, and narrow down visual possibilities.33
This fundamental shift in how synthetic images are made is also mirrored in evaluation metrics such as the Inception score. During the formative years between 2014 and 2023, when AI-powered generative image making surged, the most significant accompaniment to the Inception score was the Fréchet Inception Distance (FID). First introduced by Heusel et al. in 2017, the FID metric addressed the fact that the Inception score lacked comparisons to ground-truth datasets, which was seen as necessary for achieving robust image assessments.34 Rather than directly incorporating the Inception model’s object labels in its quality assessments, the FID was designed to operate in a feature space (a reduced representation of images) and used the Inception model to extract visual features from large datasets of “real” and synthetic images, features that were then compared through a Fréchet distance measurement.
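In computational terms, the FID fits a multivariate Gaussian to the Inception features of each dataset and measures the Fréchet distance between the two Gaussians, with lower values indicating that the synthetic images are statistically closer to the “real” ones. The sketch below assumes the two feature arrays have already been extracted from a pre-trained network and uses SciPy for the matrix square root; it is an illustrative outline, not the reference implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """feats_real, feats_fake: arrays of shape (num_images, feature_dim)
    holding feature vectors for the reference and generated image sets.
    Returns the Fréchet distance between Gaussians fitted to the two
    feature clouds (lower = more similar)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```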
Since 2016/17, the Inception score and FID metric have been applied in most large-scale studies that evaluate the performance of some of the world’s most widely used generative image models.35 They have also played a central role in performance demonstrations of groundbreaking text-to-image models such as Google’s Imagen,36 NVIDIA’s StyleGAN,37 and OpenAI’s diffusion models38—often alongside displays of results from human assessments of photorealism. In other words, it is not that the FID metric and Inception score have replaced human evaluations of photorealism, but they have emerged as a standard complement to manual assessment techniques. Furthermore, the boundaries between human and deep-learning-driven ways of assessing synthetic images are increasingly becoming blurred, with deep-learning models being used, for example, to rank/cluster synthetic images into “meaningful groups” before they are shown to humans, thus making it possible to scale manual workflows and evaluate multiple images simultaneously.39 Figure 6 provides an example of what this can look like and shows an interface where images of sea urchins produced by two different image generators (BigGAN to the left and BigGANdeep to the right) were scrutinized by humans, after being clustered by a third deep-learning technique. Reviewers were asked to glance over the images to determine which model had produced images with the highest image quality.
6. Interface used for evaluating AI-generated images, illustration taken from Yannick Assogba et al., “Large Scale Qualitative Evaluation of Generative Image Model Outputs,” preprint, arXiv, January 11, 2023, 7

Unlike quantitative photorealistic assessments in the 1980s, 1990s, and early 2000s—which functioned according to predesigned algorithmic formulas (such as error-based evaluation methods, or structural similarity measurements)—the new generation of automated evaluation techniques (found in FID and the Inception score) all involve the use of one AI model to evaluate the performance of another. Under this paradigm of photorealistic assessments, a computer rendering is understood as photorealistic when it contains visual elements and objects that are easily recognizable to either humans or a neural net. In the case of the latter, the notion of photorealism that emerges is that of the large-scale dataset, since the FID metric is only calculated on aggregated datasets consisting of at least 50,000 images. Thus, photorealism according to FID is only measured and defined at scale.
The Inception score and FID metric also diverge from the Cornell box method of combining evaluations of both physical and photographic realism. With the Inception score and FID metrics, there is only photorealism.
ImageNet and Inception Photorealism
So far, I have outlined how photorealism has increasingly become a quantifiable and measurable thing, following increased calls to validate and standardize computer graphics models in the late 1980s. During the 1990s and early 2000s, “high-quality” synthetic images were commonly identified by comparing visual similarities between a computer rendering, the physical and material environment it mimicked, and a photograph of a physical scene. These comparisons were made using linear, algorithmic methods, whose ways of processing data were as manually designed as the computer renderings they assessed. This was the photorealism of the unique test environment and/or sample photograph, as defined by transparent, sequential, and consistent algorithms. With the introduction of Generative Adversarial Networks in 2014, however, synthetic images found a new birthplace in the statistical analysis performed by deep neural nets. Henceforth, photorealistic assessment techniques became rooted in deep learning and uncoupled from one-on-one comparisons with physical, ground-truth scenes and/or singular photographs. Instead, photorealism came to be defined by comparing and identifying patterns in vast collections of images. This represents the photorealism of the aggregated image dataset, as measured and defined by opaque, parallel, and connectionist deep neural nets.
To be precise, one specific neural net has come to dominate definitions of synthetic photorealism since 2014, namely the Inception model, which constitutes the core of both the FID metric and Inception score. First introduced in 2015, the Inception model has been trained on ImageNet40—one of the world’s most (in)famous training datasets for machine learning.41 AI models trained on ImageNet have been heavily criticized for expressing a deeply Anglo- and Eurocentric version of visual culture that, amongst other things, heavily reproduces commercial and capitalist logics,42 while carrying deeply narrow and problematic notions of gender and ethnicity.43 These limitations are all reawakened when the FID metric and Inception score are put to work—an issue that is increasingly acknowledged within computer science communities. While it was long assumed that the FID metric did, indeed, succeed in measuring and comparing structural similarities across synthetic and non-synthetic datasets (as promised), recent research has demonstrated that it is, in fact, “most interested in a handful of features whose only purpose is to help with ImageNet classification, not on some careful analysis of the whole image.”44 For instance, the FID metric has been shown to exhibit a “fixation on the most prominent ImageNet classes” such as suits, seat belts, bow ties, and cowboy hats.45 Figure 7 shows a sample of what the FID metric “sees” when it evaluates realism in synthetic images, as identified by Tuomas Kynkäänniemi and colleagues.46 The yellow sections in the images indicate the regions that the FID considers most important, while blue indicates areas of lesser importance. The annotation in the top-left corner of each image shows the object category that the FID has most confidently identified in the bright-yellow sections. The authors note that annotations from the yellow sections correspond to object classes that are most readily represented in the ImageNet database, including object categories like poncho, sweatshirt, stethoscope, and feather boa.
7. Evaluation result based on the FID metric, illustration taken from Tuomas Kynkäänniemi et al., “The Role of ImageNet Classes in Fréchet Inception Distance,” preprint, arXiv, February 14, 2023, 17

What this means is that images that are ranked as highly “photorealistic” by the FID metric and Inception score will be those that contain visual elements that are widely represented in ImageNet and thus easily recognized by the Inception model. As generative image models are optimized to score well in FID and Inception score evaluations (again, currently the quantitative assessment models par excellence in visual generative AI), there is every reason to believe that ImageNet has thus far heavily shaped the visual aesthetics of generative AI models, despite the efforts made to use other (less biased) datasets during model training. To rephrase my previous statement, we can thus say that the photorealistic ideal that emerges from the Inception score and FID metric is that of ImageNet and Inception photorealism, which advances a deeply narrow version of photographic culture, while disqualifying a broad range of other photographic traditions and aesthetics from ever being considered as “photorealistic.”
As a result of recent critiques of the Inception score and FID metric, a series of alternative evaluation metrics have been proposed.47 Currently, however, it is uncertain which of these might become the new evaluation standard. Once again, this highlights how definitions of photorealism are never fixed and stable. The various evaluation methods described in this text have all played temporary yet influential roles in shaping and reshaping notions of realism and photorealism. These shifting ways of defining photorealism, I argue, are also key to understanding contemporary visual culture and the complex relationship between photography, algorithms, and realism that exists today—a relationship where algorithmic systems are increasingly assigned the task of distinguishing between what is, and looks, real and what does not.
I thank the editors of this special issue, especially Estelle Blaschke and Olivier Lugon, for their generous feedback on previous versions of this text. Mathias Johansson, Magnus Rust, and Erik Eggeling also provided invaluable input and conversation. The research was partly funded by Riksbankens Jubileumsfond (RJ), ref. P21-0012.
Notes
1 Cindy Goral et al., “Modeling the Interaction of Light between Diffuse Surfaces,” Computer Graphics 18, no. 3 (1984): 213–22.
2 Goral, “Modeling the Interaction,” 219.
3 Gary Meyer et al., “An Experimental Evaluation of Computer Graphics Imagery,” ACM Transactions on Graphics 5, no. 1 (1986): 30–50, here: 32. See also Michael F. Cohen and Donald P. Greenberg, “The Hemi-Cube, A Radiosity Solution for Complex Environment,” Computer Graphics 19, no. 3 (1985): 31–40.
4 Meyer et al., “An Experimental Evaluation,” 49.
5 Nelson Goodman, Languages of Art: An Approach to a Theory of Symbols (Indianapolis, IN: The Bobbs-Merrill Company, 1968).
6 See also Jens Schröter’s essay in this issue.
7 See “What Can We Learn by Benchmarking Graphics Systems?,” SIGGRAPH ’88: ACM SIGGRAPH 88 Panel Proceedings Computer Graphics, art. no. 3 (August 1988), https://doi.org/10.1145/1402242.1402248; “Aesthetics of Computer Graphics,” ACM SIGGRAPH Computer Graphics 19, no. 3 (1985), https://doi.org/10.1145/325334.325257.
8 “Pretty Pictures Aren’t So Pretty Anymore: A Call for Better Theoretical Foundations,” conference panel moderated by Rae A. Earnshaw, SIGGRAPH 1987: 14th Annual Conference on Computer Graphics and Interactive Techniques (1987), https://history.siggraph.org/learning/pretty-pictures-arent-so-pretty-anymore-a-call-for-better-theoretical-foundations-moderated-by-rae-a-earnshaw/
9 Ibid.
10 See http://www.graphics.cornell.edu/online/box/, (accessed January 23, 2024).
11 Christoph F. Reinhart and Oliver Walkenhorst, “Validation of Dynamic RADIANCE-Based Daylight Simulations for a Test Office with External Blinds,” Energy and Buildings 33, no. 7 (2001): 683–97.
12 Karol Myszkowski and Tosiyasu L. Kunii, “A Case Study towards Validation of Global Illumination Algorithms: Progressive Hierarchical Radiosity with Clustering,” The Visual Computer 16, no. 5 (2000): 271–88.
13 Christiane Ulbricht, Alexander Wilkie, and Werner Purgathofer, “Verification of Physically Based Rendering Algorithms,” Computer Graphics Forum 25, no. 2 (2006): 237–55.
14 Ann-Sophie Lehmann, “Taking the Lid Off the Utah Teapot: Towards a Material Analysis of Computer Graphics,” ZMK Zeitschrift Für Medien- Und Kulturforschung 3, no. 1 (2012): 169–84, here: 174.
15 Lehmann, “Taking the Lid Off,” 184.
16 See, for example, Ann McNamara, “Exploring Visual and Automatic Measures of Perceptual Fidelity in Real and Simulated Imagery,” ACM Transactions on Applied Perception 3, no. 3 (2006): 217–38.
17 Ali Borji, “Pros and Cons of GAN Evaluation Measures,” Computer Vision and Image Understanding 179 (2019): 41–65.
18 Martin Čadík et al., “New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts,” ACM Transactions on Graphics 31, no. 6 (2012): 1–10.
19 Zhou Wang et al., “Why Is Image Quality Assessment So Difficult?,” Proceedings of the IEEE International Conference on Acoustics Speech and Signal Processing (2002): 3313–16; Holly Rushmeier et al., “Comparing Real and Synthetic Images: Some Ideas about Metrics,” in Rendering Techniques ’95, ed. Patrick Hanrahan and Werner Purgathofer, Eurographics series (1995): 82–91.
20 Zhou Wang et al., “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Transactions on Image Processing 13, no. 4 (2004): 600–12, here: 602.
21 Wang, “Image Quality Assessment.”
22 See, for example, Rushmeier, “Comparing Real and Synthetic Images.”
23 Scott Daly, “The Visible Differences Predictor: An Algorithm for the Assessment of Image Fidelity,” in Digital Images and Human Vision, ed. Andrew B. Watson (Cambridge, MA: MIT Press, 1993), 179–206.
24 See, for example, Zhou Wang, Eero Simoncelli, and Alan Bovik, “Multi-scale Structural Similarity for Image Quality Assessment,” Proceedings of the 37th Asilomar Conference on Signals, Systems and Computers (2003): 1398–402; Wang, “Image Quality Assessment.”
25 Joss Whittle, Mark Jones, and Rafał Mantiuk, “Analysis of Reported Error in Monte Carlo Rendered Images,” The Visual Computer 33 (2017): 705–13; Giovani Balen Meneghel and Marcio Lobo Netto, “A Comparison of Global Illumination Methods Using Perceptual Quality Metrics,” 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images (2015): 33–40, https://doi.org/10.1109/SIBGRAPI.2015.52.
26 Ian Goodfellow et al., “Generative Adversarial Nets,” Advances in Neural Information Processing Systems 27 (NIPS 2014).
27 Tim Salimans et al., “Improved Techniques for Training GANs,” Advances in Neural Information Processing Systems 29 (NIPS 2016).
28 Salimans, “Improved Techniques,” 4.
29 Salimans, 5.
30 Manovich, “AI Image Media,” 35.
31 Bajohr, “Algorithmic Empathy.”
32 Lukas R. A. Wilde, “Generative Imagery as Media Form and Research Field: Introduction to a New Paradigm,” IMAGE 37, no. 1 (2023): 6–33, here: 18.
33 Wilde, “Generative Imagery.”
34 Martin Heusel et al., “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium,” Advances in Neural Information Processing Systems 30 (NIPS 2017), https://doi.org/10.48550/arXiv.1706.08500.
35 See, for example, Yannick Assogba, Adam Pearce, and Madison Elliot, “Large Scale Qualitative Evaluation of Generative Image Model Outputs,” arXiv.2301.04518 (2023), https://doi.org/10.48550/arXiv.2301.04518; Jorge Agnese et al., “A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis,” arXiv.1910.09399 [cs.CV] (2019), https://doi.org/10.48550/arXiv.1910.09399; Guillermo Iglesias, Edgar Talavera, and Alberto Díaz-Álvarez, “A Survey on GANs for Computer Vision: Recent Research, Analysis and Taxonomy,” Computer Science Review 48 (2023): 1–37.
36 Chitwan Saharia et al., “Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding,” Advances in Neural Information Processing Systems 35 (NeurIPS 2022).
37 Tero Karras et al., “Analyzing and Improving the Image Quality of StyleGAN,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): 8107–16.
38 Prafulla Dhariwal and Alexander Nichol, “Diffusion Models Beat GANs on Image Synthesis,” Advances in Neural Information Processing Systems 34 (NeurIPS 2021).
39 Assogba, Pearce, and Elliot, “Large Scale Qualitative Evaluation.”
40 Christian Szegedy et al., “Going Deeper with Convolutions,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015); see also Christian Szegedy et al. “Rethinking the Inception Architecture for Computer Vision,” arXiv:1512.00567 [cs.CV] (2015), https://doi.org/10.48550/arXiv.1512.00567.
41 See, for example, Emily Denton et al., “On the Genealogy of Machine Learning Datasets: A Critical History of ImageNet,” Big Data & Society 8, no. 2 (2021); Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Images in Machine Learning Training Sets,” AI & SOCIETY 36, no. 4 (2021): 1105–1116.
42 Gabriel Pereira and Bruno Moreschi, “Artificial Intelligence and Institutional Critique 2.0: Unexpected Ways of Seeing with Computer Vision,” AI & SOCIETY 36, no. 4 (2021): 1201–23.
43 Crawford and Paglen, “Excavating AI.”
44 Tuomas Kynkäänniemi et al., “The Role of ImageNet Classes in Fréchet Inception Distance,” arXiv:2203.06026 [cs.CV] (2023): 8, https://doi.org/10.48550/arXiv.2203.06026.
45 Kynkäänniemi, “The Role of ImageNet Classes,” 3.
46 Kynkäänniemi, 17.
47 See, for example, Mehdi S. M. Sajjadi et al., “Assessing Generative Models via Precision and Recall,” Advances in Neural Information Processing Systems 31 (NeurIPS 2018); Richard Zhang et al., “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric,” arXiv:1801.03924 [cs.CV] (2018), https://doi.org/10.48550/arXiv.1801.03924; Pum Jun Kim et al., “TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models,” Advances in Neural Information Processing Systems 36 (NeurIPS 2023).
References
Electronic reference
Maria Eriksson, “On the Measurement of Realism in Synthetic Images. From Cornell Box Experiments to ImageNet and Inception Photorealism”, Transbordeur [Online], 9 | 2025, Online since 26 February 2025, connection on 26 March 2025. URL: http://journals.openedition.org/transbordeur/2352; DOI: https://doi.org/10.4000/13dws
Copyright
The text only may be used under licence CC BY-NC-ND 4.0. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.