
True Believers: The Incredulity Hypothesis and the Enduring Legacy of the Obedience Experiments

John M. Doris, Laura Niemi and Edouard Machery
p. 53-89

Abstract

Many commentaries on the Milgram experiments endorse the Incredulity Hypothesis, which holds that Milgram’s participants generally did not believe they were administering real electric shocks. If the Incredulity Hypothesis were correct, one would have to conclude that the obedient subjects did not believe they were doing anything wrong, which would imply that Milgram failed to demonstrate alarming levels of destructive obedience. In this article, we show that the Incredulity Hypothesis is, in general, not accurate: it explains the behavior of participants in Milgram’s experiments and their many replications only with great difficulty, and it fits poorly with participants’ reports of their own experience.


Full text

1 The obedience experiments: Conventional Wisdom

  • 1 Scare quotes, for two reasons: (1) Milgram’s individual trials did not compare different condition (...)

1Stanley Milgram’s “obedience experiments”1 are among the most famous studies in social psychology, and perhaps, in all the human sciences [Burger 2014, 489], [Reicher, Haslam et al. 2012, 315]. While the experiments have always been controversial, lately, more than 60 years after they were conducted, their place in the history of psychology has been the object of pointed questions. In her provocative book on the Milgram experiments, Gina Perry [2013, 301] goes so far as to suggest that they “might not be good science.” If Perry is right, and Milgram’s studies don’t in fact demonstrate what countless social scientists and scholars have taken them to show, the implications would be momentous.

2The studies are deeply engrained in popular culture [Blass 2004, 259–268], so if they are bad science, a consequential piece of intellectual heritage reduces to folklore, and “that experiment where they shocked people” turns out to be the epistemological kin of bigfoot sightings. Implications for scholarly discourse would be equally substantial. The Milgram experiments have been deployed in service of:

  • Claims in history that “ordinary people” commit atrocities such as those perpetrated in the Holocaust [e.g., Browning 1992].2
  • Claims in psychology that situational variables matter more, and personality variables matter less, than both theory and common sense predict [e.g., Ross & Nisbett 2011].
  • Claims in philosophy that character and virtue are much less important than many ethicists suppose [e.g., Doris 2002, 2022], [Harman 1999], [cf. Miller & Chris 2014, 39–43].
  • 3 To the extent a single experimental paradigm can demonstrate any general proposition.
  • 4 While belief about wrongness is central to the discussion of the Milgram phenomenon, strictly spea (...)

3Even commentators who find such claims overstated have tended to agree that Milgram was able to demonstrate something extremely important [e.g., Badhwar 2009]; for the most part, “conventional wisdom” maintains that Milgram’s experiments are in fact historically consequential. Here, we shall characterize this Conventional Wisdom as follows. Milgram convincingly demonstrated3 the existence of destructive obedience: specifically, that a large percentage of people, under minimally coercive authoritative command, are willing to harm innocents, even while believing this harm to be morally objectionable.4 In what follows, we defend Conventional Wisdom.

2 Overturning Conventional Wisdom? The Incredulity Hypothesis

  • 5 We assume that the readers of this special issue on Milgram are familiar with the basic structure (...)

4We will not undertake a comprehensive scientific defense of Milgram; the voluminous literature makes such an exercise impracticable, even for book-length treatments. Instead, we will mainly focus on one familiar objection to Milgram, recently revived by Perry and colleagues [Perry 2013, Perry, Brannigan et al. 2020], to the effect that many of Milgram’s participants did not believe the shocks were real (as of course, they weren’t),5 and therefore did not believe they were behaving badly by administering them. If this interpretation, which we will call the Incredulity Hypothesis, is correct, Milgram’s cover story—that the shocks were real, and painful—flopped with his participants, and his experiments did not show that ordinarily decent people can be readily induced to destructive obedience.

5The Incredulity Hypothesis has been floated, in various permutations, since the earliest days of Milgram commentary [e.g., Holland 1968], [Mixon 1972], [Orne & Holland 1968], [Patten 1977a,b]. In one sense, its recent revival is unsurprising. The Hypothesis is a tempting way to explain behavior that appears inexplicable: it’s preposterous to think that ordinary folks would torture an innocent person at the request of a biology teacher sporting a lab coat. In another sense, the revival is surprising, because the Incredulity Hypothesis, while old, isn’t venerable: it has previously enjoyed rather limited traction. For example, Sabini & Silver [2005, 547]—critics of psychologists and philosophers citing Milgram’s findings on destructive obedience in an attempt to undermine traditional notions of personality and character—deemed what we are calling the Incredulity Hypothesis “wholly uncompelling” [cf. D. C. Russell 2009, 279], [N. Russell 2018, 120–126].

  • 6 Another classic-but-controversial study in social psychology that has recently been subject to deb (...)

6Then why again, now, the Incredulity Hypothesis? We suggest two reasons. First, the release of important material from Yale University’s Milgram archives has enabled illuminating re-analyses (https://archives.yale.edu/repositories/12/resources/4865). Second, the current intellectual zeitgeist has fostered fierce criticisms of psychological science, which have not only prompted much-needed methodological reform in psychology [Chambers 2017], but also whetted a seemingly insatiable appetite, in both scholarly and popular venues, for (sometimes justified) “take-down” pieces debunking classics of the field.6 The Incredulity Hypothesis is especially suited to popular science writing, as it requires no scientific training or prior knowledge of psychology to assert or understand: virtually any experiment that employs deception is vulnerable to questions about the credulousness of those purportedly deceived.

7Most broadly, the revival of the Incredulity Hypothesis may be viewed against a cultural backdrop of skepticism about science, and expertise more generally, which is embodied in various forms of “science denialism” (e.g., [Melo-Martín & Intemann 2018], [Jewett 2020], [Sinatra & Hofer 2021]). Science denialism may be global, involving a wholesale rejection of scientific authority, as in imputing a “liberal bias” to science. But denialism might also be local; podcasts extolling the disregard of mainstream biomedical science about vaccines (“do your own research”) presumably aren’t encouraging skepticism about the engineering making their podcasts possible. In any case, where science denialism prospers, it is unsurprising that psychology, which has long enjoyed less prestige than the natural sciences [Lilienfeld 2012], [Lykken 1991], would encounter heightened scrutiny. We are not saying that those critiquing particular studies in psychology, or even psychology more generally, are science denialists; indeed, when Perry says Milgram’s experiments “might not be good science,” it is perhaps implicit that there exists good psychological science, to which Milgram’s experiments compare unfavorably. We are saying that debunking efforts like the Incredulity Hypothesis are particularly likely to find an appreciative audience in an era of science denialism, a circumstance which helped enable the Hypothesis’ current revival.

3 Milgram’s findings aren’t like the flawed science of RepliGate

8In some ways psychology, and social psychology in particular, did much to bring trouble on itself. Spurred by Daryl Bem’s [2011] “demonstration” of Extrasensory Perception in a leading journal, psychology entered a period of intense self-scrutiny [Chambers 2017], [Machery & Doris forth], [Ritchie 2020]. What emerged were repeated failures of replication, where many celebrated studies could not be reproduced, or could be reproduced only sporadically, and the realization that many of these failures were sourced in “questionable research practices” being enshrined as standard research practices, as was the case with “p-hacking,” which generates false positives by capitalizing on chance. In the wake of this “replication crisis,” or “RepliGate” [Doris 2015, 44–49], [Machery & Doris 2017], it was inevitable, and altogether appropriate, that classics of social psychology like Milgram’s experiments would endure renewed scrutiny.

9While we welcome critical assessment of science in general, and psychology in particular, we advise against assimilating Milgram’s findings to Bem’s ESP debacle and other findings discredited by the replication crisis: there are important disanalogies between Milgram and the debunked science of RepliGate. Many of the prominently discredited studies were remarkably counterintuitive, and when substantial effect sizes were reported for such curiosities, people started to think something was up [e.g., Pashler, Coburn et al. 2012].

10The Milgram experiments are not at all like this. It’s manifest—and this is an important point to which we will return—that there was indeed something surprising in Milgram’s findings: the relative ease with which destructive obedience was induced. But there’s another sense in which Milgram’s findings, unlike some of the curiosities implicated in psychology’s recent troubles, are altogether expectable. Destructive obedience has been an enabler of totalitarian orders as long as there have been totalitarian orders; perhaps Milgram’s findings shouldn’t be surprising, to anyone who’s read a bit of history. Perhaps too, Milgram’s experiments shouldn’t be surprising to anyone who’s ever run afoul of a schoolyard bully and their sidekicks [Arpaly 2005, 644]—that’s pretty much everyone who’s ever been on a schoolyard. In short, Milgram’s experiments exhibit what is sometimes referred to as “face validity”: the findings appear plausible by the lights of “naïve observation” [Hardesty & Bearden 2004], [Mosier 1947]. The point can also be understood in terms of prior probabilities: while the prior probability of the hypothesis being supported for RepliGate’s problematic studies on psychological “curiosities” is comparatively low, that of Milgram’s findings is comparatively high.

4 The Incredulity Hypothesis’ explanatory burden

  • 7 We again note (cf. footnote 1) that we are neutral as to whether the behavior in question is best (...)

11If Conventional Wisdom is right, something similar is going on with laboratory obedience in Milgram and real-world behavior, which preserves the possibility that both might be explained by a comparatively parsimonious theory of destructive obedience.7 However, if the Incredulity Hypothesis is correct, there are two very different things going on: the evidently harmless experimental pretense of harming, and profoundly harmful real-world destructive obedience.

  • 8 For discussion of some complexities, see [Sober 2015].

12The world of empirical fact is messy, and we’d not want to lean too hard on considerations of parsimony.8 Still, the Incredulity Hypothesis explains away destructive obedience at the cost of inducing much psychological complexity that itself needs explaining. How does the Incredulity Hypothesis envisage the psychology of obedient participants? What did the participants think they were doing? Why did they carry on with a charade, in a manner that has seemed so convincingly real to so many observers? And why did so many of them seem so distressed while doing so? Among other concerns, it is readily supposed, contra the Incredulity Hypothesis, that participants’ incredulity should be associated with defiance, not obedience: participants who realize the experiment is a sham might be expected to simply call it a day and exit the situation, rather than continue wasting their time. Once the charade is unmasked, what credibility does the experiment have, and what authority has the experimenter?

  • 9 In a footnote to his initial publication, Milgram [1963, 377, note 4], reports that he ran a group (...)
  • 10 To be fair, we suspect that EDEs are more often alleged than empirically demonstrated [e.g., Mummo (...)

13Defenders of the Incredulity Hypothesis can offer explanations of participants’ mental states suited to their depiction of the phenomena: maybe incredulous participants complied because they were paid9 or because they understood what was expected of them and wanted to be cooperative—meaning their performance is to be understood as an experimenter demand effect, or EDE. After all, one might point out, it is commonly observed that EDEs undermine findings in the social sciences [e.g., Corneille & Lush 2023], [Orne 1962], [Rosenthal & Rubin 1978].10 Another explanation of participant mental states that is ostensibly compatible with the Incredulity Hypothesis and with participants’ intense manifestations of stress is that participants were not stressed by certainty that they were shocking the learner, but may instead have been stressed by uncertainty about what was really going on [Mixon 1972, 159–160], [cf. Perry 2013, 138]. Doubtless, being uncertain about whether one is causing suffering in another person could generate stress. But uncertainty is not the mental state the Incredulity Hypothesis should be understood to posit: if incredulity is supposed to assuage concerns about alarming levels of destructive obedience, it needs to be understood as a state of disbelief, not uncertainty (cf. our discussion at p. 23 below).

14In our estimation, participants’ physical symptoms, including sweating, trembling, stuttering, groaning, digging fingernails into flesh [Milgram 1963, 375–377], suggest that any stressogenic properties of EDEs or pangs of doubt are insufficient to explain the apparently extreme stress many participants exhibited. But we are not here insisting that these explanations, or other explanations of the kind, are non-starters. Instead, our point is that added explanatory complexity is necessary if the Incredulity Hypothesis is to be plausible; once a critic departs the relatively straightforward account where participants’ behavior and distress are understood in terms of credulity, they owe a well-developed hypothesis regarding the psychological processes underlying the incredulous pretense, with systematic supporting evidence.

5 Milgram’s data (1): Not p-hacked or fraudulent

  • 11 As appears fairly standard, the canonical version is for us Experiment 5, the “New Baseline” condi (...)

15There’s another way Milgram’s findings differ from many of the discredited results from psychology’s recent traumatic episodes: they are not due to a clever massaging of noisy data to extract a publishable, significant statistic. In fact, the finding in canonical versions of the experiment can be stated with a single figure accessible to anyone with elementary school math—the 65% obedience rate (the fraction of participants who were fully obedient).11 Milgram did employ statistical techniques in his work, for example in noting whether there were statistically significant differences in obedience rates between variations of the experiment, but for the basic finding, there’s nothing to torture or massage. We don’t claim that no questionable research practices can be found in Milgram’s work, just that problems like p-hacking don’t seem to be a serious issue.
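To make vivid just how elementary the arithmetic is, here is a minimal sketch; the counts (26 fully obedient participants out of 40) are the commonly cited figures for a canonical condition, supplied here purely for illustration rather than taken from the discussion above.

```python
# The headline statistic is a simple proportion: fully obedient participants / total.
# The counts below (26 of 40) are the commonly cited figures for a canonical
# condition, used purely for illustration.
fully_obedient = 26
total_participants = 40
print(f"Obedience rate: {fully_obedient / total_participants:.0%}")  # -> 65%
```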

16Nor do there seem to be serious questions of outright fraud with respect to the 65%; at least, such intimations are not prominent in the multitudes of often heated Milgram criticisms. Milgram worked in an era when standards of scientific reporting were comparatively relaxed, and his major statement on many issues was a book, Obedience to Authority [1974], intended for popular audiences. There’s little doubt that Milgram sometimes indulged in what might charitably be considered “spin”: perhaps most conspicuously, with respect to his rather sloppy debriefing procedures, and the trauma some participants suffered [Perry 2013, 65–94].

17With respect to the Incredulity Hypothesis, however, allegations of fraud would quickly seem strained, as can be seen by considering three possibilities:

  1. Milgram accurately believed most obedient participants were incredulous.
  2. Milgram inaccurately believed most obedient participants were credulous.
  3. Milgram accurately believed most obedient participants were credulous.
  • 12 Holland [1968, 65–68] reports that in his extension of Milgram (from unpublished doctoral research (...)

18If the Incredulity Hypothesis is right, it’s either (1) or (2). On (1), Milgram was an epic fraudster: he willfully, over a period of years, grossly misrepresented his central finding. Success at this undertaking would be a little miraculous: while there may be more fraud under the surface of science than one likes to think, it’s the famous, heavily scrutinized fraudulent findings that are most likely to be outed [Machery & Doris forth], and Milgram’s findings are nothing if not famous and heavily scrutinized. Notice, too, that for this Big Lie to fly, witnesses such as Milgram’s students and staff would have to be either in on it, or themselves expertly deceived, and the complicity or deception sustained for more than half a century. On option (2), it’s Milgram himself, not his participants, who was duped: the participants’ play acting was so convincing as to fool the person who designed, and observed, the experiments. And again, there were other observers to be fooled, so we’ve now got to impute extraordinary thespian abilities to the participants.12 Option (3), of course, is exactly as Conventional Wisdom has it, and entails rejecting intimations of fraud; it posits only that participants’ evident manifestations of distress are evidence of distress, caused by their believing they were hurting an innocent person.

6 Milgram’s data (2): The studies replicate

19A final way Milgram’s studies differ from the discredited studies of RepliGate is that they are generally not afflicted by replication failures.

20We need to proceed cautiously here. First, what counts as a replication, and what can be concluded from successful (and failed) replications, are considerably more complicated matters than is often supposed [e.g., Stroebe & Strack 2014], [Machery 2020, Machery & Doris forth]. Second, most of the replication attempts for Milgram were conducted before replication anxiety had reached today’s fever pitch, meaning less importance was placed on exact replications, so the replications exhibit varying degrees of fealty to the original, and often should be counted as conceptual replications, or extensions.

21Nevertheless, a clear picture emerges. There is strong evidence for replicability in Milgram’s own studies, in what we’re calling the “canonical” variants (note 11 above): Experiment 1 (65%), Experiment 2 (62.5%), Experiment 5 (65%), and Experiment 8 (65%). Not quite exact replications, but close: Experiments 2 and 5 add vocal learner protests to the original paradigm, and Experiment 8 extends the paradigm to a new participant population, women.

22Things are already looking pretty good. Furthermore, there is little reason, based on the many variations conducted by other researchers, to think Milgram’s own results were peculiar to his lab. Consider the following table:

Table 1: Chronological Sampling of Replications & Extensions of Milgram Experiments

Author(s) | Year | Location | Proportion obedient (a) | Note
Ancona & Pareyson | 1968 | Italy | 85% | Compare Milgram Exp. 3, proximity
Edwards et al. | 1969 | South Africa | 87.5% (b) | Compare Milgram Exp. 2, voice-feedback
Rosenhan | 1969 | USA | 85% | Compare Milgram Exp. 2, voice-feedback
Ring, Wallston, & Corey | 1970 | USA | 91% | Worried experimenter, all-female sample
Mantell | 1971 | Germany | 85% | Compare Milgram Exp. 2, voice-feedback
Powers & Geen | 1972 | USA | 83% | First observed obedient or disobedient “model”
Costanzo | 1976 | USA | 81% (b) | Doctoral dissertation; compare Milgram Exp. 1, remote
Burley & McGuinness | 1977 | UK | 50% | Underdescribed; compare Milgram Exp. 1, remote
Shanab & Yahya | 1977 | Jordan | 73% | Jordanian schoolchildren; compare Milgram Exp. 1, remote & 2, voice-feedback
Shanab & Yahya | 1978 | Jordan | 62.5% | University of Jordan students; compare Milgram Exp. 1, remote & 2, voice-feedback
Miranda et al. | 1981 | Spain | 50% | Compare Milgram Exp. 2, voice-feedback, and 3, proximity
Meeus & Raaijmakers | 1985 | Netherlands | 92% | Disparaging job applicant
Schurz | 1985 | Austria | 80% | Compare Milgram Exp. 1, remote
Slater et al. | 2006 | UK | 100%, 74% (c) | “Virtual reprise”
Burger | 2009 | USA | 70% (d) | Abbreviated paradigm
Dambrun & Vatine | 2010 | France | 53%, 13% (d) | “Immersive video environment”
Bocchiaro, Zimbardo, & Van Lange | 2012 | Amsterdam | 76.5% (e) | Convincing others to undergo traumatic sensory deprivation
Beauvois, Courbet, & Oberlé | 2012 | France | 81% | Quiz game scenario; compare Milgram Exp. 2, voice-feedback (f)
Zeigler-Hill et al. | 2013 | USA | 94% | Noise “blasts” for punishment
Doliński et al. | 2017 | Poland | 90% | Compare Milgram Exp. 2; Burger’s abbreviated format
(a) “Obedient” indicates compliance until the experimenter stopped the study.
(b) Reported in [Blass 2000, 58–59].
(c) “Hidden” victim and “visible” victim.
(d) The experiment went to the 150-volt level on the shock generator, instead of Milgram’s 450; this facilitated Institutional Review Board (IRB) approval, since most of the participants’ stress in Milgram occurred after this level. Arguably, since a majority of participants who went past 150 for Milgram continued to the end, the truncated version enables reasonably secure inference as to how participants would have performed in the full experiment [Miller 2009, 196], [cf. Packer 2008].
(e) For students asked how they would behave in this scenario, 3.6% said they would be obedient, and 18.8% thought the average student at their university would obey [Bocchiaro, Zimbardo et al. 2012].
(f) Beauvois, Courbet et al. [2012] report 3 other variations, all broadly supportive of Milgram’s original results.
  • 13 Unfortunately, there does not seem to be a relevant meta-analysis in the literature. This might be (...)

23Table 1 represents a convenience sample, not a selection suitable for a proper meta-analysis.13 We won’t vouch for the quality of every study, most of which were performed in an era lacking the methodological scrupulosity characteristic of the best (post-RepliGate) contemporary psychology. Across the many variations, rates of obedience vary, as they did across Milgram’s own variations. But this is exactly as it should be in light of sampling error and the systematic variation introduced by changes in experimental design and populations sampled. All in all, then, there is negligible reason to think the infamous 65% is a serious overestimate. The overall picture, in fact, is impressively uniform, especially considering the remarkable national diversity in this group of studies.

  • 14 We assume that the frequently observed higher rates of obedience do not count as replication failu (...)

24Missing are repeated instances of reasonably exact replications of Milgram with strikingly lower levels of obedience, the sort of circumstance that would justify claims of replication failure.14 We did not find them in the literature, and we believe Milgram’s critics experience similar difficulty; given the innumerable criticisms that have been leveled against Milgram over the years, we’d expect to see the existence of major replication failures written in lights, if such failures did abound. It is of course possible that such replication failures occurred, but were simply not submitted for publication or not published, given the longstanding bias against publishing replication attempts in psychology [e.g., Makel, Plucker et al. 2012]. But even so, after 60-odd years, such unpublished failures would perhaps have leaked into psychology’s oral tradition, had they occurred with any regularity.

  • 15 The study is a comparative outlier in another respect; gender differences often fail to appear in (...)

25Blass [1999, 58–59] identified three unpublished US doctoral dissertations with rates of obedience appreciably lower than the canonical 65% ([Bock 1972], 40%; [Podd 1970], 31%; [Shalala 1974], 30%) as well as an unpublished conference presentation reporting a similarly lower rate ([Rogers 1973], 37%). Let’s assume these studies are of publishable quality and are appropriately considered part of the replication record. There’s still a quite considerable amount of obedience (30%–40%), a circumstance completely unlike the “disappearing effects” that characterized the replication failures of RepliGate [e.g., Open Science Collaboration (OSC) 2015]. And similarly for what is probably the best-known published outlier, Kilham and Mann’s [1974] Australian variant, which found lower, but still disconcerting, rates of obedience: 40% for male participants and 16% for female participants in the “executant” role of actually administering the shocks, and 68% male obedience and 40% female obedience in the “transmitter” role of performing tasks subsidiary to the actual shocking.15 Obedience in Kilham and Mann is generally lower than Milgram’s 65%, perhaps enough lower to justify calling the study a failed replication; at the same time, the study’s procedures are different enough from the original to urge caution in doing so.

  • 16 Perry [2013, 276–281] discusses Burger’s [2009] replication attempt, which she correctly describes (...)
  • 17 Perry [2013, 266–267] accuses Milgram [1974, 171] of “deliberate obfuscation” in characterizing ra (...)

26Given the record, we’re unsure why Haslam, Loughnan et al. [2014] report that “[a]ttempts have been made to replicate [the Milgram study] with mixed results.” If “mixed” means “not completely invariant,” they’re of course right, but experimental results in the social and biological sciences should not be completely invariant—this is simply a consequence of sampling variation [e.g., Francis 2013]. In fact, the absence of such variation prompts suspicions about questionable research practices or fraud. But if “mixed” is meant to imply that conflicting results routinely appear and little can be confidently concluded—which is what we take to be the standard implication of “mixed” in critical discussions of scientific findings—the statement is, to put it generously, misleading. Indeed, Haslam and colleagues cite only two studies in support of their above-quoted assessment, and both are supportive of Milgram: Burger’s [2009] abbreviated version, and Slater et al.’s [2006] virtual reprise.16 If the record is in fact mixed, why not cite repeated replication failures in counterpoint to the repeated replication successes?17 The answer to “Does Milgram replicate?” is not “we can’t be sure,” but “almost certainly yes.”

  • 18 For similarly sanguine assessments of Milgram’s replicability, see [Brown 1986, 4], [Burger 2009], (...)

27We are mindful of the phenomenon of “zombie literatures,” viz. collections of studies that seem to provide robust evidence for the reality of an empirical phenomenon that in fact does not exist [Machery & Doris forth] [called “ghost literatures” in Machery 2021], and we can’t exclude with complete confidence the possibility that the collection of studies following on Milgram’s work is one of them. However, the uncertainty and the messiness of the social scientific record—and the record on Milgram doubtless exhibits some untidiness—seldom permit “mathematical certainty.” Usually, the best students of social science can do, in interpreting the record, is to “make the smart bet.” Betting on Milgram to replicate (at reasonable odds) is a smarter bet than most.18

28While the standards of contemporary university Institutional Review Boards (IRBs) preclude putting our confidence to the full test, you should be reassured by watching Derren Brown’s impressively close replication on reality TV (https://www.youtube.com/watch?v=y6GxIuljT3w),19 and also a reiteration by the French journalist Christophe Nick.20 We won’t lean too hard on reality television, although we note that Perry [2013, 15] herself suggests an analogy between Milgram and Allen Funt’s “Candid Camera,” of the 1950s and 1960s, where people were ludicrously pranked, and often amusingly slow to catch on. However, if there is a suggestive analogy here, it does not tell for the Incredulity Hypothesis. According to Funt’s son Peter, who produced a 2014 reprise of the show, “people are more easily fooled than ever”—“virtually everyone” on the reprise accepted ridiculous scenarios as veridical [Funt 2014]. Apparently, securing “buy in” for seemingly unbelievable circumstances is not so difficult as might be supposed.

29All in all, if there are issues with Milgram’s work, they aren’t akin to the issues that have beleaguered the flawed science of RepliGate. Of course, that Milgram’s results are plausible, not beset by questionable research practices, and probably replicate just fine does not get him out of the woods with respect to the Incredulity Hypothesis, for that objection takes issue not with the finding itself, but with the explanation of it.

7 Does the Incredulity Hypothesis generalize?

  • 21 As Mixon [1972, 154], another defender of the Incredulity Hypothesis, remarks of Holland’s report (...)

30Perhaps what replicates when Milgram replicates is not really destructive obedience, but is instead the charade the Incredulity Hypothesis intimates; participants didn’t believe they were administering painful shocks but only (for some reason) acted like they believed they were doing so. Detailed study-by-study discussion of all the non-Milgram variations, about which we often have a lot less information than we do for Milgram, would be helpful in exploring this possibility. But there are some studies that give us strong reason to think that the Incredulity Hypothesis should be generalized only with an overabundance of caution. Of his high obedience German variation, Mantell [1971, Abstract] observes that “[n]early all participants were completely convinced of the genuineness of the experiment.” Rosenhan [cited in Milgram 1974, 173] reports that independent raters judged that 60% of his participants “thoroughly accepted the authenticity of the experiment.” If you’re disinclined to trust the investigators, in these suspicious times, there’s Sheridan and King’s [1972] variant, where actual shocks were administered to a “cute, fluffy puppy,” which howled in pain from the punishment: 54% of male participants and 100% of female participants were obedient. Puppies don’t lie!—and there’s no cause to suggest participants thought they did. Nor could the Incredulity Hypothesis be sensibly applied to a little-known predecessor to Milgram: a demonstration by Landis [1924, 459], where 15 of 21 participants complied with the experimenter’s request to chop the head off a live white rat. An exception is Holland’s [1968] unpublished doctoral dissertation, an extension in which he reports rates of obedience comparable to Milgram’s (68% of 100 participants over three conditions), but also reports that observer ratings indicated participants’ levels of “suspiciousness” regarding the experimental deception were “universally quite high,” a report inconsistent with Milgram’s own assessments of participant credulity (see p. 72). However, Holland also reports that of the 20 subjects judged to be of low suspiciousness, 16 (80%) were fully obedient, an awkward circumstance for defenders of the Incredulity Hypothesis.21

31Given the totality of the record, prudence likely dictates that the Incredulity Hypothesis be limited to Milgram’s own studies. That there’s something wrong with Milgram’s own experiments is a less dramatic conclusion than the conclusion that Milgram-style destructive obedience has never been convincingly demonstrated in any lab, but given the outsize prominence of Milgram in the history of the social sciences, it is plenty dramatic.

8 How much obedience is enough (to be worrisome)?

32It’s fair to ask what amount of obedience is “very substantial” or would “give one pause.” One out of 20? One out of 10? One out of 5? The lesson would be very different if 1 out of 5 or 10 participants were obedient rather than 1 out of 2 or 2 out of 3: on the first, disturbing lesson, we would have learned that among us roam people willing to harm innocents when asked to do so; on the second, insulting lesson, it’s we who are willing to cause such harm. Milgram’s work is famous in part because the insulting lesson—the lesson intimated by Conventional Wisdom—is bound to elicit a “Me? No way!” incredulous gaze. This is also what makes the work infamous: “Yes, you, more likely than not, would do something truly crummy in some circumstances—and it might not take that much to get you to do it.”

33People’s tendency to self-enhancement, the maintaining of unrealistically positive self-assessments, has been extensively documented for many domains [Doris 2015, 92–97], [Dunning 2006], [Gilovich 1991], and it certainly extends to the moral domain, in what is sometimes termed moral grandiosity [Allison, Messick et al. 1989], [Epley & Dunning 2000], [Green, Sedikides et al. 2017], [Van Lange 1991]. We think that the umbrage directed at Milgram has much to do with the insulting lesson’s corrective to moral self-enhancement, perhaps even as much as it has to do with Milgram’s own ethical lapses. If Milgram’s work is scientifically discredited, our moral self-conceptions may remain intact; if the Conventional Wisdom is right, we’re not as virtuous as we’d like to think.

34As to the pointed question about what makes a “very substantial” rate of obedience, intuitions may reasonably diverge. But here’s a tentative, and not implausible, answer: anything at 40% or above enters the range of the insulting lesson, and strikes us as “very substantial.” If 40% or more of participants are willing to harm in a lab experiment, wouldn’t most of us do so under some megalomaniacal, tyrannical dictator? Obedience at the level of 20% is in the range of the disturbing lesson, and might be thought of as “substantial”: looks like too many of your neighbors are potential authoritarian functionaries, but there’s a good chance you might not be. Here then, is an intuitive gloss on the amount of obedience obtained across the range of Milgram replications and variations: “substantial” on the low end and “very substantial” on the high; quite consistent with what one would expect, if Milgram were on to something real.

9 The Incredulity Hypothesis and ethical criticisms of Milgram: Stress and pretense

  • 22 Milgram [1974, 41–43] depicts self-report data for 137 participants: a robust majority (from eyeba (...)

35Like the Incredulity Hypothesis, ethical criticisms of Milgram, the most serious of which concerns the evident suffering of the participants, have been around from the beginning of commentary [e.g., Baumrind 1964], [cf. Miller 2016, 188–189]. And here, of course, Milgram [1963] is hoist by his own petard, when he reports that participants were observed to “sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh” [Milgram 1963, 375], and quotes a witness describing a participant “reduced to a twitching, stuttering wreck [...] rapidly approaching a point of nervous collapse” [Milgram 1963, 377].22 Much commentary (including that of the first author here) has given Milgram a pass on the ethical criticism, assuming or asserting that the scientific value of the studies outweighs any ethical violations engendered by the participants’ distress. However, archival research and reporting, including interviews with Milgram’s participants [Perry 2013, esp. 65–124], make many ethical concerns seem sensible, and we’re now inclined to think that some ethical criticisms are irrefutable [cf. Russell 2018, 111–120], though we remain hesitant as to whether this mandates the judgment that it would have been better if the studies were never performed.

36However, our concern here is not the merit of the ethical criticism per se, but the relations among criticisms of Milgram’s experiments: the ethical criticism is difficult to maintain together with the Incredulity Hypothesis, as Perry [2013, 125–139] does. Consider Perry’s description of seeing Milgram’s “shock machine” in an exhibit:

For many of the subjects it was an instrument of torture. I thought of the hundreds of trembling hands that had pushed those levers and of the stuttering voices, the sweating palms, the uncanny laughter. These were the symptoms of distress and agitation that Milgram and others had observed from behind the mirror as they watched each person who sat in front of the long line of switches. [Perry 2013, 299–300]

37It is hard to see how this could be accurate, assuming the Incredulity Hypothesis is true. Are we to interpret this passage as depicting obedient people pretending to be extremely bothered, trembling, with sweaty palms?

  • 23 Instead of focusing on harm here, one could focus on negligence. Perhaps Milgram believed that his (...)

38Conventional Wisdom suggests a straightforward explanation of why ethical criticism is warranted: the experimental scenario created by Milgram caused participants to suffer, sometimes badly, by inducing them to do something they believed to be terribly wrong. Matters are more opaque on the Incredulity Hypothesis. For if the obedient participants saw through the ruse, and didn’t think they were actually shocking anybody, it’s obscure why they should be so upset [cf. Doris 2002, 43–45]. Actors who assault people on stage or screen may have aroused affect, but we don’t usually expect them to be traumatized by what they did, or suffer the moral injury [Griffin, Purcell et al. 2019] that afflicts real-life combatants.23

  • 24 Alternatively, participants might have been stressed merely because they were asked to commit harm (...)
  • 25 It might be that the doubters and the stressors were different groups. If, per the Incredulity Hyp (...)

39In response, it might be said that even pretending to harm another person is distressing. Indeed, Cushman, Gray et al. [2012] found that simulating harming someone—e.g., cutting the experimenter’s throat with a rubber knife or smashing “her” rubber hand—elicited an emotional reaction, even though participants knew full well that they were not harming the experimenter. What’s more, in some variants of Milgram’s experiments, participants appear to exhibit stress even when they know the scenario is not real (e.g., Slater, Antley et al. [2006] “virtual reprise”). Notably, Mixon [1972, 150] reported that participants in his role-playing variant, where subjects were explicitly told the scenario was pretend, exhibited signs of distress similar to Milgram’s participants: sighs, finger-tapping, trembling, gasps, and nervous laughter. Mixon’s description of these behaviors is not so detailed as one would wish, and we can’t confidently conclude that stress levels reached those found in Milgram. In any case, we don’t deny that simulating harm can be stressful, but asking people to simulate harm, knowing that they might be somewhat distressed as a result, is not to “torture” them, to use Perry’s word, and does not merit the type of moral condemnation Perry levels against Milgram.24 Which explains why Cushman and colleagues’ experiment was IRB approved.25

40The Incredulity Hypothesis also undercuts another point associated with ethical criticism of Milgram: the experimenter’s prodding the reluctant participants to continue was coercive, even abusive, which is plausibly regarded as another source of participants’ suffering. According to Perry, the experimenter’s prods, which Milgram [1963, 374] characterized as “not impolite,” were a kind of bullying; she insists that Milgram and associates contrived to place “enormous” pressure on participants in order to generate “attention-grabbing results” [Perry 2013, 46–49].

41Perry’s [2013, 115–123] and earlier [e.g., Darley 1995, 130] analyses indicate that John Williams, the high school biology teacher who played the “experimenter” role, deviated from the experiment’s standardized prompts in an attempt to maximize obedience, so it’s probably fair to conclude that the prompting participants experienced was sometimes more intense than Milgram’s published narrative suggests. If it’s true this pressure was “enormous,” it not only casts Milgram and his crew as notoriety-craving bullies, but it also has theoretical importance, since it suggests that it’s harder than Milgram and his followers contend to induce destructive obedience. We doubt that the pressure really was “enormous”—it emphatically wasn’t enormous compared to the totalitarian pressures that generate real world destructive obedience—but the present point is that it’s unclear why maximally intense pressure would be needed, if participants, as the Incredulity Hypothesis maintains, did not believe they were actually administering painful shocks. To illustrate: while it presumably should take enormous pressure to induce someone to put their hand on a red-hot stove, it shouldn’t take much at all to induce them to put their hand on a stove they don’t believe is really hot.

42One might insist that the experimenter exerted enormous pressure because he didn’t notice participants were doubtful, or because he was overly zealous, or perhaps because he was trying to convince the disobedient participants, or because incredulous participants find pretend harming affectively aversive [Cushman, Gray et al. 2012]. These are possible explanations, but they are not very compelling. Surely the experimenter would have realized many participants were just playing along and that pressure was not needed, or he would have limited his pressure to the recalcitrant participants; note that Cushman and colleagues did not need extreme pressure to induce pretend harming. Then the best explanation of the expedients Milgram and associates required to secure obedience (as documented by Perry herself) is that participants actually believed they were shocking someone, and didn’t want to do it, just as Conventional Wisdom supposes.

43Here emerges a dilemma: (1) assert the Incredulity Hypothesis, and deflate the ethical criticism, or (2) assert the ethical criticism, and deflate the Incredulity Hypothesis. We favor (2): Milgram’s participants (or at least many of them) genuinely suffered, and he and his associates deserve ethical criticism for causing that suffering. To fully motivate this decision, we turn to direct evidence for the Incredulity Hypothesis itself.

10 Evidence for the Incredulity Hypothesis? Self-report data

44It’s understandable to find it incredible that so many—or any—participants would be compliant in Milgram’s experiments. Indeed, many commentators sympathetic to Milgram emphasize the striking disproportion between minimally coercive pressure and wildly counternormative behavior [e.g., Doris 2022, 26, 200]. On the Incredulity Hypothesis, the appearance of disproportion gets explained away: since participants knew the shocks were not real, they didn’t believe what they were doing was wrong, so it’s not surprising it didn’t take much to get them to do it. But endorsing this explaining away requires assuming another explanatory burden: explaining why the participants appeared so stressed, if they didn’t think they were causing anyone pain. Conversely, Conventional Wisdom offers a more intuitive explanation: participants were very stressed because they believed themselves to be doing something very wrong, and didn’t want to do it. However, determining the best explanation when engaging in abductive inference allows degrees of freedom that are often constrained only by taste, and we don’t expect these arguments to be conclusive.

45The problem of explaining participants’ stress involves indirect argument, where the observed stress is to be explained by inference to the mental states—e.g., degrees of credulity—associated with that stress. However, there is also the possibility of more direct argument, where inferences about what participants believed are based on what the participants themselves reported about what they believed. While this inference appears more direct, it is hardly trivial, because self-report data is a notoriously unreliable guide to actual behavior and functioning (see [Gosling, John et al. 1998], [Holzman & Kagan 1995, 5], [Mischel 1968, 25], [Pervin 1996, 315], [Ross & Nisbett 2011]), probably especially so in morally loaded contexts [Graham 2014]. Still, if it is irresponsible to treat self-report data incautiously, it is also irresponsible to ignore what can be an important source of evidence [e.g., Chan 2008]. To fully assess the Incredulity Hypothesis, we must look to what the participants themselves said.

46Milgram addressed this issue with a post-experimental questionnaire sent to former participants. See results in Table 2.

Table 2: Responses to Question on Belief (Based on Milgram’s original Table 7, [1974, 172])

During the Experiment | Defiant | Obedient | All subjects
(1) I fully believed the learner was getting painful shocks. | 62.5% (230) | 47.9% (139) | 56.1% (369)
(2) Although I had some doubts, I believed the learner was probably getting the shocks. | 22.6% (83) | 25.9% (75) | 24.0% (158)
(3) I just wasn’t sure whether the learner was getting the shocks or not. | 6.0% (22) | 6.2% (18) | 6.1% (40)
(4) Although I had some doubts, I thought the learner was probably not getting the shocks. | 7.6% (28) | 16.2% (47) | 11.4% (75)
(5) I was certain the learner was not getting the shocks. | 1.4% (5) | 3.8% (11) | 2.4% (16)
Note. This table is based on an unpublished study of Milgram’s post-experimental questionnaire, which was conducted by his research assistant, Taketo Murata [see Perry, Brannigan et al. 2020, for historical context]. Milgram’s presentation of these results differs from Murata’s: Murata, but not Milgram, dichotomized credulity; Murata reported the mean number of shocks, while Milgram reported only defiance (stopping before the end of the experiment) vs. obedience. The percentages of obedience here do not correspond to our “canonical” 65%, since the respondents are drawn from various experimental conditions that had different rates of obedience.

47To fully assess the Incredulity Hypothesis, we need to address the question, “Did participants report believing the learner was actually getting shocks?” Table 2 allows us to respond yes—not every individual participant, of course, but a very strong majority. As we can see, 85% of the defiant participants and 74% of the obedient participants either fully believed the learner was getting shocks or believed the learner was probably getting shocks. Indeed, the response reported most often, by far, was the highest level of belief (response 1), and the next most often reported was the second-highest level of belief (response 2). This interpretation aligns with Milgram’s [1974, 172] own, to the effect that three quarters (74%) of obedient participants—those responding with 1 and 2—acted with the belief that they were, in fact, administering painful shocks.
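As a quick check on these figures, the following sketch recomputes the “believed or probably believed” shares directly from the counts in Table 2; the grouping of responses 1 and 2 follows the text above.

```python
# Recompute the "fully or probably believed" shares from the Table 2 counts.
# Keys are the questionnaire response options (1-5); values are participant counts.
defiant_counts = {1: 230, 2: 83, 3: 22, 4: 28, 5: 5}
obedient_counts = {1: 139, 2: 75, 3: 18, 4: 47, 5: 11}

def share_believing(counts):
    """Share answering (1) 'fully believed' or (2) 'probably getting the shocks'."""
    return (counts[1] + counts[2]) / sum(counts.values())

print(f"Defiant participants:  {share_believing(defiant_counts):.0%}")   # -> 85%
print(f"Obedient participants: {share_believing(obedient_counts):.0%}")  # -> 74%
```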

  • 26 In her book, Perry contends “only half of the people who undertook the experiment fully believed i (...)

48To Perry’s thinking, it’s only those obedient subjects responding with 1—139 of the 658 participants, or 21%—who were credulously obedient.26 Even if one accepts limiting the category of credulously obedient participants to those responding with 1 (“fully”), 21% is still a disconcerting rate of destructive obedience; what we termed “substantial” above. But in fact, there are good reasons to reject the restriction to response 1. As we’ve said, it’s a truism of the social sciences that self-report data can be unreliable. An especially striking example is Johansson and colleagues’ research on choice blindness: participants making a choice between two options fail to notice a mismatch between their actual choice and the contrary choice an experimenter informs them they made, and then offer fluent rationalizations for having “chosen” the option they didn’t choose (e.g., [Johansson, Hall et al. 2005, 2006], [Bortolotti & Sullivan-Bissett 2021], [Hall, Johansson et al. 2010, 2012, Hall, Strandberg et al. 2013]). People who have done something horrible, such as shocking an innocent, would very likely be drawn (consciously or unconsciously) to a rationalization that would mitigate their guilt, such as claiming they didn’t think anyone was being hurt (as Milgram [1974, 172] put it, taking an “easy out”). So, there’s good reason to think that the number of “fully” credulous obedients exceeded the 139 who responded with (1) in Table 2.

49Furthermore, the restriction to fully believing participants is, in a crucial respect, misleading. In starkest terms, the question about the veracity of the Milgram experiments is moral: was the participants’ behavior something that we (and the participants themselves) ought to view with moral concern? To the extent that it is, then it seems that the moral concern ought to extend past fully believing. One should care about destructive obedience when the perpetrator is wavering in their belief about harmfulness, from “probably,” “not sure,” and even to “probably not.” That is, moral concern pertains to the great majority of Milgram’s respondents, since only 2.4% report being “certain” the shocks were not real. To sharpen the point, consider: I fully believed another hunter was standing in the direction I fired, and I fired anyway. The outrageousness is not much ameliorated with the substitution of “wasn’t sure” or “probably.” The moral concern directed at any partially believing participant involves, at least in part, something like that involved in concern about negligence: the individual didn’t take reasonable care to make sure they were not torturing another human being. Given the learner’s protests, and perhaps even without them, taking the experimenter’s word for it is not the exercise of such care.

50So what does Table 2 really tell us about the Incredulity Hypothesis? We reanalyzed the data ourselves, which are plotted in Figure 1.

Figure 1: Defiant and obedient participants’ answers to the question on belief (y-axis, 1 “fully believing,” 5 “certain” incredulity; triangle: mean credulity)

  • 27 For the interpretation of effect sizes, see [Doris 2022, 215 ff.], [Machery & Doris 2017].

51Consistent with the Incredulity Hypothesis, there is a small negative correlation between obedience and believing that one is actually shocking an innocent person (biserial correlation = −.17, p < .001). However, despite this grain of truth, the difference in credulity between defiant and obedient participants is fairly small (.39 on a 5-point scale, corresponding to an effect size d = .3, a small to medium effect on Cohen’s [1988] venerable classification).27 That is, defiant and obedient participants, in the aggregate, had a very similar interpretation of what was going on in the experiment. This suggests that the Incredulity Hypothesis can only be a small part of the explanation of Milgram’s findings. Moreover, it could hardly be otherwise, given the “Lotta-Little Principle”: complex psychological outcomes, which behaviors in Milgram’s studies surely were [Miller 2013, 266], will typically be determined by multiple variables, none of which will reach impressively large effect sizes [Doris 2022, 172–173, 231].
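A rough version of this aggregate picture can be reconstructed directly from the counts in Table 2. The sketch below treats the five response options as a 1–5 credulity scale (1 = full belief, 5 = certain disbelief) and computes the group means and a point-biserial correlation; it is an illustrative approximation rather than a reproduction of our exact analysis (the figure cited above is a biserial correlation), but it recovers values close to those just reported.

```python
# Approximate reconstruction of the aggregate analysis from the Table 2 counts.
# Responses are scored 1-5 (1 = full belief, 5 = certain disbelief). Illustration only;
# the correlation reported in the text is a biserial correlation, which is computed
# somewhat differently from the point-biserial used here.
from statistics import fmean, pstdev

defiant_counts = {1: 230, 2: 83, 3: 22, 4: 28, 5: 5}
obedient_counts = {1: 139, 2: 75, 3: 18, 4: 47, 5: 11}

def expand(counts):
    """Turn a {score: count} table into a flat list of individual scores."""
    return [score for score, n in counts.items() for _ in range(n)]

defiant, obedient = expand(defiant_counts), expand(obedient_counts)
diff = fmean(obedient) - fmean(defiant)
print(f"Mean difference on the 1-5 scale: {diff:.2f}")  # -> 0.39

# Point-biserial correlation between group membership (0 = defiant, 1 = obedient)
# and the credulity score.
scores = defiant + obedient
p_obedient = len(obedient) / len(scores)
r_pb = diff * (p_obedient * (1 - p_obedient)) ** 0.5 / pstdev(scores)
print(f"Point-biserial correlation: {r_pb:.2f}")  # -> 0.17 in magnitude
```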

52Perry, Brannigan et al. [2020] also analyzed the data presented in Table 2 above. Their first logistic regression (Model 1, p. 98) dichotomized credulity (full belief = responses 1 and 2 to Milgram’s post-experimental questionnaire; disbelief = responses 4 and 5; response 3 ignored): they found “an odds ratio of 2.57, suggesting that those who had a high level of belief that the shocks were real were 2.57 times more likely to be defiant than those who had a low level of belief.” Their second logistic regression (Model 2, p. 98) did not dichotomize belief, but treated the five degrees of belief as an ordinal variable, and resulted in an odds ratio between full belief (response 1) and full disbelief (response 5) equal to 3.7. Based on this, they conclude “this means that variations in dramaturgical credibility resulted in dramatic variations in the levels of obedience and defiance” [Perry, Brannigan et al. 2020, 98].

  • 28 If the risk of catching a disease is .00000001% for group A and .00000002% for group B, the relati (...)
  • 29 If 66% of credulous participants and 50% of skeptical participants are defiant, the odds ratio is  (...)

53Odds ratios are far from an optimal way of communicating effect sizes, and often lead people to misjudge the magnitude of effects [e.g., Gigerenzer, Wegwarth et al. 2010]. Moreover, Perry and colleagues confuse relative risk and odds ratio in their verbal gloss of odds ratios: observing an odds ratio of 2.57 is not the same as showing that skeptical participants were 2.57 times more likely to disobey—the latter is a relative risk. This confusion matters, because while a relative risk equal to 2.57 can be large (although it need not be28), a 2.57 odds ratio is far less impressive.29
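To make the distinction concrete, both quantities can be computed from the counts in Table 2, using the dichotomization just described (belief = responses 1–2, disbelief = responses 4–5, response 3 set aside). The sketch below is an illustrative calculation of our own: it reproduces the 2.57 odds ratio, while the corresponding relative risk is considerably more modest.

```python
# Odds ratio vs. relative risk for defiance, computed from the Table 2 counts,
# with belief = responses 1-2, disbelief = responses 4-5, and response 3 set aside.
defiant_belief, defiant_disbelief = 230 + 83, 28 + 5      # 313, 33
obedient_belief, obedient_disbelief = 139 + 75, 47 + 11   # 214, 58

# Probability of defiance within each belief group.
p_defiance_belief = defiant_belief / (defiant_belief + obedient_belief)
p_defiance_disbelief = defiant_disbelief / (defiant_disbelief + obedient_disbelief)

relative_risk = p_defiance_belief / p_defiance_disbelief
odds_ratio = (p_defiance_belief / (1 - p_defiance_belief)) / (
    p_defiance_disbelief / (1 - p_defiance_disbelief)
)

print(f"Relative risk of defiance (belief vs. disbelief): {relative_risk:.2f}")  # -> 1.64
print(f"Odds ratio: {odds_ratio:.2f}")                                           # -> 2.57
```

On these figures, believing participants were about 1.6 times as likely as disbelieving ones to defy the experimenter, even though the odds ratio is 2.57.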

  • 30 Here Perry et al. rely on Murata’s dichotomization (which, confusingly, is not the same as the dic (...)

54More important, ultimately, is that Perry et al.’s analyses are consistent with what we suggested above. In their reanalysis of Murata’s data, Perry, Brannigan et al. [2020, 95, Table 1] report the mean shock levels for what they term fully believing (m = 19.05; n = 367) and not fully believing (m = 21.73; n = 289) participants:30 a difference of 2.68 out of 30 possible shocks, or 8.9%. Perry, Brannigan et al. [2020, 99] say that this difference, while statistically significant, is “relatively” small, but urge that we attend to the odds ratio, which they erroneously term a “very large” effect.
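The practical size of this difference is easy to see in terms of the generator’s 30-switch scale; the means below are those just cited from Perry, Brannigan et al. [2020].

```python
# How big is the reported difference in mean shock level, in practical terms?
# Means are those reported by Perry, Brannigan et al. [2020]; 30 is the number
# of switches on the shock generator.
mean_fully_believing = 19.05
mean_not_fully_believing = 21.73
max_level = 30

difference = mean_not_fully_believing - mean_fully_believing
print(f"Difference: {difference:.2f} levels ({difference / max_level:.1%} of the scale)")  # -> 8.9%
print(f"Fully believing, share of maximum:     {mean_fully_believing / max_level:.1%}")      # -> ~63.5%
print(f"Not fully believing, share of maximum: {mean_not_fully_believing / max_level:.1%}")  # -> ~72.4%
```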

55We’re not inclined to ignore how small the difference in means is: while it may be statistically significant, it is not, as is sometimes said, substantively significant—a difference that much matters in practical terms. Returning once more to the moral perspective, note that believers and unbelievers alike went, on average, about two-thirds of the way to the end of the shock board, into the region of the ominous “Intense Shock” and the alarming “Extreme Intensity Shock” on the generator’s control panel. So there is certainly a disconcerting rate of credulous obedience. While we have no firm opinion on the contested question of whether individual participants were morally responsible for what they did [Miller 2016, 200–215], we note that if one is inclined to blame the participants, they are probably unlikely to see the protestation “we were 8.9% less obedient than the incredulous group” as a convincing excuse.

56Perry, Brannigan et al. [2020] augment their case with another archival study, Hollander & Turowetz’s [2017] analysis of participant interviews recorded immediately after their participation in Milgram’s experiment concluded. Hollander & Turowetz [2017, 661] determined that 72% (33 of 46) of obedient participants in their sample explained their behavior by reference to beliefs the investigators coded as “Learner not being harmed,” while 11% (5 of 45) of defiant participants did so. Again, this apparently indicates that incredulity was implicated in obedience. But according to Hollander & Turowetz [2017, 666–668], only “a handful” (curiously, for an empirical report, Hollander & Turowetz do not report the exact figure) of these explanations referenced doubting the experimental cover story. If we’re reading “a handful” correctly, only a small fraction of explanations clearly indicate participants seeing through the experimental deception. The other two glosses of not being harmed reported by Hollander & Turowetz—trusting the experimenter’s assurances and suspecting the learner was “overreacting”—are compatible with thinking the learner was receiving painful shocks.

57Indeed, Hollander & Turowetz [2017, 663] note that participants may have distinguished harm and pain, as indeed the experimenter’s instructions invited them to do [e.g., Milgram 1963, 374]. This distinction is elided by Perry et al.’s [2020] summary of the study: “[t]he most prevalent accounts were those that doubted that any suffering was actually occurring” [2020, 91, emphasis added]. It is somewhat plausible, given the experimenter’s assurance to that effect, to think no harm was occurring; it is much less plausible to think no suffering was occurring, given the learner’s protests, and the teacher’s own experience of a painful sample shock ([Milgram 1974, 20]; this part of the procedure in Brown’s TV replication is illuminating). The teacher had good reason to think the learner was experiencing pain, even if they could talk themselves out of the learner being harmed.

58Once again, we cannot rule out the possibility that “not being harmed” explanations were self-serving rationalizations by people who believed they did something deplorable. Hollander & Turowetz [2017, 659–660] doubt that participants believed that they had behaved improperly, and contend that even if participants did condemn their own conduct, they did not have strong reason to subsequently minimize it (unlike, say, criminal defendants). But as already noted, there’s extensive evidence that self-enhancing rationalizations are ubiquitous, and they are often produced effortlessly in real-time conversation [Doris 2015, 138–143]. So the presumption that at least some participants were rationalizing in post-experimental interviews seems sensible, given what they had just done.

59There’s another piece of potentially relevant evidence here, which seems to be less frequently discussed than Milgram’s credulity data. Milgram [1974, 171–172; Table 6] also reports “Subject estimates of pain felt by victim,” a 14-point scale with an “extremely painful” zone at the high end, with responses from participants in 9 experimental conditions. The average response for all respondents was 12.10, which certainly makes it look as though participants believed suffering occurred. Moreover, the ratings of defiant participants, 12.07, and obedient participants, 12.20, are essentially the same. It’s peculiar that the ratings of the obedient participants aren’t substantially lower, if the Incredulity Hypothesis plays an important role in explaining behavior in Milgram’s experiments. The most reasonable thing to think about shocks that are not real, we think, is that they are also not painful, and we suppose Milgram’s participants had a similar view. It’s odd, then, that the difference between defiant and obedient participants is here negligible: these data seem to provide no support for the Incredulity Hypothesis, and in fact give some reason to question it.

60The evidence we have been discussing involves retrospective reports: what participants said about their experience, after the fact. If there were substantial numbers of incredulous subjects who realized they’d been duped, shouldn’t the records of the sessions themselves contain remarks like, “OK, buddy, I can see you’re pulling my leg; let’s knock off the pretending and call it a day,” or “I know this is fake. I’m outta here!”? Raphaël Künstler, the editor of this special issue, posed just such a question to Perry in a recorded online seminar. Perry replied that she had reviewed “a lot” of audio recordings of experimental trials, and could not recall an instance where a participant expressed incredulity during the experiment.31 But if the Incredulity Hypothesis were widely applicable, we should certainly expect considerable incredulity to have surfaced during the experiments. Nestar Russell’s [2018, 123–124] work on the Milgram archives uncovered one participant (in the unpublished Relationship condition) who appeared to be confident that the learner was not actually getting shocked, but this participant was disobedient. Of course, an incredulous and disobedient participant does not support the Incredulity Hypothesis, which associates incredulity with obedience.32

61To be fair, it wouldn’t surprise us to learn that some obedient participants did express doubt during the experiment; there were a lot of participants, and the records are incomplete. Our point is, once again, that the Incredulity Hypothesis needs to be augmented, this time with an explanation of why the incredulous subjects, so far as we now know, overwhelmingly kept their peace during the experiment, especially when expressing doubt could be viewed as an avenue of escape from a distasteful or distressing situation. Once again, Conventional Wisdom has an easier road: incredulity was generally not expressed during the experiment because it was not experienced during the experiment.

62We do not claim that there’s nothing to the Incredulity Hypothesis. The analyses above show that incredulity and obedience are not strongly associated, but they are associated. Milgram [1974, 172] himself noted that some 25% of participants harbored substantial doubts, so it has always been acknowledged that the experimental deception did not work equally well on everyone, and it would be foolish to deny that these differences could be associated with differences in obedience. However, recognizing that some participants experienced doubt should not efface ethical concern about doubt being resolved in favor of the experimenter, rather than the learner. Nor should it cause us to lose sight of the most consequential phenomenon: many credulous participants engaged in destructive obedience, just as Conventional Wisdom supposes.

63Nevertheless, Perry’s group [2020, 16] concludes that “the key finding of this study, that obedience is not as unreasoning and automatic as Milgram would have us believe, ought to encourage significant revisions in fair-minded textbooks and other historical accounts of the development of social psychology.” This reflects a serious misframing of the issue; so too does their characterization of Milgram’s commentators as asserting that the studies showed “humanity’s slavish obedience to authority” [2020, 89].33 As has been noted for many years, the Milgram studies demonstrate not “unreasoning and automatic” obedience, but unexpectedly—and alarmingly—high levels of hesitant and conflicted obedience [Badhwar 2009, 281], [Doris 2002, 42], [Russell 2009, 275]. Textbook authors and historians may continue to repeat that account with an altogether clear conscience.

11 Conclusion: Incredulity, credulity, and where the (theoretical) duck sits

64We’ve been spending a lot of time on the nits and grits of Milgramology, and we fear that by now, even the most compliant readers will be having doubts about going all the way. We finish, then, by zooming out to the big picture.

65The unavoidable conclusion: a substantial, or very substantial, percentage of Milgram’s participants credulously engaged in destructive obedience, just as Conventional Wisdom asserts. This conclusion remains unassailable, even if we stipulate a very generous concession to the Incredulity Hypothesis: that half of the obedient participants were disbelieving, and just playing along. The amended result, 32.5%, is not as chilling as Milgram’s (and his many replicators’) 65% (or more), but it is still plenty chilling. For the reasons we’ve described, this concession is overly generous, but the result of the exercise vindicates Conventional Wisdom, and indicates that the Incredulity Hypothesis cannot be used to “debunk” Milgram.
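The arithmetic behind this stipulation is elementary (taking the canonical 65% obedience rate, cf. note 11, and halving it per the concession):

\[
65\% \times \tfrac{1}{2} = 32.5\%,
\]

that is, roughly one participant in three credulously obedient, even on an assumption far more generous to the Incredulity Hypothesis than the evidence warrants.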

66What are the theoretical implications of this? Recall that the Conventional Wisdom on Milgram has been deployed by historians asserting that ordinary people commit atrocities, psychologists questioning the efficacy of personality constructs, and “character skeptic” philosophers rejecting the centrality of character and virtue for ethical thought. Vindication of Milgram in the face of the Incredulity Hypothesis does not by itself vindicate these claims. The Incredulity Hypothesis is not the only objection one might have to Milgram, although it may be the objection with the most destabilizing implications for Milgram’s findings, in the (counterfactual) event it goes through. And even if Conventional Wisdom survives, as we are confident it does, sweeping theoretical conclusions are not established by a single run of studies, even a run of studies as compelling as those of Milgram and his followers. At most, Milgram’s experiments are but a part of these stories; getting it right about such contested and consequential issues requires extensive analysis over the full range of available evidence.

67Nevertheless, Milgram’s extraordinary demonstrations make vivid, in a way no amount of statistical intricacy or rhetorical ingenuity can, the fact of human moral frailty. And that is the enduring legacy of the obedience experiments.


Bibliography

Allison, Scott T., Messick, David M., et al. [1989], On being better but not smarter than others: The Muhammad Ali effect, Social Cognition, 7(3), 275–295, doi: 10.1521/soco.1989.7.3.275.

Ancona, Leonardo & Pareyson, Rosetta [1968], Contributo allo studio della aggressione: La dinamica della obbedienza distruttiva (Contribution to the study of aggression: The dynamics of destructive obedience), Neurologia e Psichiatria, 29, 340–372.

Arpaly, Nomy [2005], Comments on lack of character by John Doris, Philosophy and Phenomenological Research, 71(3), 643–647, doi: 10.1111/j.1933-1592.2005.tb00477.x.

Badhwar, Neera K. [2009], The Milgram experiments, learned helplessness, and character traits, The Journal of Ethics, 13(2–3), 257–289, doi: 10.1007/s10892-009-9052-4.

Baumrind, Diana [1964], Some thoughts on ethics of research: After reading Milgram’s “Behavioral Study of Obedience”, American Psychologist, 19(6), 421–423, doi: 10.1037/h0040128.

Beauvois, Jean-Léon, Courbet, Didier, et al. [2012], The prescriptive power of the television host. A transposition of Milgram’s obedience paradigm to the context of TV game show, European Review of Applied Psychology, 62(3), 111–119, doi: 10.1016/j.erap.2012.02.001.

Bem, Daryl J. [2011], Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect, Journal of Personality and Social Psychology, 100(3), 407–425, doi: 10.1037/a0021524.

Blass, Thomas [1999], The Milgram Paradigm after 35 years: Some things we now know about obedience to authority, Journal of Applied Social Psychology, 29(5), 955–978, doi: 10.1111/j.1559-1816.1999.tb00134.x.

Blass, Thomas [2004], The Man Who Shocked the World: The life and legacy of Stanley Milgram, New York: Basic Books.

Blum, Ben [2018], The lifespan of a lie, [Blog post], https://medium.com/s/trustissues/the-lifespan-of-a-lie-d869212b1f62.

Bocchiaro, Piero, Zimbardo, Philip G., et al. [2012], To defy or not to defy: An experimental study of the dynamics of disobedience and whistle-blowing, Social Influence, 7(1), 35–50, doi: 10.1080/15534510.2011.648421.

Bock, D. C. [1972], Obedience: A response to authority and Christian commitment, Dissertation Abstracts International, 33, 3276B-3279B, University Microfilms N. 72-31, 651.

Bonny Miranda, Francisca S., Bordes Caballero, Rosa, et al. [1981], Obediencia a la autoridad, Psiquis: Revista de Psiquiatría, Psicología y Psicosomática, 2(6), 212–221.

Bortolotti, Lisa & Sullivan-Bissett, Ema [2021], Is choice blindness a case of self-ignorance?, Synthese, 198(6), 5437–5454, doi: 10.1007/s11229-019-02414-3.

Brannigan, Augustine & Perry, Gina [2016], Milgram, genocide and bureaucracy: A post-Weberian perspective, State Crime Journal, 5(2), 287–305, doi: 10.13169/statecrime.5.2.0287.

Brown, Roger [1986], Social Psychology, New York: Macmillan, 2nd edn.

Browning, Christopher R. [1992], Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland, New York: Harper-Collins.

Burger, Jerry M. [2009], Replicating Milgram: Would people still obey today?, American Psychologist, 64(1), 1–11, doi: 10.1037/a0010932.

Burger, Jerry M. [2014], Situational features in Milgram’s experiment that kept his participants shocking, Journal of Social Issues, 70(3), 489–500, doi: 10.1111/josi.12073.

Burley, Peter M. & McGuinness, John [1977], Effects of social intelligence on the Milgram paradigm, Psychological Reports, 40(3), 767–770, doi: 10.2466/pr0.1977.40.3.767.

Chambers, Chris [2017], The Seven Deadly Sins of Psychology: A manifesto for reforming the culture of scientific practice, Princeton: Princeton University Press.

Chan, David [2008], So why ask me?—Are self-report data really that bad?, in: Statistical and Methodological Myths and Urban Legends, edited by C. E. Lance & R. J. Vandenberg, New York: Routledge, 309–336, doi: 10.4324/9780203867266-22.

Corneille, Olivier & Lush, Peter [2023], Sixty years after Orne’s American psychologist article: A conceptual framework for subjective experiences elicited by demand characteristics, Personality and Social Psychology Review, 27(1), 83–101, doi: 10.31234/osf.io/jqyvx.

Costanzo, Elaine M. [1976], The effect of probable retaliation and sex-related variables on obedience, Dissertation Abstracts International, 37, 4214B, University Microfilms N. 77, 3253.

Cushman, Fiery, Gray, Kurt, et al. [2012], Simulating murder: The aversion to harmful action, Emotion, 12(1), 2–7, doi: 10.1037/a0025071.

Dambrun, Michaël & Vatiné, Elise [2010], Reopening the study of extreme social behaviors: Obedience to authority within an immersive video environment, European Journal of Social Psychology, 40(5), 760–773, doi: 10.1002/ejsp.646.

Darley, John M. [1995], Constructive and destructive obedience: A taxonomy of principal‐agent relationships, Journal of Social Issues, 51(3), 125–154, doi: 10.1111/j.1540-4560.1995.tb01338.x.

de Melo-Martín, Inmaculada & Intemann, Kristen [2018], The Fight Against Doubt. How to Bridge the Gap Between Scientists and the Public, New York: Oxford University Press, doi: 10.1093/oso/9780190869229.001.0001.

Doliński, Dariusz, Grzyb, Tomasz, et al. [2017], Would you deliver an electric shock in 2015? Obedience in the experimental paradigm developed by Stanley Milgram in the 50 years following the original studies, Social Psychological and Personality Science, 8(8), 927–933, doi: 10.1177/1948550617693060.

Doris, John M. [2002], Lack of Character: Personality and Moral Behavior, Cambridge: Cambridge University Press, doi: 10.1017/cbo9781139878364.

Doris, John M. [2015], Talking to Our Selves: Reflection, Ignorance, and Agency, Oxford: Oxford University Press, doi: 10.1093/acprof:oso/9780199570393.001.0001.

Doris, John M. [2022], Character Trouble: Undisciplined Essays on Moral Agency and Personality, Oxford: Oxford University Press, doi: 10.1093/oso/9780198719601.001.0001.

Doris, John M. & Murphy, Dominic [2007], From My Lai to Abu Ghraib: The moral psychology of atrocity, Midwest Studies in Philosophy, 31(1), 25–55, doi: 10.1111/j.1475-4975.2007.00149.x.

Dunning, David [2006], Self-Insight: Roadblocks and detours on the path to knowing thyself, New York: Psychology Press.

Edwards, D. M., Franks, P., et al. [1969], An experiment on obedience. Unpublished student report, Tech. rep., University of Witwatersrand, Johannesburg, South Africa.

Elms, Alan C. [2009], Obedience lite, American Psychologist, 64(1), 32–36, doi: 10.1037/a0014473.

Epley, Nicholas & Dunning, David [2000], Feeling “holier than thou”: Are self-serving assessments produced by errors in self- or social prediction?, Journal of Personality and Social Psychology, 79(6), 861–875, doi: 10.1037/0022-3514.79.6.861.

Francis, Gregory [2013], Replication, statistical consistency, and publication bias, Journal of Mathematical Psychology, 57(5), 153–169, doi: 10.1016/j.jmp.2013.02.003.

Funt, Peter [2014], Curses, fooled again!, The New York Times, September 27, A23, https://www.nytimes.com/2014/09/27/opinion/curses-fooled-again.html.

Gigerenzer, G., Wegwarth, O., et al. [2010], Misleading communication of risk, BMJ, 341, c4830, doi: 10.1136/bmj.c4830.

Gilovich, Thomas [1991], How We Know What Isn’t So: The fallibility of human reason in everyday life, New York: Free Press.

Glass, Gene V. [1976], Primary, secondary, and meta-analysis of research, Educational Researcher, 5(10), 3–8, doi: 10.3102/0013189x005010003.

Goldhagen, Daniel Jonah [1996], Hitler’s Willing Executioners: Ordinary Germans and the Holocaust, London: Little, Brown and Co.

Gosling, Samuel D., John, Oliver P., et al. [1998], Do people know how they behave? Self-reported act frequencies compared with on-line codings by observers, Journal of Personality and Social Psychology, 74(5), 1337–1349, doi: 10.1037/0022-3514.74.5.1337.

Graham, Jesse [2014], Morality beyond the lab, Science, 345(6202), 1242, doi: 10.1126/science.1259500.

Green, Jeffrey D., Sedikides, Constantine, et al. [2017], Self-enhancement, righteous anger, and moral grandiosity, Self and Identity, 18(2), 201–216, doi: 10.1080/15298868.2017.1419504.

Griffin, Brandon J., Purcell, Natalie, et al. [2019], Moral injury: An integrative review, Journal of Traumatic Stress, 32(3), 350–362, doi: 10.1002/jts.22362.

Hall, Lars, Johansson, Petter, et al. [2010], Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea, Cognition, 117(1), 54–61, doi: 10.1016/j.cognition.2010.06.010.

Hall, Lars, Johansson, Petter, et al. [2012], Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey, PLoS ONE, 7(9), e45457, doi: 10.1371/journal.pone.0045457.

Hall, Lars, Strandberg, Thomas, et al. [2013], How the polls can be both spot on and dead wrong: Using choice blindness to shift political attitudes and voter intentions, PLoS ONE, 8(4), e60554, doi: 10.1371/journal.pone.0060554.

Hardesty, David M. & Bearden, William O. [2004], The use of expert judges in scale development, Journal of Business Research, 57(2), 98–107, doi: 10.1016/s0148-2963(01)00295-8.

Harman, Gilbert [1999], Moral philosophy meets social psychology: Virtue ethics and the fundamental attribution error, in: Proceedings of the Aristotelian Society, Oxford: Oxford University Press, 315–331, doi: 10.1093/0198238045.003.0010.

Haslam, Nick, Loughnan, Steve, et al. [2014], Meta-Milgram: An empirical synthesis of the obedience experiments, PLoS ONE, 9(4), e93927, doi: 10.1371/journal.pone.0093927.

Hedges, Larry V. [1981], Distribution theory for Glass’s estimator of effect size and related estimators, Journal of Educational Statistics, 6(2), 107–128, doi: 10.2307/1164588.

Holland, Charles Howard [1968], Sources of Variance in the Experimental Investigation of Behavioral Obedience, Ph.D. thesis, University of Connecticut.

Hollander, Matthew M. & Turowetz, Jason [2017], Normalizing trust: Participants’ immediately post-hoc explanations of behaviour in Milgram’s “obedience” experiments, British Journal of Social Psychology, 56(4), 655–674, doi: 10.1111/bjso.12206.

Holzman, Philip & Kagan, Jerome [1995], Whither or wither personality research, in: Personality Research, Methods, and Theory: A Festschrift Honoring Donald W. Fiske, edited by P. E. Shrout & S. T. Fiske, New York: Psychology Press, 3–11, doi: 10.4324/9781315806815-1.

Jewett, Andrew [2020], Science under Fire: Challenges to Scientific Authority in Modern America, Cambridge, Mass.: Harvard University Press.

Johansson, Petter, Hall, Lars, et al. [2005], Failure to detect mismatches between intention and outcome in a simple decision task, Science, 310(5745), 116–119, doi: 10.1126/science.1111709.

Johansson, Petter, Hall, Lars, et al. [2006], How something can be said about telling more than we can know: On choice blindness and introspection, Consciousness and Cognition, 15(4), 673–692, doi: 10.1016/j.concog.2006.09.004.

Kilham, Wesley & Mann, Leon [1974], Level of destructive obedience as a function of transmitter and executant roles in the Milgram obedience paradigm, Journal of Personality and Social Psychology, 29(5), 696–702, doi: 10.1037/h0036636.

Landis, Carney [1924], Studies of emotional reactions. II. General behavior and facial expression, Journal of Comparative Psychology, 4(5), 447–510, doi: 10.1037/h0073039.

Le Texier, Thibault [2019], Debunking the Stanford prison experiment, American Psychologist, 74(7), 823–839, doi: 10.1037/amp0000401.

Lilienfeld, Scott O. [2012], Public skepticism of psychology: Why many people perceive the study of human behavior as unscientific, American Psychologist, 67(2), 111–129, doi: 10.1037/a0023963.

Lykken, David T. [1991], What’s wrong with psychology, anyway?, in: Thinking Clearly about Psychology, edited by D. Cicchetti & W. Grove, Minneapolis: University of Minnesota Press, 3–39.

Machery, Edouard [2020], What is a replication?, Philosophy of Science, 87(4), 545–567, doi: 10.1086/709701.

Machery, Edouard [2021], A mistaken confidence in data, European Journal for Philosophy of Science, 11(2), 34, doi: 10.1007/s13194-021-00354-9.

Machery, Edouard & Doris, John M. [2017], An open letter to our students: Doing interdisciplinary moral psychology, in: Moral Psychology, edited by B. G. Voyer & T. Tarantola, Cham: Springer, 119–143, doi: 10.1007/978-3-319-61849-4_7.

Machery, Edouard & Doris, John M. [forth], Mistrusting Science, Princeton: Princeton University Press.

Makel, Matthew C., Plucker, Jonathan A., et al. [2012], Replications in psychology research: How often do they really occur?, Perspectives on Psychological Science, 7(6), 537–542, doi: 10.1177/1745691612460688.

Mantell, David Mark [1971], The potential for violence in Germany, Journal of Social Issues, 27(4), 101–112, doi: 10.1111/j.1540-4560.1971.tb00680.x.

Meeus, Wim H. J. & Raaijmakers, Quinten A. W. [1986], Administrative obedience: Carrying out orders to use psychological‐administrative violence, European Journal of Social Psychology, 16(4), 311–324, doi: 10.1002/ejsp.2420160402.

Milgram, Stanley [1963], Behavioral study of obedience, The Journal of Abnormal and Social Psychology, 67(4), 371–378, doi: 10.1037/h0040525.

Milgram, Stanley [1974], Obedience to Authority: An experimental view, New York: Harper and Row.

Miller, Arthur G. [1986], The Obedience Experiments: A case study of controversy in social science, New York: Praeger.

Miller, Arthur G. [2009], Reflections on Replicating Milgram (Burger 2009), American Psychologist, 64(1), 20–27, doi: 10.1037/a0014407.

Miller, Arthur G. [2016], Why are the Milgram Obedience Experiments still so extraordinarily famous—and controversial?, in: The Social Psychology of Good and Evil, edited by A. G. Miller, New York; London: Guilford Press, 185–223.

Miller, Christian B. [2013], Moral Character: An empirical theory, Oxford: Oxford University Press.

Miller, Christian B. [2014], Character and Moral Psychology, Oxford: Oxford University Press.

Mischel, Walter [1968], Personality and Assessment, New York: John Wiley & Sons.

Mixon, Don [1972], Instead of deception, Journal for the Theory of Social Behaviour, 2(2), 145–178, doi: 10.1111/j.1468-5914.1972.tb00309.x.

Mosier, Charles I. [1947], A critical examination of the concepts of face validity, Educational and Psychological Measurement, 7(2), 191–205, doi: 10.1177/001316444700700201.

Mummolo, Jonathan & Peterson, Erik [2019], Demand effects in survey experiments: An empirical assessment, American Political Science Review, 113(2), 517–529, doi: 10.1017/s0003055418000837.

Open Science Collaboration (OSC) [2015], Estimating the reproducibility of psychological science, Science, 349(6251), aac4716, doi: 10.1126/science.aac4716.

Orne, Martin T. [1962], On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications, American Psychologist, 17(11), 776–783, doi: 10.1037/h0043424.

Orne, Martin T. & Holland, Charles Howard [1968], Some conditions of obedience and disobedience to authority. On the ecological validity of laboratory deceptions, International Journal of Psychiatry, 6(4), 282–293.

Packer, Dominic J. [2008], Identifying systematic disobedience in Milgram’s obedience experiments: A meta-analytic review, Perspectives on Psychological Science, 3(4), 301–304, doi: 10.1111/j.1745-6924.2008.00080.x.

Pashler, Harold, Coburn, Noriko, et al. [2012], Priming of social distance? Failure to replicate effects on social and food judgments, PLoS ONE, 7(8), e42510, doi: 10.1371/journal.pone.0042510.

Patten, Steven C. [1977a], Milgram’s shocking experiments, Philosophy, 52(202), 425–440, doi: 10.1017/s0031819100028916.

Patten, Steven C. [1977b], The case that Milgram makes, The Philosophical Review, 86(3), 350–364, doi: 10.2307/2183787.

Perry, Gina [2013], Beyond the Shock Machine: The Untold Story of the Milgram Obedience Experiments, Melbourne: Scribe.

Perry, Gina, Brannigan, Augustine, et al. [2020], Credibility and incredulity in Milgram’s obedience experiments: A reanalysis of an unpublished test, Social Psychology Quarterly, 83(1), 88–106, doi: 10.1177/0190272519861952.

Pervin, Lawrence A. [1996], Personality: A view of the future based on a look at the past, Journal of Research in Personality, 30(3), 309–318, doi: 10.1006/jrpe.1996.0021.

Podd, Marvin H. [1970], The relationship between ego identity status and two measures of morality, Dissertation Abstracts International, 31, 5634, University Microfilms No. 71-6107.

Powers, Patrick C. & Geen, Russell G. [1972], Effects of the behavior and the perceived arousal of a model on instrumental aggression, Journal of Personality and Social Psychology, 23(2), 175–183, doi: 10.1037/h0033037.

Reicher, Stephen D., Haslam, S. Alexander, et al. [2012], Working toward the experimenter: Reconceptualizing obedience within the Milgram paradigm as identification-based followership, Perspectives on Psychological Science, 7(4), 315–324, doi: 10.1177/1745691612448482.

Ring, Kenneth, Wallston, Kenneth, et al. [1970], Mode of debriefing as a factor affecting subjective reaction to a Milgram-type obedience experiment: An ethical inquiry, Representative Research in Social Psychology, 1(1), 67–88.

Ritchie, Stuart [2020], Science Fictions: How fraud, bias, negligence, and hype undermine the search for truth, New York: Metropolitan Books.

Rogers, R. W. [1973], Obedience to authority: Presence of authority and command strength, in: Southeastern Psychological Association Meeting.

Rosenhan, David [1968], Some origins of concern for others, in: Trends and Issues in Developmental Psychology, edited by P. Mussen, J. Langer, & M. Covington, New York: Rinehart & Winston, 143–153, doi: 10.1002/j.2333-8504.1968.tb00557.x.

Rosenthal, Robert & Rubin, Donald B. [1978], Interpersonal expectancy effects: the first 345 studies, Behavioral and Brain Sciences, 1(3), 377–386, doi: 10.1017/s0140525x00075506.

Ross, Lee & Nisbett, Richard E. [2011], The Person and the Situation: Perspectives of social psychology, Philadelphia: Pinter & Martin Publishers.

Russell, Daniel C. [2009], Practical Intelligence and the Virtues, Oxford: Oxford University Press.

Russell, Nestar [2018], Understanding Willing Participants. Milgram’s Obedience Experiments and the Holocaust, vol. 1: Milgram’s Obedience Experiments and the Holocaust, Cham: Palgrave Macmillan, doi: 10.1007/978-3-319-95816-3.

Sabini, John & Silver, Maury [2005], Lack of character? Situationism critiqued, Ethics, 115(3), 535–562, doi: 10.1086/428459.

Schurz, Grete [1985], Experimentelle Überprüfung des Zusammenhangs zwischen Persönlichkeitsmerkmalen und der Bereitschaft zum destruktiven Gehorsam gegenüber Autoritäten [Experimental test of the relationship between personality characteristics and the readiness for destructive obedience toward authorities], Zeitschrift für Experimentelle und Angewandte Psychologie, 32(1), 160–177.

Shalala, Samuel R. [1975], A study of various communication settings which produce obedience by subordinates to unlawful superior orders, Dissertation Abstracts International, 36(2-B), 979, University Microfilms No. 75-17,675.

Shanab, Mitri E. & Yahya, Khawla A. [1977], A behavioral study of obedience in children, Journal of Personality and Social Psychology, 35(7), 530–536, doi: 10.1037/0022-3514.35.7.530.

Shanab, Mitri E. & Yahya, Khawla A. [1978], A cross-cultural study of obedience, Bulletin of the Psychonomic Society, 11(4), 267–269, doi: 10.3758/bf03336827.

Sheridan, Charles L. & King, Richard G. [1972], Obedience to authority with an authentic victim, in: Proceedings of the Annual Convention of the American Psychological Association, American Psychological Association (APA), vol. 2, 165–166.

Sinatra, Gale & Hofer, Barbara [2021], Science Denial: Why it Happens and What to Do about It, New York: Oxford University Press.

Slater, Mel, Antley, Angus, et al. [2006], A virtual reprise of the Stanley Milgram obedience experiments, PLoS ONE, 1(1), e39, doi: 10.1371/journal.pone.0000039.

Smith, Mary L. & Glass, Gene V. [1977], Meta-analysis of psychotherapy outcome studies, American Psychologist, 32(9), 752–760, doi: 10.1037/0003-066x.32.9.752.

Sober, Elliott [2015], Ockham’s Razors: A User’s Manual, Cambridge: Cambridge University Press, doi: 10.1017/cbo9781107705937.

Stroebe, Wolfgang & Strack, Fritz [2014], The alleged crisis and the illusion of exact replication, Perspectives on Psychological Science, 9(1), 59–71, doi: 10.1177/1745691613514450.

Van Lange, Paul A. M. [1991], Being better but not smarter than others: The Muhammad Ali effect at work in interpersonal situations, Personality and Social Psychology Bulletin, 17(6), 689–693, doi: 10.1177/0146167291176012.

Vranas, Peter B. M. [2005], The indeterminacy paradox: Character evaluations and human psychology, Noûs, 39(1), 1–42, doi: 10.1111/j.0029-4624.2005.00492.x.

Zeigler-Hill, Virgil, Southard, Ashton C., et al. [2013], Neuroticism and negative affect influence the reluctance to engage in destructive obedience in the Milgram paradigm, The Journal of Social Psychology, 153(2), 161–174, doi: 10.1080/00224545.2012.713041.

Zimbardo, Philip [2007], The Lucifer Effect: Understanding how good people turn evil, New York: Random House.


Notes

1 Scare quotes, for two reasons: (1) Milgram’s individual trials did not compare different conditions, as is done in classical experimental paradigms; (2) it is contested whether the behavior of interest is aptly characterized as obedience [e.g., Reicher, Haslam et al. 2012]. These issues do not affect the discussion here, and we will henceforth use both “obedience” and “experiments.”

2 Some commentators reject the “Milgram-Holocaust Linkage” [Brannigan & Perry 2016]. See Doris [2002, 53–60] and Doris & Murphy [2007] for defense of the linkage, and the “ordinary person” hypothesis more generally. Our primary concern here will not be the linkage, but the scientific credibility of the experiments themselves.

3 To the extent a single experimental paradigm can demonstrate any general proposition.

4 While belief about wrongness is central to the discussion of the Milgram phenomenon, strictly speaking, the construct of destructive obedience does not require that the perpetrator believe they are behaving wrongly.

5 We assume that the readers of this special issue on Milgram are familiar with the basic structure of his experiments. For the details, readers are advised to consult, in addition to Milgram’s own work [esp. Milgram 1974], book-length treatments by A. G. Miller [1986], Blass [2004], and N. Russell [2018].

6 Another classic-but-controversial study in social psychology that has recently been subject to debunking attempts, most egregiously by Blum [2018, June 7], [cf. Le Texier 2019], is Zimbardo’s [2007] “Stanford Prison Experiment.” Some of these efforts are akin to the Incredulity Hypothesis, alleging that participants in Zimbardo’s simulated prison were just playing along to appease the study’s creators. For rebuttal, see [Doris 2022, 203–212].

7 We again note (cf. footnote 1) that we are neutral as to whether the behavior in question is best described in terms of obedience, conformity, and/or another related construct.

8 For discussion of some complexities, see [Sober 2015].

9 In a footnote to his initial publication, Milgram [1963, 377, note 4], reports that he ran a group of 43 unpaid participants, with “results very similar to those obtained with paid subjects.”

10 To be fair, we suspect that EDEs are more often alleged than empirically demonstrated [e.g., Mummolo & Peterson 2019].

11 As appears fairly standard, the canonical version is for us Experiment 5, the “New Baseline” condition, which included vehement vocal protests from the concealed learner, including reference to a heart condition, and had an obedience rate of 65% [Milgram 1974, 55–58], [Doris 2002, 39–42]. This rate of obedience is comparable to Experiment 1 (no vocal feedback), 65%; Experiment 2 (vocal feedback, no reference to heart condition) 62.5% [Milgram 1974, 25]; and Experiment 8 (New Baseline, all women participants), 65% [Milgram 1974, 62–63]. For our purposes, replication attempts may be said to be close replications of Milgram to the extent they follow procedures like those in Experiments 1, 2, 5, and 8. Others of Milgram’s variations changed procedures substantially, and obtained both higher and lower rates of obedience than these four, as did replications and extensions by other researchers. Our approach contrasts with that of Perry, Brannigan et al. [2020], which emphasizes the defiance rate over all the versions of Milgram’s experiment.

12 Holland [1968, 65–68] reports that in his extension of Milgram (from unpublished doctoral research), some participants “minimally cued” for suspicion were “extremely proficient” at acting naïve, despite not having been deceived; he concludes that “to some degree,” Milgram’s findings were “the result of spontaneous simulation of naivete.” There are some interpretive questions about Holland’s study (e.g., at pp. 66-67 below), but in any case, we are skeptical that spontaneous simulation of naivete—sufficient to deceive the many observers of the many replications and extensions of Milgram, some of which are captured on widely viewed films (see pp. 61-66 below)—could be pervasive enough to substantially reduce estimates of credulous obedience.

13 Unfortunately, there does not seem to be a relevant meta-analysis in the literature. This might be because many replications of Milgram’s studies were done before the importance of meta-analysis was clearly recognized in psychology, and before methods for meta-analysis were well established (for the beginnings of contemporary meta-analysis, see [Glass 1976], [Hedges 1981], [Smith & Glass 1977]).

14 We assume that the frequently observed higher rates of obedience do not count as replication failures for present purposes, since they too support the interpretation favored by Conventional Wisdom.

15 The study is a comparative outlier in another respect; gender differences often fail to appear in Milgram-style studies [Doris 2002, 47].

16 Perry [2013, 276–281] discusses Burger’s [2009] replication attempt, which she correctly describes as successful.

17 Perry [2013, 266–267] accuses Milgram [1974, 171] of “deliberate obfuscation” in characterizing rates of obedience in Australian, South African, German, and Italian variations as higher than in his own. We agree with Perry that if Milgram intended to reference Kilham and Mann’s [1974] Australian study (he does not provide citations at that point), his characterization of that study is inaccurate. On the other hand, Milgram is correct about the other three studies, all of which found considerably higher rates of obedience, 85% or more. Perry [2013, 266, note 29] observes that the Italian report states that its 85% rate is lower than the 100% Milgram apparently found in a pilot study; given that the published figure for canonical variations is 65%, and serious debate has never been about whether obedience rates (operationalized as “going all the way” on the shock generator) are 100%, this is irrelevant. Perry also complains that the German study involves variations on the Milgram paradigm—which apparently does not worry her for the Australian version—and used university students as participants—which, once more, does not worry her for the Australian version. As regards the South African study, Perry objects that it had only 16 participants, and that its authors were students. Fair enough on the sample size; Milgram variations were generally smaller studies, probably due to the logistical difficulty and to the then-prevalent practice of conducting psychological experiments with small samples. We’re unsure what the South African study authors being students is supposed to show: it might be taken to raise issues of competence, but it might also be taken to suggest that the effect is robust and easily obtained, if “mere students” can do so.

18 For similarly sanguine assessments of Milgram’s replicability, see [Brown 1986, 4], [Burger 2009], [Doris 2022, 203], [Elms 2009], [Miller 2009], [Miller 2016, 199–200], [Vranas 2005].

19 In a classroom showing of Brown’s version during the writing of this paper, Doris’ students didn’t suggest the Incredulity Hypothesis, and rejected it when Doris proposed it. No student of Doris has proposed the Incredulity Hypothesis in numerous classroom showings, for either Brown’s or Milgram’s films. Students do remark on the hokiness of the learner’s protests, which makes the evident credulity of the teacher/participants seem a little odd. Perhaps the juxtaposition of less than stellar “acting” from the learner and the deadpan seriousness of the experimenter resulted in a confusing situational ambiguity that made it more difficult for participants to reason effectively (we owe this suggestion to Raphaël Künstler).

20 Nick’s version is recorded in an instructive documentary (https://fr.wikipedia.org/wiki/Le_Jeu_de_la_mort). The reiteration is close to Milgram’s experiment, but it takes place in a public setting and the cover story is different. In this variation, 80% of participants gave the strongest shock, and the public cheered them. As was the case in Milgram’s experiment, participants started laughing nervously when the confederate started protesting (80 volts). At 180 volts, one participant out of five started cheating by surreptitiously “coaching” the learner as to the right answer, a behavior inconsistent with the Incredulity Hypothesis (as the psychologist Jean-Léon Beauvois observes at 50:20 in the documentary). Fifteen percent of obedient participants reported not believing the situation, but this 15% included people who cheated and who were obviously stressed (53:00 of the documentary), suggesting that at least some of these reports were post hoc fabrications. We recommend that you compare the declaration of these “skeptical” participants with their obvious discomfort during the experiment (53:30–55:30). Of course, the videos of Nick’s and Brown’s reiterations were edited by their producers, and we do not know how selective the editing was.

21 As Mixon [1972, 154], another defender of the Incredulity Hypothesis, remarks of Holland’s report of participant suspicion: “Unfortunately, the force of this finding is somewhat weakened by the fact that Holland’s data also indicate that his least suspicious subjects obeyed and disobeyed in the same proportion as suspicious subjects.”

22 Milgram [1974, 41–43] depicts self-report data for 137 participants: a robust majority (from eyeballing Milgram’s Figure 8) are in the range of “moderately tense and nervous” to “extremely tense and nervous,” with only about 15 out of 137 clearly in the “not at all tense and nervous” range. Milgram does not here divide the data into obedient and disobedient participants.

23 Instead of focusing on harm here, one could focus on negligence. Perhaps Milgram believed that his experiments could cause serious stress, but nonetheless decided to recruit participants. And if he did not believe it, he should have believed it, since stress was a reasonably anticipated risk. This strikes us as a genuine moral reproach, and one compatible with the Incredulity Hypothesis, since it does not require that participants actually be stressed. But the moral reproach due this sort of negligence is plausibly deemed less serious than that due for actively torturing participants.

24 Alternatively, participants might have been stressed merely because they were asked to commit harm, even if they did not comply. Once more, it is difficult to imagine such stress reaching the level of torture.

25 It might be that the doubters and those who were stressed were different groups. If, per the Incredulity Hypothesis, obedient participants doubted the experiment and if, per the current suggestion, those who doubted the experiment were not stressed (or at least less stressed), then obedient participants would tend not to exhibit dramatic manifestations of stress. We are unaware of any analysis supporting this possibility.

26 In her book, Perry contends “only half of the people who undertook the experiment fully believed it was real, and of those two-thirds disobeyed the experimenter” [Perry 2013, 139]. This math yields a percentage credulously obedient of around 17% (1/3 of 50%).

27 For the interpretation of effect sizes, see [Doris 2022, 215 ff.], [Machery & Doris 2017].

28 If the risk of catching a disease is .00000001% for group A and .00000002% for group B, the relative risk is equal to 2, but the increase in absolute risk is small.
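Spelled out (our own arithmetic on the hypothetical risks in this note):

\[
\mathrm{RR} = \frac{0.00000002\%}{0.00000001\%} = 2, \qquad \text{absolute increase} = 0.00000001\ \text{percentage points}.
\]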

29 If 66% of credulous participants and 50% of skeptical participants are defiant, the odds ratio is about 2, for a 16% difference (not negligible, but not huge) and a relative risk equal to 1.32. If 60% of credulous participants and 40% of skeptical participants are defiant, the odds ratio is 2.25, for a 20% difference (again not negligible, but not huge) and a relative risk equal to 1.5.
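As a check on these figures (our computation, using the standard definitions of the odds ratio and the relative risk):

\[
\mathrm{OR} = \frac{0.66/0.34}{0.50/0.50} \approx 1.9, \quad \mathrm{RR} = \frac{0.66}{0.50} = 1.32; \qquad \mathrm{OR} = \frac{0.60/0.40}{0.40/0.60} = 2.25, \quad \mathrm{RR} = \frac{0.60}{0.40} = 1.5.
\]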

30 Here Perry et al. rely on Murata’s dichotomization (which, confusingly, is not the same as the dichotomization in Model 1 discussed in their text): “fully believing” corresponds to a response equal to 1 in Milgram’s post-experimental questionnaire, “not fully believing” to all the other responses.

31 https://www.youtube.com/watch?v=0mf7ULO0Ibs, starting at around 1:00:55.

32 Also worth noting: Russell’s [2018, 124] incredulous/disobedient participant said they “can’t take that chance” the shocks were real; apparently, they were concerned about the sort of negligence we discuss above (p. 73).

33 Like the Incredulity Hypothesis itself, this mischaracterization is not new; for example, we find it in Goldhagen [1996, 383].


Table of illustrations

Figure 1: Defiant and obedient participants’ answers to the question on belief (y-axis, 1 “fully believing,” 5 “certain” incredulity; triangle: mean credulity)
[Image: http://journals.openedition.org/philosophiascientiae/docannexe/image/4352/img-1.png]

How to cite this article

Print reference

John M. Doris, Laura Niemi and Edouard Machery, “True Believers: The Incredulity Hypothesis and the Enduring Legacy of the Obedience Experiments”, Philosophia Scientiæ, 28-2 | 2024, 53–89.

Electronic reference

John M. Doris, Laura Niemi and Edouard Machery, “True Believers: The Incredulity Hypothesis and the Enduring Legacy of the Obedience Experiments”, Philosophia Scientiæ [Online], 28-2 | 2024, online since 24 May 2024, accessed 11 July 2024. URL: http://journals.openedition.org/philosophiascientiae/4352 ; DOI: https://doi.org/10.4000/11pu1


Authors

John M. Doris

Cornell University (USA)

Laura Niemi

Cornell University (USA)

Edouard Machery

University of Pittsburgh (USA)


Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. Other elements (illustrations, imported supplementary files) are “All rights reserved”, unless otherwise stated.
