Thematic literature review on Expertise and Democracy:

“Trust in Experts”

1. GENERAL STATEMENTS AND POSITIONS

• Dewey, John. “The Eclipse of the Public.” The Public and Its Problems, Holt Publishers, 1927, pp. 110–143.

In this text, John Dewey lays out the fissure between “the public” and experts. The proclamations of experts are unintelligible to most laypersons, arguably by design. As a strong proponent of democracy as a tool of empowerment, Dewey is concerned about how experts can use technology to disseminate authoritative truth, thereby concentrating power and potentially dispersing the public. To fight this tide, the public must speak up and curtail experts' excesses when technocracy meets reality. As Dewey put it, those who possess firsthand experience of a field gain a much better understanding of how to improve it: “the man who walks in the shoe knows where it hurts”. For example, Dewey was a strong proponent of agricultural policies being set by farmers themselves.

• Habermas, Jürgen. Toward a Rational Society. Polity Press, 1967.

For Habermas, the problem lies with the notion of expertise itself. In this text, Habermas illustrates how experts constitute a class, and how they act collectively in order to legitimise their hidden domination. The self-legitimising system of expertise reinforces itself through the peer review process, which threatens to further enclose the production of knowledge. The opaque, unaccountable processes out of which decisions emerge tend to make people feel powerless. Habermas is particularly concerned that “will formation” is no longer happening, as evidenced by the lack of activism and the increasing amount of research designed to justify the status quo. To resist this trend, ad hoc groups of citizens must organise to provide a counterbalance: media watchdogs, protests and demonstrations can constitute a check on the runaway train of self-legitimising expertise.

• Collins, H. M., and Robert Evans. Rethinking Expertise. University of Chicago Press, 2009.

This book takes a drastically different outlook on the role and intentions of experts (in comparison to Dewey, Habermas and Brown). Collins and Evans argue that while we all have more information available to us than any prior generation, our access to this knowledge is largely illusory. They claim that we are unable to prioritise and make sense of most of it because we remain outsiders to most fields. They therefore argue that it’s always preferable to trust the community of experts within a specialised field and to let them do the thinking. As opposed to Dewey, who yearned for a strong public to hold experts accountable, Collins and Evans are proud elitists: they think that experts need to be insulated and protected from the people’s whims. If there are problems with the knowledge produced by experts, Collins and Evans think the solution is for more experts to call them out. This assumes that such errors are only made in good faith, overlooking the possibility of less charitable motives, like cherry-picked data or ideology.

• Lippmann, Walter. The Phantom Public. Transaction Publishers, 1925.

In this book, Lippmann demonstrates his contempt for the unwashed masses. As opposed to Dewey, who is enamored with democratic ideals and constantly seeking ways to surface local expertise and challenge existing power structures, Lippmann is a technocrat. He deems those without expert knowledge unworthy of opining: “the outsider is necessarily ignorant, usually irrelevant and often meddlesome, because he is trying to navigate the ship from dry land”. Lippmann believes experts are the only way that people can make sense of the complex world, and that they are unfairly criticised. Like Collins and Evans, he does not begin to consider that there might be legitimate grounds for criticism of the expert consensus, and thus he believes that every problem simply requires more experts. In this sense, he believes elites are inherently superior and should wield their power to constrain public opinion: “the public must be put in its place [...] so that each of us may live free of the trampling and the roar of a bewildered herd”.

• Schudson, Michael. “The Trouble with Experts – and Why Democracies Need Them.” Theory and Society, vol. 35, no. 5-6, 2006, pp. 491–506.

Schudson builds upon Lippmann’s work, echoing the idea that experts are necessary to make sense of the world’s complexity. He has a very optimistic view of the crucial role that experts could play in crafting policy. He believes that experts can amplify dissident voices and speak truth to power. Schudson imagines experts as hyper-ethical, almost monastic, in their commitment to highlighting policy alternatives and selling the public on them. Schudson appears to genuinely believe most experts are doing their best. He isn’t entirely naïve either – he warns against the risk of some experts becoming “toadies” who just tell people what they think they want to hear. This approach reduces the issue to a “few bad apples” who give their peers a bad name by pretending to be neutral while on the industry’s payroll. In framing the issue this way, he ignores any structural analysis of how expertise is produced, and of how experts can behave as a class.

• Bourdieu, Pierre. “Men and Machines.” Advances in Social Theory and Methodology: Toward an Integration of Micro- and Macro-Sociologies, edited by K. Knorr-Cetina and Aaron V. Cicourel, Routledge, 1981, pp. 304–317.

Pierre Bourdieu is very attuned to the nuances behind the production of culture, of which expert knowledge is a subset. He believes that you can never extricate a person from the circumstances in which they were socialised, and that this “habitus” shapes their dispositions. This is highly arbitrary and deterministic, but it seems normal to people because we perceive social standing in relative terms: we compare ourselves to our peers, who are of similar means, and not to society as a whole. Standardised tests tend to reflect cultural competency, and therefore largely measure privilege. Along these lines, experts reach and maintain their position by unconsciously assimilating unspoken rules and knowing how much they can improvise within them. Only a tiny fraction of people – those who have made it to the top – are in a position to reshape the rules of the field. Culture never exists in a vacuum, and it holds a mirror up to those who produce it.

• Bourdieu, Pierre. “The Corporatism of the Universal: The Role of Intellectuals in the Modern World.” Telos, vol. 1989, no. 81, 1989, pp. 99–110.

Bourdieu notes that intellectuals are endowed with a reflexivity that can allow them to think critically about the work produced within their field, and they occasionally confront each other at conferences. For their work to be valuable, academics must shun and condemn the influence of special interests (e.g. political parties, the pharmaceutical industry, energy companies), which have the potential to poison discourse, instil doubt and sow confusion. Since the experts captured by special interests benefit from significant external support and organisation (as described in Merchants of Doubt by Oreskes & Conway), the voices of intellectuals who genuinely want to help humanity flourish and develop knowledge will not be heard if they remain atomised. In this lecture, Bourdieu takes this idea to its logical conclusion and proposes that intellectuals must band together and organise in order to resist this nefarious influence.

• Foucault, Michel. Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan, Pantheon Books, 1977.

Foucault describes power and knowledge as two sides of the same coin. Power is based upon knowledge, but power also reproduces knowledge by shaping it. In his worldview, everything is politicised, so it’s essential to consider what kinds of ideas shape expertise. Before the modern era, only the nobility had the power to ponder how to spend their lives. With industrialisation, everyone became an individual, and society took an interest in surveilling and pathologising their behaviours. Foucault takes “well meaning” paternalistic policymakers to task for perpetuating a power structure in spite of their stated altruistic intentions. Rather than taking these “do-gooders” at their word, Foucault was more interested in mapping out the structures of power, and the constitutive power of categories, in order to make sense of them. Along these lines, Foucault sees the shift towards judging actions morally and psychologically as part of a move towards an internalised, moralistic system of categorisation, which wields power more efficiently by imposing it less bluntly, effectively “putting a policeman in your head”.

• Fourcade, Marion. Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s. Princeton University Press, 2010.

In this text, Fourcade compares the categories used by economics in France, the United Kingdom, and the United States over the twentieth century. While French economists were more focused on productivity, the UK’s economic categories focused more on ethics and economic freedom, and those of the US focused on more utilitarian and practical characteristics. This book could be criticised for focusing too much on the narcissism of national differences between economic disciplines, when it is ultimately describing very similar ways of tinkering at the edges within neoliberal capitalism. It is however illustrative of how categories were set up by economists at the inception of their field, then quickly self-perpetuated (illustrating Habermas’ concerns about self-legitimisation) to become essential. In this sense, the terms of the debate (for example, that more growth is inherently positive) were accepted as a premise, dismissing everything that may fall outside of these terms, and thus enshrining unquestionable assumptions.

• Stampnitzky, Lisa. Disciplining Terror: How Experts Invented "Terrorism". Cambridge University Press, 2013.

Lisa Stampnitzky’s book, like Fourcade’s, is a case study of how a group of experts invented a field. In this case, she looks at how the study of terrorism was created in the 1970s. Stampnitzky identifies how experts shifted away from conceptualising a rational terrorist, motivated by political goals, and instead moved towards profiling “inherent terrorists”, who are described as being against knowledge and acting “pre-emptively”. Because this field touches upon everyday life and news coverage (more so than economics, which is a more specialised field), its frames directly shape people’s everyday conversations. Furthermore, it appears as though using the appropriate jargon is the primary criterion for being taken seriously as an expert on terrorism on cable television, given the proliferation of “counter-insurgency” experts employed as talking heads by the news networks. This, in turn, makes these so-called experts entirely dependent on internalising the intelligence community’s received wisdom at a given time and regurgitating it.

• Nichols, Thomas M. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. Oxford University Press, 2017.

In this book, Nichols decries the fact that experts are no longer listened to because people narcissistically believe themselves to be just as knowledgeable. For example, he notes that doctors complain about patients being attached to fantastical self-diagnoses after one Google search. Nichols believes that although this phenomenon is exacerbated by the internet, it predates it and has more to do with how younger generations have been cajoled and encouraged to believe their opinion is as valid as anyone’s. His focus on pathologising an age cohort based on overly supportive parenting norms leads him to ignore a more interesting perspective about the formal qualities of accessing information on the internet. I would argue that what he criticises as narcissism isn’t a character flaw of “kids these days” (since the trend he decries transcends this naive demographic frame), but rather a structural characteristic of accessing information that has been algorithmically ranked based on what advertisers predict a person is already most interested in. These predictions become deterministic, since they are responsible for what a person will be exposed to, as the sketch below illustrates. Nichols notes that there is an exuberant skepticism towards experts, and that people seem eager to catch them in the wrong: “Americans have reached a point where ignorance, especially of […] public policy, is an actual virtue. […] To reject the advice of experts is to assert autonomy, […] to insulate their increasingly fragile egos from ever being told they’re wrong”. Here again, he fails to consider that this might be due to average people losing trust in elites after living through a litany of self-serving betrayals (e.g. the war in Iraq, the subprime mortgage crash and recession). As for solutions, Nichols encourages people to read ideas they disagree with, but he seems more interested in generalities and in moralising at others than in introspection.
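To make this feedback loop concrete, below is a minimal sketch (my own toy model, not Nichols’s and not any platform’s actual algorithm) of ranking content purely by predicted interest, where the prediction is learned from past exposure; the topics, click probability and number of rounds are all hypothetical.

    import random
    from collections import Counter

    # Toy model: show whichever topic the reader is predicted to like most,
    # then update the prediction from what they click. Exposure narrows.
    TOPICS = ["politics", "science", "sports", "culture", "health"]

    def simulate(rounds: int = 500, seed: int = 0) -> Counter:
        rng = random.Random(seed)
        clicks = Counter({t: 1 for t in TOPICS})          # uniform prior "interest"
        exposure = Counter()
        for _ in range(rounds):
            shown = max(TOPICS, key=lambda t: clicks[t])  # rank by predicted interest
            exposure[shown] += 1
            if rng.random() < 0.7:                        # reader clicks most of what is shown
                clicks[shown] += 1                        # ...which reinforces the prediction
        return exposure

    print(simulate())  # exposure collapses onto whichever topic led early on

The prediction ends up determining the diet of information, which is the structural point: the ranking looks responsive to the reader, but it mostly reproduces its own earlier guesses.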

• Eyal, Gil. The Crisis of Expertise. Polity Press, 2019.

This book endeavours to explain the causes underlying people's loss of trust in expertise, specifically around climate change, vaccines and globalisation. Eyal posits that state-regulated capitalist governments require more and more input from experts in order to justify the unequal distribution of the impacts of their policies. Yet he observes an interesting paradox: as society grows more dependent on expertise (because its measurements are embedded into regulation, echoing Jacques Ellul's premonitions about the technical system), people trust it less and less. Specifically, he notes that governments' increasing reliance upon experts (i.e. the "scientisation" of politics) leads to the politicisation of science, and that these two processes amplify each other. Not only have politicians been using science to legitimise themselves (as Foucault critiqued), but science became infected by being used to legitimise politics. Eyal describes this phenomenon as a recursive crisis of expertise. To address this crisis, he steers clear of extreme answers: he does not think it's a good idea for experts to decide in secret (which makes the decisions unaccountable), nor does he think that everyone impacted should have a seat at the table (because it makes decisions impossible). He frustratingly does not make pragmatic recommendations, but he does advise against using the descriptors "climate deniers" or "anti-vaxxers", because they are inherently polarising terms that back people into a corner where they have to defend their identity. Instead, it is preferable to look for a shared understanding to build upon.

• Editorial. “The EU Must Learn from the Anti-Expert Narrative That Drove Brexit.” Nature, vol. 588, no. 7838, 2020, p. 370, https://www.nature.com/articles/d41586-020-03540-6.

This editorial alerts researchers to a narrative mounting against them, as demonstrated by the Brexit vote. It outlines how the European Union epitomises technocratic governance: "researchers are integrated into EU decision-making to help to ensure that policy is informed by a consensus of evidence". By extension, the United Kingdom's rejection of a system which values science so highly is the product of an "anti-expert narrative" which threatens to "undermine evidence and the rule of law" in the remaining EU countries. Brexit's architects sought to split “experts” off as a category separate from "the people”, as when Michael Gove remarked on Sky News three weeks before the vote that the British people “have had enough of experts from organisations with acronyms saying that they know what is best and getting it consistently wrong”. In saying this, Michael Gove erroneously implied that experts were the only ones who reaped the benefits of the free movement of people and the research funding offered by Europe. The editorial therefore calls on European researchers to delve further into untangling this narrative. It refers to the work of Johan Schot, a historian of science and technology policy who has looked into popular dissatisfaction with technocracy, and the work of Kalypso Nicolaidis, a researcher in international relations. Nicolaidis writes that when decisions are made only on the basis of expert evidence, people no longer feel like they are in control of their choices. Both of these researchers call for more participatory methods of governance to assuage these concerns and give citizens more agency in decision-making.

• Zappavigna, Michele. “‘Had Enough of Experts’: Intersubjectivity and the Quoted Voice in Microblogging.” Studies in Corpus-Based Sociolinguistics, edited by E. Friginal, Routledge, 2017, pp. 321–343, https://www.researchgate.net/publication/321488082_Had_enough_of_experts_Intersubjectivity_and_the_quoted_voice_in_microblogging_In_Friginal_E_ed_Studies_in_corpus-based_sociolinguistics_London_Routledge_pp_321-343.

In this article, Zappavigna looks at a common practice in social media posts: commenting on the views of public figures by quoting (directly or obliquely) short snippets of their famous utterances, which are in heavy rotation on cable news. To do so, she analysed a corpus of tweets posted in the week after the 2016 UK referendum containing the #brexit hashtag along with the term "expert", "experts" or "expertise". Most of these tweets indirectly refer to Gove's comments without bringing him into the discussion by name. Zappavigna argues that indirect quotes can either serve to align a person with a public statement, or they might serve a disaligning function: "[quoting] can be used to associate the social media user with a [..] perspective, for instance, with enlightened personae who pay attention to the wisdom of experts. This in turn de-aligns the user from other anti-expert viewpoints". She outlines a mutually reinforcing dynamic of exclusion between those who are in favour of experts and those who are against them: "in tandem with political rhetoric devaluing expertise is the tendency to lampoon those who do not accept the validity of expert knowledge". Zappavigna observes that this mockery often takes the form of derisively performing a stance as though it were the user's own view, to highlight its inherent inconsistencies. In this case, the most frequent joke was to juxtapose Gove's statement with a medical context, e.g. a person saying they wouldn’t trust an expertly trained surgeon to operate on them, or another saying that they are pivoting to opening their own gynaecology clinic, because people are tired of experts.

2. THE ENVIRONMENT AND CLIMATE CHANGE

• Jasanoff, Sheila. The Fifth Branch: Science Advisers as Policymakers. Harvard University Press, 1994.

Jasanoff emerges from the critical legal studies tradition, which is more interested in the way the law is practiced than in what the legislation says. Critical legal scholars look beyond formal definitions of institutions to consider them as communities of practice. In this text, Jasanoff looks at the "fifth branch", her term for the scientific advisers working in regulatory agencies and committees (FCC, EPA, FDA). She says this field is unaccountable: studies produced in the context of a regulatory debate are not peer reviewed, and they present pernicious incentives. Jasanoff notes that consumer groups and lobbyists tend to produce studies that offer wildly divergent recommendations, because they designed these studies to measure different things (e.g. looking at the use of a product throughout a day vs. the effects of long-term use). She is concerned about who gets to define what counts as good science when all claims are subject to negotiation. Looking at the environmental disaster at Love Canal in New York in the 1970s, Jasanoff argues that deliberation made things worse: it legitimised the polluted situation, and lent authority to an insufficiently ambitious deal by giving it the appearance of a process's stamp of approval.

• Fischer, Frank. Citizens, Experts, and the Environment: The Politics of Local Knowledge. Duke University Press, 2000.

Fischer is interested in tackling environmental issues. He notes that from the outset, environmental policy quickly became very technical, requiring instruments to measure the concentration of particles or toxic chemicals in the air. While technology has been identified as being at the centre of environmental degradation, it has also given rise to the methods of detecting and finding solutions to environmental issues. As a consequence, environmental politics became an increasingly professionalised field run by technocrats. This created friction between the citizen movements (who called attention to these issues in the first place and raised consciousness, and were often skeptical of technology) and the experts in government agencies tasked with making decisions. This alienation from policymaking may be exacerbated as the situation worsens: Fischer is concerned that democratic politics will be increasingly undermined as governments face exogenous pressures (such as displaced populations evacuating flooded or burned areas) due to climate crises in the coming decades. He suspects that the military might have to quell civil violence as a result of this instability, and that the chaos will make people more receptive to authoritarian solutions. To safeguard democratic values, he argues in favour of a participatory model that lets people have a seat at the table. Building upon his focus on the argumentative turn in policy analysis, which he identified with John Forester in 1993, Fischer is focused on the application of research to policy, rather than considering researchers' work done once it is published and left to languish in a journal. He saw that research was used to make arguments for policy, and shifted to measuring what makes a good argument. He concluded that the most effective way to marshal research to argue for policy aims is to put the knowledge context (crafted by experts) in conversation with the normative context (citizen activists who provide the moral framing).

• Hulme, Mike. Why We Disagree about Climate Change: Understanding Controversy, Inaction and Opportunity. Cambridge University Press, 2009.

In this book, Hulme criticises the community of scientific experts for presenting climate change in catastrophic and apocalyptic terms. Rather than adopting a discourse of "imminent peril", Hulme wants people to view climate change as "an opportunity to alter the way we […] achieve our personal aspirations and our collective social goals". Hulme argues that alarmist rhetoric can detract from what science is good at, making disasters appear impossible to solve and demoralising its audience, and expounds upon this so-called opportunity: the apparent intractability of climate change requires us to "reappraise the 'myths' or foundational belief systems in which the science unfolds". He envisions climate change both as a magnifying glass and as a mirror: as a magnifying glass, to draw attention to the long-term implications of short-term choices in the context of material realities and social values; and as a mirror, to "attend more closely to what we really want to achieve for ourselves and for humanity". Hulme sees our species as threatening the pulsating web of life that sustains habitability for all living beings, with the power to create intolerable conditions for most of our offspring. And yet he notes that we are also uniquely situated with the scientific knowledge, economic strength and political capacity to avoid this calamity. Ultimately, he bemoans the fact that climate has supplanted capital and social class to become the organising narrative within which local and global political issues are framed: "all techno-fixes will create the next generation of crisis, because they ignore the fundamental problems of capitalism as a system that ignores injustice and promotes inequity".

• Ottinger, Gwen. Refining Expertise: How Responsible Engineers Subvert Environmental Justice Challenges. New York University Press, 2013.

In this text, Ottinger describes how the notion of environmental safety is invariably mediated by scientific experts. She illustrates this with a case study of residents of New Sarpy, Louisiana, who started collecting data on how an oil refinery was causing them health problems – they sampled their air for toxic chemicals like benzene using “buckets” (homemade air samplers). Ultimately, though they had been collecting empirical data, the residents of New Sarpy backed down after refinery scientists (misleadingly) assuaged their concerns. Ottinger argues that the engineers were able to quell these environmental and health concerns because they portrayed themselves as responsible experts acting morally and doing their best, which re-affirmed their authority to dismiss the residents’ data and first-hand experience. She suggests that scientists concerned with environmental justice who would like to reverse this trend must use their position to empower communities. They can do so by telling their stories with data, and by making these datasets easily accessible and interoperable (by providing an API and a download button), as sketched below. Scientists can’t assume that all problems are technological in nature: they should instead look at the underlying social processes that give the numbers meaning in practice.
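As a rough illustration of what “an API and a download button” can mean in practice, here is a hedged sketch (my own, not Ottinger's) that serves a community's air samples both as JSON for programmatic reuse and as a CSV download; the sample records, field names and port are hypothetical.

    import csv
    import io
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical bucket-sample records, for illustration only.
    SAMPLES = [
        {"date": "2002-11-01", "site": "New Sarpy", "benzene_ppb": 3.2},
        {"date": "2002-11-02", "site": "New Sarpy", "benzene_ppb": 1.1},
    ]

    class DataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/samples.json":          # machine-readable API
                body = json.dumps(SAMPLES).encode()
                ctype = "application/json"
            elif self.path == "/samples.csv":         # the "download button"
                buf = io.StringIO()
                writer = csv.DictWriter(buf, fieldnames=SAMPLES[0].keys())
                writer.writeheader()
                writer.writerows(SAMPLES)
                body = buf.getvalue().encode()
                ctype = "text/csv"
            else:
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DataHandler).serve_forever()

The point of the sketch is interoperability: the same measurements stay usable both by residents (a file they can open) and by allied researchers (a structured endpoint).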

• Chastain, Andra B., and Timothy W. Lorek, editors. Itineraries of Expertise: Science, Technology, and the Environment in Latin America's Long Cold War. University of Pittsburgh Press, 2020.

This collection of essays compiled by Andra Chastain and Timothy Lorek examines Latin America’s long Cold War by looking at the role of the experts who cast themselves as apolitical technocrats just working on the ground. The editors argue that these experts and their networks were instrumental in shaping geopolitics and culture. The book's essays specifically look at how knowledge, technology and practice circulated in Chile, Mexico, Cuba, and Peru. For example, in Chile, environmental scientists were hopeful that they would get the ear of the centre-left government elected after the fall of Pinochet's dictatorship. Unfortunately, the environmental scientists were only paid lip service before the government privatised the energy industry on the advice of economic experts. In looking at the itineraries of experts as individuals travelling across networks, Chastain & Lorek note that they cross political boundaries and exist in multiple communities. They name this concept the hybrid nationalities of experts, who may have an affinity to their home country, but most importantly belong to and identify with the imagined community of their discipline (in reference to Benedict Anderson's work).

• Moore, Alfred James. Critical Elitism: Deliberation, Democracy and the Problem of Expertise. Cambridge University Press, 2017.

Moore is a scholar of deliberative democracy, specifically the systems approach to deliberation, which is influenced by Dewey and Habermas. This approach envisions a healthy public sphere as being constituted of many smaller fora, leading to many mini-publics, each with distinct and specific qualities. Moore ponders how to construct expert authority now that people no longer defer to experts, and are increasingly empowered to contest their findings. He notes that this is partially good, because it may uncover cases where expertise is a mask for power: expert claims (and the assumptions embedded in them) implicitly stand outside democratic debate, and tend to be camouflaged as the precondition for rational deliberation if left unchallenged. However, Moore also deems expertise crucial to making sense of the technical aspects of issues. He defines critical elitism as resolving this tension of trust by providing the critical judgement necessary to test and verify expert authority. This requires a public empowered to think critically and to express itself via institutions that provide avenues for scrutiny and testing of expert claims. As a case study, Moore looks at the story of "Climategate". When emails from the University of East Anglia were hacked, climate scientists were revealed to have circumvented FOIA requests, denying information to antagonistic critics whom they suspected of wanting to present their research out of context. In doing so, the researchers inadvertently provided ammunition that their critics used to undermine the notion of scientific consensus. Rather than the independent convergence of a community of experts, critics were able to frame the process as being gamed by a group of professionally motivated scientists, which undercut trust in the expert consensus on climate. Moore argues this is a failure of institutional design, because international panels of experts tend to erase internal disagreement and present their results as a consensus rather than articulating uncertainties out in the open. This paradoxically makes their work appear less accountable, and therefore more vulnerable to attack.

• Angelo, Hillary. How Green Became Good: Urbanized Nature and the Making of Cities and Citizens. University of Chicago Press, 2021.

Angelo coins the term urbanised nature: a social imaginary (in reference to Benedict Anderson’s imagined communities) that emerges from processes of urbanisation and makes urban greening possible. She describes how parks are seen as an inherent public good, where the only factor that urbanists have to consider is whether they are accessible. The book details a case study of the greening decisions made in Germany’s Ruhr Valley over the second half of the twentieth century. After World War II, German urban planners were animated by a pragmatic desire to create spaces that were safe, welcomed difference and invited discussion. To achieve these universalist goals, they took inspiration from Habermas’ research on the public sphere. The parks quickly became a destination that bourgeois motorists from neighbouring cities and towns regularly drive out to, which has contributed to carbon emissions. Angelo inscribes this within a larger trend of bias towards aesthetically green solutions in urban sustainability planning around the world. She observes that "green" solutions (such as creating a green garden, or planting more trees) are invariably put ahead of more efficient environmental solutions (such as improving public transit, which reduces emissions). She warns that this green rhetoric can produce a form of eco-spectacle with regressive impacts, like green gentrification: when people are displaced after newly planted trees make property values go up.

• Conway, Erik, and Naomi Oreskes. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Publishing Plc, 2010.

This book describes the lengths to which industries go in order to obfuscate the nefarious second-order effects of their moneymaking enterprises on our species. Companies in the tobacco and oil & gas industries have been proactive about funding “alternative” research that sows doubt about the deadly consequences of their unencumbered expansion. Their goal was rarely to directly disprove the demonstrations of the harms that they are responsible for, but rather to create a pseudo-justification for pushing a counter-narrative to the research that threatened their business. The message pushed by these stooges found a receptive audience among quite a few complacent citizens, who preferred this corporate fairytale to being confronted with the uncomfortable fact that their lifestyle was having a negative impact on their future. Conway and Oreskes pinpoint how the prestige of science was exploited for profit, describing this heinous phenomenon as “merchandising”.

3. SCIENTIFIC AND TECHNICAL LITERACY AND DEMOCRACY

• Ellul, Jacques. The Technological System. Calmann-Lévy, 1977.

In this book, Jacques Ellul says that science, technology and production come together to constitute a technological system (la technique). He based this work on observations of the beginnings of online banking. This technological system is blind, and exclusively concerned with its own expansion, which renders nature artificial and alienates mankind. As Ellul puts it, “wherever we find research and new methods based on a criteria of efficiency, we can say that the technical system has arrived”. He posited that this system would progressively invade all leisure and encompass all of society. This societal shift towards technique will not be imposed by explicit force, but rather under the guise of free will – after citizens have been moulded by school and advertising, it will seem like the only viable option. To people stuck in such a system, Ellul remarks that everything scientific is inherently deemed authentic (which brings to mind Habermas’ ideas about the self-legitimation of experts), which allows the system to propagate itself. Meanwhile, this system is incapable of self-correcting, which tends to exacerbate irrationality. Ellul is very critical of how this closed system biases science by “begging the question”: “the ends are so implied within the technical means used to achieve them that it has become meaningless to try to distinguish between ends and means”.

• Winner, Langdon. “Chapter 4: Technocracy.” Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. MIT Press, 1977.

In this chapter, Winner is concerned with technocracy: experts crafting power. He decries the ways in which technocrats legitimise their advantage by wielding science. Winner notes that this organisation of society is often presented as the logical end point of every kind of political and economic system. He argues that this process is used to suppress any form of knowledge that doesn’t redound to the technocrats’ benefit. In this sense, experts serve a pacifying function, by letting people externalise their critical thought onto them. To Winner, this represents a tragedy: “The power described here is the cancellation of all other varieties of power and the cancellation of the historical debate about how power exists and how it works. The authority rests on a human population dwarfed and submissive before forces it cannot understand or influence but entirely content with the services offered”.

• Latour, Bruno, and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Sage Publications, 1979.

In this pioneering text for the field of science studies, Latour deployed the tools of ethnography to embed himself among scientists in a laboratory at the Salk Institute and describe how they produce their work. He did so by dispensing with the deference usually offered to their intellect, describing their work in the tone an anthropologist would use to describe a tribe and ridiculing their attachment to white lab coats. He makes fun of their apparent authoritativeness by pointing out that they don’t believe anything until it’s written down. However, he remarks that once something is inscribed, it takes on a magical quality, and they are able to make sense of it by organising into little groups with competing citations to signal who respects whose work. Following this frame, Latour ironically concludes that the goal of funding research is to make different groups of scientists fight among themselves.

• DiMaggio, Paul J., and Walter W. Powell. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” American Sociological Review, vol. 48, no. 2, Apr. 1983.

This paper notes how highly structured organisations competing within a given field tend to operate in a similar fashion and adopt the same methods. DiMaggio and Powell posit that this happens because they are basing their behaviour on what their peers have been successful with. Therefore, changes made within a given organisation tend to go in the direction of what everyone else has been doing. In these highly structured fields, “individual efforts to deal rationally with uncertainty and constraint often lead in the aggregate to homogeneity in structure, culture and output".

• Hilgartner, Stephen. Science on Stage: Expert Advice as Public Drama. Stanford University Press, 2000.

Hilgartner is interested in the concept of a backstage/onstage division in how science gets presented to the public. When a report is presented, it is made in a single authoritative voice using clear declarative statements, which obscures the backstage arguments out of which it was forged. He says that once science is up on stage for the audience, it becomes a mediated performance. All the actors are ushered into their expected seats – this idea of everyone performing their role (including the audience) resonates with Guy Debord’s Society of the Spectacle. Following this metaphor, Hilgartner concludes that the production of knowledge is shaped by “stage management” that seeks to justify its position and maintain its credibility (which echoes Habermas’ concerns about self-legitimising expertise). Experts wield power by producing certainty and uncertainty, for instance by dismissing local knowledge as anecdotal and upholding specialist voices that use a rarefied lexicon.

• Brown, Mark B. Science in Democracy: Expertise, Institutions, and Representation. MIT Press, 2009.

Like Habermas, Brown emphasises the importance of a public check on experts, who often hide behind the complexity of their lexicon to appear neutral. This neutrality is always a fiction, because science is inherently politicised. In spite of this, scientists refuse to acknowledge that their representations are merely arbitrary. Brown is particularly concerned about how this performance of neutrality camouflages ideology (when “the facts speak for themselves”), thereby making it harder for the public to question the assumptions underlying a technocratic pronouncement, and shutting down any argument. Brown gives the example of the social science research around the concept of the “culture of poverty”, which put forth the theory that living in poverty creates short-term values and disrespect for authority. The researchers reached this conclusion based on a study design that asked participants what they would do if given $1,000, so it’s hardly surprising that there was a correlation between poverty (e.g. hunger as opposed to savings) and “short-term values”. This pernicious research was subsequently used to justify repressive practices in the 1980s, such as “broken windows” policing.

• Khelfaoui, Mahdi, Yves Gingras, Maël Lemoine, and Thomas Pradeu. “The Visibility of Philosophy of Science in the Sciences, 1980–2018.” Synthese, 2021, http://philsci-archive.pitt.edu/18789/.

This paper provides a bibliometric analysis of the visibility of philosophy of science within the sciences, measured through the number of citations exchanged between academics across disciplines. It describes how the only voices that can talk about philosophy of science to an audience of scientists either publish frequently in STEM disciplines, or are a big deal in their own field of philosophy of science. The paper's authors observe that “about half of citations received by the field of philosophy of science come from outside the field of philosophy. This share of external citations was found to be particularly important when compared to that of the rest of the field of philosophy". They conclude that "philosophy of science as a specialized field is far from autarchic and closed on itself". As for recommendations, they argue that this dynamic should be further encouraged by interdisciplinary studies: “Philosophers of science, especially the younger ones, already have, in most cases, a strong background in science. Yet this could be strengthened, and it could also be more valued in philosophy departments. In parallel, scientists should receive training in history and philosophy of science, not aiming to transform them or to challenge their practice, but rather to help them better understand how to use philosophers’ unique competences to improve their scientific practice".
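To make the headline metric concrete, here is a toy computation (my own hypothetical numbers, not the paper's data) of the share of citations to philosophy of science that come from outside philosophy, which is what the authors use as a measure of visibility.

    # Hypothetical counts of citations received by philosophy-of-science papers,
    # broken down by the citing field (illustration only).
    citations_received = {
        "philosophy of science": 420,
        "other philosophy": 80,
        "biology": 150,
        "physics": 120,
        "medicine": 230,
    }

    total = sum(citations_received.values())
    internal = citations_received["philosophy of science"] + citations_received["other philosophy"]
    external_share = (total - internal) / total
    print(f"Share of citations from outside philosophy: {external_share:.0%}")

With these made-up counts the external share comes out at about half, which mirrors the order of magnitude the authors report.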

• Benenson, Fred, and Tyler Woods. “'Mathwashing,' Facebook and the Zeitgeist of Data Worship.” Technical.ly, Technically Media, 8 June 2016, technical.ly/brooklyn/2016/06/08/fred-benenson-mathwashing-facebook-data-worship/.

This interview with Fred Benenson (formerly a VP at Kickstarter) is centered around “mathwashing”, a neologism which he defines as "exploiting the objective connotations of math terms to describe products and features that are probably more subjective than their users might think”. He condemns this practice as an abuse of the trust and authority that laypersons have placed in mathematics, a trust which is being twisted for marketing purposes and will eventually wear out. He warns that this trust may have been unearned in the first place, since computers are only as good as their programmers. Benenson argues that “algorithm and data-driven products will always reflect the design choices of the humans who built them” and that “anything we build using data is going to reflect the biases and decisions we make when collecting that data. […] if we want to ‘stick to the numbers’, [we have to understand] how we recorded those numbers”.
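Benenson's point is easy to see in code: below is a hedged sketch (my own example, not his) of a "trending score" that reads like neutral math but is really a bundle of design choices, since someone picked the features and the weights; the names and numbers are hypothetical.

    # A weighted sum dressed up as an objective metric: the weights are a human
    # editorial judgement about what should count.
    WEIGHTS = {"clicks": 0.5, "shares": 2.0, "recency_hours": -0.1}

    def trending_score(post: dict) -> float:
        return sum(WEIGHTS[feature] * post[feature] for feature in WEIGHTS)

    posts = [
        {"id": "a", "clicks": 900, "shares": 10, "recency_hours": 2},
        {"id": "b", "clicks": 300, "shares": 200, "recency_hours": 30},
    ]
    ranked = sorted(posts, key=trending_score, reverse=True)
    print([p["id"] for p in ranked])  # the ordering follows the chosen weights, not "the numbers"

Change the weights and the "data-driven" ranking changes with them, which is exactly the subjectivity that mathwashing obscures.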

• Scheuerman, Morgan Klaus, Emily Denton, and Alex Hanna. “Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development.” 9 Aug. 2021, doi:10.1145/3476058, https://arxiv.org/abs/2108.04308.

This paper is an analysis of the disciplinary practices around documentation (how data is collected, curated, annotated) of the datasets used to train image recognition algorithms. The researchers based their meta-analysis on a corpus of 500 computer vision datasets. By focusing on the language choices made by scientists during the dataset curation process, Scheuerman, Denton & Hanna identify 4 recurring biases about what these scientists value and what they leave out. Firstly, they note that authors of datasets always make note of efficiency (budgetary and computer resources), but they do not talk about care (ethics, consent, or compensation). Secondly, dataset curators strive to create models that can generalise well, so they value large-scale, diverse, realistic data which “len[ds] to a belief in inherently comprehensive […] categorical classifications of real-world phenomena". This belief implies universality, "insinuating a world that is able to be neatly captured and classified", and it is prioritised ahead of contextuality, which looks at how circumstances shape data. Thirdly, dataset curators also tend to focus on impartiality (concerns about selection bias) when they should be reporting on their own positionality (a reflexive analysis of "how one’s social and professional position can give rise to differential resources and knowledge gaps"), rather than assuming that neutral decisions exist, much less striving for them. Finally, dataset curators value model work over data work. Scheuerman, Denton & Hanna discovered that many pieces of the data collection and curation process are missing from the documentation of the datasets they examined, often including the data itself, hidden behind a URL pointing to a website that is now offline (also known as "link rot"). Instead, when writing about the datasets they curated, the scientists choose to focus on the details of the algorithms, also known as "model work". Scheuerman, Denton & Hanna conclude by reaffirming that all technical artifacts are imbued with underlying politics and values, and recommend potential steps that dataset authors could take to fix the 4 blind spots that they outline.
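One way to read the authors' recommendations is as a documentation checklist. Below is a hedged sketch (my own template, not a schema the paper proposes) of a dataset card that records the usually reported efficiency details alongside the four dimensions the paper finds undervalued – care, contextuality, positionality and data work; all field names and example values are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class DatasetCard:
        name: str
        # What is usually reported: scale and cost.
        num_images: int
        collection_cost_usd: float
        # What the paper finds missing from documentation.
        care: str            # consent, compensation, ethics review
        contextuality: str   # circumstances under which the data was produced
        positionality: str   # who curated it, from what vantage point
        data_work: str       # collection, cleaning and annotation steps, archived rather than just linked
        known_gaps: list = field(default_factory=list)

    card = DatasetCard(
        name="street-scenes-demo",  # hypothetical dataset
        num_images=50_000,
        collection_cost_usd=12_000.0,
        care="Annotators paid a living wage; consent documented for each image (hypothetical).",
        contextuality="Daytime photos from two cities; signage and weather differ elsewhere.",
        positionality="Curated by a university vision lab; labels reflect its own taxonomy.",
        data_work="Scraped, deduplicated, hand-labelled; raw data archived with the paper.",
        known_gaps=["no night-time images", "two countries only"],
    )
    print(card.name, card.known_gaps)

Nothing in the sketch is costly to fill in; the point, following the paper, is that these fields rarely appear in dataset documentation at all.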

• Warzel, Charlie. “Opinion | The UFO Saga Shows What Happens When Conspiracies and Reality Collide.” The Washington Post, WP Company, 1 June 2021, www.washingtonpost.com/opinions/2021/05/29/ufo-news-conspiracy-theories-disinformation-report/.

This article examines the trajectory that some conspiracy theories have taken (for example, UFOs), moving from the fringes to become a national security story. This happens through legitimisation by experts, often from the intelligence community. The plethora of online information shifts the burden of sorting information onto the reader, who will usually seek to defer to a consensus. This creates complications when there are open debates and disagreements on an uncertain scientific topic. In the rush to take a side, investigations into these domains run the risk of becoming overrun with the signifiers that people assign to a given position.

• Zimmer, Marc. The State of Science: What the Future Holds and the Scientists Making It Happen. Prometheus Books, 2020.

In this book, Zimmer traces the problems with science's reputation back to its funding mechanism. Self-funded science is extremely rare – even crowdfunding could only ever pay for starting a project, not sustaining it. This makes scientists dependent on foundations, companies, or the government. Scientists must demonstrate results in order to get grants approved and renewed. This creates a perverse incentive to publish too soon, and often leads researchers to overhype their results. Zimmer observes that papers have become the currency of science (for jobs and funding), which creates another incentive to publish a large quantity of papers in prestigious journals rather than publishing thoughtfully. He notes that the pressure of funding biases what scientists propose in the first place: researchers write the grants that they think will get funded, not what they think are their best ideas or those that excite them most. In fact, some particularly novel or iconoclastic proposals get shot down precisely because there isn't enough support for them in the existing literature, and because this process is endogenous, those ideas stay relegated to the margins. Zimmer suggests these incentives should be broken by tying funding to trust in a person. He also looks at peer review and concludes that the system is exploitative: researchers work in a lab (with government or foundation funding), come up with a theory, get data to prove it, and write a paper (all for free). They then send it to a journal, where the editor decides whether it's a good fit and sends it to 3 external reviewers (experts in the field), who each spend about 5 hours looking over the research, deciding whether it makes sense, and sending back their comments (also for free). The researcher then applies corrections, and the journal publishes the paper. In order to read it, people must either belong to an academic institution that pays thousands of dollars for a subscription to the journal, or pay around $50 for online access. Zimmer notes that these big companies end up profiting off free labour to lock up access to research and sell it, against the wishes of those who produced it in the first place. In my opinion, Sci-Hub is precious because it rights this moral injustice. Ultimately, Zimmer deems the peer review process to have good quality control but to be too slow. For example, when COVID hit, it was hard to get 5 hours of attention from an epidemiologist to look at new research. He says one solution is the growing system of pre-prints, where nearly mature work can be commented on by other scientists before being sent out for review, which speeds up quality control.
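As a back-of-the-envelope illustration of the free labour Zimmer describes, the arithmetic below uses the reviewer count and hours from his account; the hourly rate is a deliberately hypothetical figure, and only the $50 access fee and the 3 × 5 reviewer-hours come from the text.

    REVIEWERS_PER_PAPER = 3
    HOURS_PER_REVIEW = 5
    ASSUMED_EXPERT_HOURLY_RATE = 75.0   # hypothetical figure, for illustration only
    ONLINE_ACCESS_FEE = 50.0            # what a reader without a subscription pays

    unpaid_hours = REVIEWERS_PER_PAPER * HOURS_PER_REVIEW
    unpaid_value = unpaid_hours * ASSUMED_EXPERT_HOURLY_RATE
    print(f"Unpaid reviewing per paper: {unpaid_hours} h (~${unpaid_value:,.0f})")
    print(f"Price to read the result without a subscription: ${ONLINE_ACCESS_FEE:,.0f}")

Even before counting the authors' unpaid writing, the journal charges readers for work it did not pay for, which is the asymmetry Zimmer objects to.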

4. MEDICINE

• Jureidini, Jon, and Leemon B. McHenry. The Illusion of Evidence-Based Medicine: Exposing the Crisis of Credibility in Clinical Research. Wakefield Press, 2020.

Jureidini & McHenry use the lens of Karl Popper’s falsification theory to distinguish science (which requires rigorous testing of hypotheses) from pseudoscience, with regard to abuses by the pharmaceutical industry. The authors are concerned that fake science is designed to be indistinguishable from the real thing, which destroys confidence in the whole edifice. They describe the hierarchy of evidence-based medicine which emerged as part of the systematisation of testing in the 1990s: at the top, the most trustworthy scientific evidence comes from randomised placebo-controlled clinical trials; below that sit various less rigorous forms of testing, such as observational studies, retrospective studies, naturalistic studies, and comparative studies; at the bottom are the judgements of expert researchers, mechanistic reasoning, and the consensus formed by various kinds of committees. However, the authors ultimately deem all evidence-based medicine to be nothing more than an ideal to aspire to, doomed to remain illusory so long as most clinical trials are conducted by the industry itself. The book details work by the researcher Peter Doshi, who challenged the pharmaceutical industry to share its raw data (including abandoned trials) so that independent researchers could analyse whether the reporting of clinical trials in medical journals depicted an accurate picture. He concluded that pharmaceutical PR firms and the marketing departments of drug companies have co-opted academics, using their legitimacy as a veneer to turn them into promoters of their products, and only publishing research that fostered the creation of new drugs. Medical journals have become dependent upon industry money (for advertising, reprint revenue and open access fees). To remedy this corruption and rebuild trust in medical experts, Jureidini & McHenry propose that all scientific testing be done by governments and universities, in order to prevent industries from regulating their own products. They also warn the new generation of doctors and academics to be vigilant and ward off the industry's seduction tactics.

• Epstein, Steven. “The Construction of Lay Expertise: AIDS Activism and the Forging of Credibility in the Reform of Clinical Trials.” Science, Technology, & Human Values, vol. 20, no. 4, 1995, pp. 408–437. JSTOR, www.jstor.org/stable/689868.

In this article, Epstein unpacks the history of AIDS activism and presents it as an illustration of an oppressed group coming together to establish their credibility in front of experts. He notes that experts' failure to quickly solve the AIDS crisis "as they were supposed to do" heightened resentment towards the establishment and opened up space for dissident voices to organise. Epstein notes that the "disease constituency" of AIDS in the 1980s mapped onto already constituted social groups: a gay movement engaged in identity politics, linking tangible goals to the assertion of their group identity. Furthermore, the gay community was highly skeptical of the medical community, since doctors had only just stopped stigmatising their identity as a mental illness a decade prior. To illustrate this distrust, Epstein quotes John James, the editor of AIDS Treatment News, as writing in 1986 that "relying solely on official institutions for our information is a form of group suicide". Consequently, AIDS activists quickly appraised the disease as a threat against their group that they could mobilise against. In doing so, they became what Epstein calls "lay-experts", which is similar to what Collins and Evans would later call "experience-based experts”.

• Epstein, Steven. Impure Science: Aids, Activism, and the Politics of Knowledge. University of California Press, 1996.

In this book, Epstein builds upon his retelling of how AIDS activists became lay-experts about the virus, which allowed them to get a hearing from researchers and influence the treatment process. Epstein looks at the tactics employed by the activists in order to be taken seriously by researchers and government officials. This was largely a matter of demonstrating codes of cultural competence: activists learned to adapt to the medical lexicon. As Epstein put it, they "under[went] a metamorphosis, to [...] speak credibly in the language of the researchers". This was facilitated by the fact that many in the community possessed high cultural capital (they were cosmopolitan and educated) and demographic characteristics (they were mostly white and male) that led their scientific interlocutors to take them more seriously. Epstein notes how AIDS activists "attended scientific conferences, scrutinised research protocols, and learned from sympathetic professionals", and they quickly compiled and disseminated a glossary of medical terms. Learning the technical jargon yielded results: "once they could converse comfortably, [...] activists [...] discovered that researchers felt compelled by their own norms of discourse and behaviour to consider activist arguments on their merits". Once they were able to be heard, activists yoked together methodological and moral arguments to mobilise different forms of credibility. They reminded experts that the problems they were tackling were not abstract, and that people were suffering. Epstein outlines how clinical trials were a site for these debates, where activists underlined the importance of trying to save as many of their dying friends as possible with experimental treatments, rather than looking to constitute representative control groups which excluded people outside of a certain range "in the name of clean data". Epstein demonstrates that the research for AIDS medication was not the product of "pure" science descended from the Ivory Tower, but the product of involving lay-experts to legitimise decision-making: "arguments of AIDS activists [...] have created new pathways for the dissemination of medical information. [...] Their networking has brought [...] communities of scientists into cooperative relationships with one another”.

• McMillan Cottom, Tressie. “I Was Pregnant and in Crisis. All the Doctors and Nurses Saw Was an Incompetent Black Woman.” Thick and Other Essays, The New Press, 2019.

This essay describes the author's harrowing experience of trying to give birth as a black woman in an American hospital. McMillan Cottom recounts how nurses and doctors constantly dismissed her butt pains at the end of her pregnancy as the result of poor lifestyle choices, and failed to notice that they were contractions. Nurses then blamed her for their own failure to notice that she had been in labour for three days, scolding her for not telling them. The anesthesiologist demanded that she be quiet or he would leave without administering pain relief. Then her daughter died immediately after birth. McMillan Cottom recounts how she was not taken seriously: "everything about the structure of trying to get medical care had filtered me through assumptions of my incompetence. […] Like millions of women of color, especially black women, the healthcare machine could not imagine me as competent and so it neglected and ignored me until I was incompetent. […] When I called the nurse and said that I was bleeding and in pain, the nurse needed to hear that a competent person was on the phone in order to process my problem for the crisis that it was". She remarks that this racism could not be challenged: "nothing about who I was in any other context mattered to the assumptions of my incompetence. I spoke in the way one might expect of someone with a lot of formal education. I had health insurance. I was married. All of my status characteristics screamed “competent,” but nothing could shut down what my blackness screams when I walk into the room".

• Rhodes, Rosamond. The Trusted Doctor: Medical Ethics and Professionalism. Oxford University Press, 2020.

This book proposes that the field of medical ethics requires a particular set of morals. Rhodes argues that medicine's distinctive ethics should be explained in terms of the trust that society grants the profession. She builds a moral framework centered around the belief that doctors must "seek trust and be trustworthy", and demonstrates that this framework is consistent with the codes of medical ethics of societies around the world. Rhodes explains that in order to gain this trust, physicians must develop the attitudes or "doctorly" virtues that comprise the character of trustworthy doctors, i.e. they must behave according to their patients' expectations of the conduct of a trustworthy doctor. She argues that the concept of "flattening the curve", as communicated by experts in media around the world at the start of the COVID-19 pandemic, was an example of scientists successfully leveraging trust. By clearly explaining what they were measuring and the range of possible outcomes, they were (initially) able to garner most of the public's acceptance of exceptionally constricting measures, such as lockdowns, which ended up saving lives.
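To show the kind of claim "flattening the curve" communicated, here is a toy SIR-style sketch (my own illustration, not from Rhodes) in which lowering the contact rate delays and reduces the peak share of people infected at once; all parameter values are hypothetical.

    def peak_infected(beta: float, gamma: float = 0.1, days: int = 300) -> float:
        """Crude daily-step SIR model; returns the largest infected fraction."""
        s, i, r = 0.999, 0.001, 0.0
        peak = i
        for _ in range(days):
            new_infections = beta * s * i
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            peak = max(peak, i)
        return peak

    print(f"Unmitigated (beta=0.40): {peak_infected(0.40):.1%} infected at the peak")
    print(f"With distancing (beta=0.15): {peak_infected(0.15):.1%} infected at the peak")

The lower-contact scenario spreads the epidemic over a longer period with a far smaller peak, which is the claim the public was asked to trust at the start of the pandemic.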

5. WMDs in the NYT: Manufacturing consent for another Gulf War

• Halberstam, David. The Best and the Brightest. Fawcett Publications, 1972.

The investigative reporter David Halberstam anticipated the public's turn against institutions, and showed how political science can be used to legitimise war. In this book, he profiles the architects of the war in Vietnam: the so-called "best and brightest". John F. Kennedy recruited "whiz-kid" experts from leadership positions in industry and academia to join his administration to work on foreign policy and craft "brilliant policies that defied common sense", often against the advice of career diplomats with local experience. Halberstam observed a selection bias among the new experts: intellectuals who questioned American interventionism were not moved to leave academia and join Kennedy's cabinet, whereas those who did tended to be motivated by hubris and a desire to forcefully stamp out totalitarianism. They justified their abuses of power by "their own conviction that the Communists were worse, which justified […] dirty tricks [&] toughness". Furthermore, they were moved by the fear of being portrayed by the media and their opponents as weak against communism. This anti-Communist dogma blinded the experts to the lessons of the past. Ultimately, Halberstam describes these technocrats as believing that their sheer intelligence and rationality could solve anything, blinded by ego: they "flaunted [their] intellectual superiority and […] superior academic credentials", which emboldened them to disregard any history of the region they invaded. These "arrogant men of the Atlantic [did not need] to know about such a distant and […] less worthy part of the world [which only contained] second-rate minds". "By 1950, caught up increasingly in our own global vision of anti-Communism, we chose not to see [the Vietnam] war as primarily an anticolonial war […]. Where our money went our rhetoric soon followed. We adjusted our public statements [and] journalism, to make it seem as if this was a war of Communists against anti-Communists, instead, as the people of Vietnam might have seen it, a war of a colonial power against an indigenous nationalist force". History proved this description right: Robert McNamara, one of the main subjects of this book, wrote a book two decades later, In Retrospect, in which he admitted that they acted wrongly: “we didn’t understand the nature of what [we] were doing”.

• Gordon, Michael R. “U.S. Confirms Iraq Has Launched Rocket That Can Carry Satellites.” The New York Times, 9 Dec. 1989, www.nytimes.com/1989/12/09/world/us-confirms-iraq-has-launched-rocket-that-can-carry-satellites.html.

This article by Michael Gordon from December 1989 in the New York Times already shows him credulously passing on State Department officials' fear-mongering about Iraqi weapons, in this case about rockets that could launch a satellite into orbit. In an admirable commitment to stenography of the intelligence community, Gordon also relays State Department officials' disbelief that this "third-world nation"'s missile launch could be scientific in nature.

• Gordon, Michael R., and Judith Miller. “Threats and Responses: The Iraqis; U.S. Says Hussein Intensifies Quest for A-Bomb Parts.” The New York Times, 8 Sept. 2002, www.nytimes.com/2002/09/08/world/threats-responses-iraqis-us-says-hussein-intensifies-quest-for-bomb-parts.html.

This front page story in the New York Times by Judith Miller and Michael Gordon cites anonymous administration officials saying that Saddam Hussein has repeatedly tried to acquire aluminum tubes “specially designed” to build centrifuges to enrich uranium and build an atomic bomb, based on their diameter and thickness. The article was instrumental in shaping elite consensus and manufacturing consent for the war in Iraq before its claims were shown to be false. It incorrectly yet declaratively describes "signs of a renewed Iraqi interest in acquiring nuclear arms", bemoans "Mr. Hussein's dogged insistence on pursuing his nuclear ambition" and states that "Iraq has stepped up its quest for nuclear weapons" by embarking on a "worldwide hunt for materials to make an atomic bomb", all via quotes from administration officials. The article puts forth the wild, unsubstantiated conjecture that Saddam's supposed move to acquire nuclear weapons "might" presage his use of chemical or biological weapons (hypothesizing without evidence that he may unleash a plague of smallpox), because he would no longer fear retaliation. Regardless, the authors make sure to describe the workings of the VX nerve agent in gruesome detail, to imprint clear images of Iraq's imminent threat upon their readership. Gordon and Miller pass on their anonymous government source's ravenous appetite for warmongering: "the question is not ‘why now?’ The question is why waiting is better". The article quotes the anonymous official parroting an alarmist talking point that the president had made in a speech justifying the need to disarm Saddam a day prior to publication: “the first sign of a ‘smoking gun,’ they argue, may be a mushroom cloud”. The debacle depicted in this piece of stenography masquerading as journalism was instrumental in laundering expert legitimisation for the war. The article inadvertently reminds the reader that its claims just so happen to confirm the conspiracy theories of the administration's military hawks (likely to be its sources) who had been calling for war: "Iraq's pursuit of nuclear weapons has been cited by hard-liners in the Bush administration to make the argument that the United States must act now, before Mr. Hussein acquires nuclear arms and thus alters the strategic balance in the oil-rich Persian Gulf". This view echoed Dick Cheney's ominous speech from August 26th, two weeks prior to the article, in which he called for "pre-emptive action" to ward off Saddam's "nuclear blackmail”.

• Miller, Judith. “Threats and Responses: Inspections; Verification Is Difficult at Best, Say the Experts, and Maybe Impossible.” The New York Times, 18 Sept. 2002, www.nytimes.com/2002/09/18/world/threats-responses-inspections-verification-difficult-best-say-experts-maybe.html.

In this article, Miller quotes a variety of officials and former inspectors about the nearly insurmountable obstacles that weapons inspectors would face if they were to return to Iraq. She relays these statements without any independent verification of their claims: "'I don't want to knock the new inspection regime or my successors' efforts or abilities,' said David Kay, a former inspector who led the initial nuclear inspections in Iraq in the early 1990's, 'but their task is damn near a mission impossible'". She also quotes Milton Leitenberg, a scientist and biological weapons expert, saying that his experiences of the past 10 years suggested that "Iraq will do anything but comply" with inspectors. Miller presents this as a consensus view: "all but a small minority [of] weapons experts [...] echoed such skepticism".

• Shafer, Jack. “Follow That Story: Deep Miller.” Slate Magazine, Slate, 23 Apr. 2003, slate.com/news-and-politics/2003/04/follow-that-story-deep-miller.html.

In this article, Shafer frames the question as whether the New York Times has been breaking news or flacking for the military when reporting on Iraqi weapons. Shafer criticises Miller for simply relaying whatever information her military handlers want relayed, without seeking any independent confirmation, and remarks that her copy was "submitted for a check by military officials". This explains why Shafer describes Miller's writing as "more like a government press release than a news story". He remarks that this must be an intentional choice on her part, since "on NewsHour, Miller confides for the first time I’ve seen that she’s embedded with the unit searching for WMD. But, since the embedding rules specifically freed reporters from direct military censorship, inquiring minds want to know: Why did Miller agree to their review?". Shafer was among the first voices to repeatedly call Miller's reporting into question, possibly breaking the dam of the authority conferred upon her by her status at the Times.

• North, David. “War, Oligarchy and the Political Lie.” World Socialist Web Site, 7 May 2003, www.wsws.org/en/articles/2003/05/war-m07.html.

This transcript of a speech by David North addressing university students at Notre Dame outlines how shamelessly the media carried water for the government's wartime propaganda. He intones: "the principal justification given by the United States government [...] for [...] the suffering it has inflicted upon the Iraqi people [...] is that the [Hussein] regime [...] was in possession of so-called 'Weapons of Mass Destruction' that posed an immense and imminent danger to the United States and the rest of the world". He traces the history of this lie: "this was not the invention of the present Bush administration. Saddam’s 'weapons of mass destruction' were invoked by the Clinton administration to justify the bombing campaign that it initiated against Iraq in 1998". It was but one of many justifications made for attacking Iraq, including Colin Powell's stirring speech at the United Nations, which was covered in the Washington Post as "evidence" which "prove[d] to anyone that Iraq [...] still retains [its weapons of mass destruction]. Only a fool —or possibly a Frenchman— could conclude otherwise". He also describes a New York Times column by Tom Friedman that waxes poetic over a close-up photograph of a skull alleged to have belonged to one of Saddam's prisoners, and uses it as a jumping-off point to legitimise the war: "we do not need to find any weapons of mass destruction to justify the war. That skull, and the thousands more that will be unearthed, are enough for me". To North, these two examples are illustrative of the media's cooperation in the drumming up of war: "the mass media was not duped by the Bush administration, but functioned as its willing accomplice in the deliberate deception of the American people. There was nothing that was particularly sophisticated in the government’s propaganda campaign. Much of what it said was contradicted by both established facts and elementary logic. [...] Even when it was established that the administration’s claim that Iraq had sought to obtain nuclear material was based on crudely forged documents, the media chose not to make a major issue of this devastating exposure".

• Massing, Michael. “Now They Tell Us.” The New York Review of Books, 29 Jan. 2004, www.nybooks.com/articles/2004/02/26/now-they-tell-us/.

This article by Michael Massing was instrumental in prompting the New York Times to apologise for its pre-war coverage. Massing criticised the media's pack mentality: "editors and reporters don’t like to diverge too sharply from what everyone else is writing". He notes that not sticking out is necessary in order to maintain access. Massing recalls that before the war, "US journalists were far too reliant on sources sympathetic to the administration. Those with dissenting views [...] were shut out. [...] This was especially apparent on the issue of Iraq’s weapons of mass destruction — the heart of the President’s case for war. Despite abundant evidence of the administration’s brazen misuse of intelligence in this matter, the press repeatedly let officials get away with it". He chastises the Times, not only for lending its imprimatur to the Bush administration's chief claims, advanced with high certitude in the September 8th front page story by Gordon and Miller, but also for "establish[ing] a position at the paper that apparently discouraged further investigation into this and related topics". In Massing's interview with Judith Miller, she describes how she sees her duty as an investigative reporter in the field of intelligence: "my job isn’t to assess the government’s information and be an independent intelligence analyst myself. My job is to tell readers of The New York Times what the government thought about Iraq’s arsenal". Massing counters that "many journalists would [...] [instead] consider offering an independent evaluation of official claims one of their chief responsibilities". Massing clearly describes how officials laundered their story through the New York Times to legitimate it. He quotes Gordon as telling him that "the administration wasn’t really ready to make its case publicly at the time, but somebody mentioned to [him] this tubes thing". Massing notes that the very same morning that Gordon and Miller's front page story was published, Cheney, Powell, Rumsfeld and Rice all referred to the story in televised appearances, lifting the mushroom cloud imagery verbatim. Nuclear experts, however, were skeptical of the New York Times article, because it portrayed the WMD theory as having "wide support, particularly among the government’s top technical experts and nuclear scientists". Former weapons inspector David Albright, having worked with Miller in the past, tried to correct the record: "I asked Judy [Miller] to [...] alert people that [...] there are competent people who disagreed. [...] [The Times] made a decision to ice out the critics and insult them on top of it. People were bitter about that article — it says that the best scientists are with [the administration]". Ultimately, this reflected an editorial prioritisation: "most investigative energy was directed at stories that supported, rather than challenged, the administration’s case". He notes that skeptical coverage was never given prominent placement, and that this was no accident. He quotes the journalist Walter Pincus of the Washington Post describing the situation thus: "Front pages are like writing a memo to the White House. [Newspaper editors] went through a [...] phase in which they didn’t put things on the front page that would make a difference”.

• “FROM THE EDITORS; The Times and Iraq.” The New York Times, 26 May 2004, www.nytimes.com/2004/05/26/world/from-the-editors-the-times-and-iraq.html.

This statement by the New York Times' editors is the result of an internal examination of the paper's errors in covering the march to war. They defensively state that they are proud of the journalism they published in most cases, and that wrong information was "later overt[aken with] stronger information". They single out instances of coverage where information that "was controversial then, and seems questionable now" was allowed to stand unchallenged. They blame this on one of their main sources, a circle of Iraqi defectors fanatically bent on organising 'regime change' in their home country and overthrowing Saddam. These Iraqi exiles' accounts "were often eagerly confirmed by United States officials convinced of the need to intervene in Iraq". The statement does not lay blame at the feet of individual reporters, instead criticising a culture in which the rush for scoops came before skepticism. Regarding the article from September 8th about Saddam building an atomic bomb, the editors argue that that claim "came not from defectors but from the best American intelligence sources available at the time. Still, it should have been presented more cautiously. [...] Administration officials were allowed to hold forth at length on why this evidence of Iraq's nuclear intentions demanded that Saddam Hussein be dislodged from power". The editors note that Times reporters learned five days after publication that the tubes were a subject of debate among intelligence agencies, but that the paper only shared these misgivings in an article buried on page A13, under a headline that did not make clear that it was revising its earlier view.

• Kurtz, Howard. “N.Y. Times Cites Defects in Its Reports on Iraq.” The Washington Post, WP Company, 26 May 2004, www.washingtonpost.com/wp-dyn/articles/A56265-2004May26.html.

This Washington Post article documents the New York Times' acknowledgment that its coverage of weapons of mass destruction "was not as rigorous as it should have been" and that it should have re-examined and challenged claims more aggressively. The article places this reversal in the context of the Times' earlier warmongering zeal: "While many news organizations reported on WMD claims before the war, few did so as aggressively as the Times". Kurtz also remarks upon Judith Miller's questionable proximity to intelligence and her influence: "Miller played an unusually active role while embedded last year with an Army unit searching for weapons of mass destruction, at one point writing to object to a commander's order that the unit withdraw from the field and suggesting she would write about it unfavorably in the Times. The pullback order was later rescinded".

• Foer, Franklin. “The Source of the Trouble.” New York Magazine, Vox Media, 28 May 2004, nymag.com/nymetro/news/media/features/9226/.

In this article, Franklin Foer calls out the hypocrisy of the New York Times joining in the schadenfreude that followed the discrediting of Iraqi defector Chalabi as a source: its "analysis by David Sanger went so far as to name names of individuals who had associated themselves with the discredited leader of the Iraqi National Congress". Foer notes that this list conveniently ended with the phrase "among others", which he describes as a "highly evocative one, because that list of credulous Chalabi allies includes the New York Times’ own reporter, Judith Miller, [who] produced a series of stunning stories about Saddam Hussein’s ambition and capacity to produce weapons of mass destruction, based largely on information provided by Chalabi and his allies—almost all of which have turned out to be stunningly inaccurate". Foer takes issue with the New York Times' revisionism specifically because of how the paper's coverage was cited by the Bush administration to buttress its case for going to war. He points out that Miller's coverage didn't just depend on Chalabi, but "also relied heavily on his patrons in the Pentagon [...] like Richard Perle and Paul Wolfowitz, [who] would occasionally talk to her on the record [along with] a controversial neocon in the [Office of Special Plans] named Michael Maloof". Foer profiles Miller and ascribes part of the problem to her drive and status: "Miller is a star, a diva [who] won big prizes, [...] a newsroom legend. [...] The very qualities that endeared Miller to her editors at the New York Times –her bottomless ambition, her aggressiveness, her cultivation of sources by any means necessary, her hunger to be first— were the same ones that allowed her to get the WMD story so wrong". He remarks that while Miller was often hostile to other reporters, "her sources were her friends". Foer summarises her wrongheaded careerism: "the war in Iraq was going to be Miller’s journalistic victory lap. [...] No other journalist would have such access, which meant she would have the exclusive when they uncovered the WMD stockpiles, the smoking gun”.

• Okrent, Daniel. “Weapons of Mass Destruction? Or Mass Distraction?” The New York Times, 30 May 2004, www.nytimes.com/2004/05/30/weekinreview/the-public-editor-weapons-of-mass-destruction-or-mass-distraction.html.

This piece by the New York Times' Public Editor comments on the paper of record finally addressing its lies on WMDs. Okrent is quick to absolve himself of responsibility, noting that the offending articles preceded his appointment as public editor, but he ponders why the Times had failed to revisit its own coverage of Iraqi weapons from the summer of 2002 to the summer of 2003, given that its readers were left with an "unmistakable [...] impression" that Hussein possessed a frightening arsenal. Okrent's column landed four days after the editors' statement. He deems that statement mostly appropriate, though he questions its placement by relaying a question posed to him by a reader: "Will your column this Sunday address why the NYT buried its editors' note -- full of apologies for burying stories on A10 -- on A10?". Okrent also points out that the blame goes beyond Miller and extends to the Times' editors: "pinning this on Miller alone is both inaccurate and unfair: in one story on May 4, editors placed the headline 'U.S. Experts Find Radioactive Material in Iraq' over a Miller piece even though she wrote, right at the top, that the discovery was very unlikely to be related to weaponry". Okrent also takes editors to task for their refrain that they are only reporting on claims, not confirming them; he argues this logic only applies to statements from people who speak on the record and can be held responsible. "For anonymous sources, it's worse than no defense. It's a license granted to liars". He concludes that the failure was not individual but institutional, and describes the pressures of sensationalism: "You can 'write [your way] onto [page] 1,' as the newsroom maxim has it, by imbuing your story with the sound of trumpets".

• Calame, Byron (The Public Editor). “The Miller Mess: Lingering Issues among the Answers.” The New York Times, 23 Oct. 2005, www.nytimes.com/2005/10/23/opinion/the-miller-mess-lingering-issues-among-the-answers.html.

In this article, the New York Times' new public editor (as of April 2005) looks back upon the "Miller Mess" and lays out three concerns that the Times must face up to: "First, the tendency by top editors to move cautiously to correct problems about prewar coverage. Second, the journalistic shortcuts taken by Ms. Miller. And third, the deferential treatment of Ms. Miller by editors who failed to dig into problems before they became a mess". Calame also picks up on the "troubling ethical issue" of "whether Ms. Miller holds a government security clearance", which would "restrict her ability to share with editors the information she gathers", quoting her writing from the run-up to the war: "the Pentagon had given me clearance to see secret information as part of my assignment 'embedded' with a special military unit hunting for unconventional weapons". Calame notes that the paper's new executive editor has already directed Miller to stay away from all national security issues, and suggests it should be difficult for her to return to the paper. He also calls for an update to the Times' ethics guidelines on granting sources anonymity.

• Matisonn, John. God, Spies and Lies: Finding South Africa’s Future through Its Past. Missing Ink, 2015.

This book recounts, among many other stories, how South African president Thabo Mbeki attempted to convince both Tony Blair and George W. Bush that Iraq was not developing any WMDs. South Africa had particular expertise in the matter, because it had collaborated with Iraq on weapons development in the 1980s. In 2003, Mbeki put together a commission of expert investigators from his country, veterans of its 1980s "Project Coast" weapons programme, and sent them to Iraq to investigate the US and UK's assertions about WMDs, which Saddam agreed to. Matisonn describes how the South African experts put their prior knowledge of the facilities to good use: "they already knew the terrain, because they had travelled there as welcome guests of Saddam back when both countries were building weapons”. On their return, they reported that there were no WMDs in Iraq: “They knew where the sites in Iraq had been, and what they needed to look like. But there were now none in Iraq”. Matisonn writes about how Mbeki tried in vain to lobby Western leaders behind the scenes with these experts' findings, against the evidence being used to justify the war in Iraq.

• Lake, Eli. “Insiders Blame Rove for Covering Up Iraq’s Real WMD.” The Daily Beast, 16 Oct. 2014, www.thedailybeast.com/insiders-blame-rove-for-covering-up-iraqs-real-wmd.

This story relays dynamics within the Bush administration, which in 2006 was under pressure from some Republican lawmakers to keep trying to spin the tale of WMDs in Iraq, even though Karl Rove had told them "we have lost that fight, so better not to remind anyone of it". Republican lawmakers, led by Rick Santorum, nonetheless thought it was a good idea to try to redefine the narrative and shift the goalposts. They repeatedly urged George W. Bush to hold a gaudy stunt press conference, posing in a protective suit with some of the discarded old Iraqi chemical munitions that had recently been uncovered (inactive, and dating from before 1991). A Bush administration official is said to have killed the idea, deeming it "very dangerous" to "have the president near this stuff".

• Schwarz, Jon. “Twelve Years Later, US Media Still Can't Get Iraqi WMD Story Right.” The Intercept, 10 Apr. 2015, theintercept.com/2015/04/10/twelve-years-later-u-s-media-still-cant-get-iraqi-wmd-story-right/.

This article by Jon Schwarz in the Intercept looks back at a bounce in coverage of the WMD story, following a 2014 New York Times series on the US military's mistreatment of soldiers exposed to decades-old chemical weapons in Iraq. Schwarz looks at how this report unleashed a wave of opportunistic coverage from conservatives, who looked to it for vindication of the fact that US troops did eventually stumble across Iraqi chemical munitions after all. Schwarz corrects this interpretation by pointing out that Iraqi leaders didn't know about these stockpiles, abandoned and forgotten during the Iran-Iraq war, and therefore couldn't possibly be described as hiding them, which was the original justification for the war. The article bemoans the fact that this cycle of coverage has "cemented as an article of faith on much of the right that Iraq was concealing weapons of mass destruction".

• Walker, Peter. “The Rock Movie Plot 'May Have Inspired MI6 Source's Iraqi Weapons Claim'.” The Guardian, Guardian News and Media, 6 July 2016, www.theguardian.com/uk-news/2016/jul/06/movie-plot-the-rock-inspired-mi6-sources-iraqi-weapons-claim-chilcot-report.

This article highlights a humorous revelation uncovered by the Chilcot public inquiry, the UK's official investigation into the decision to go to war in Iraq. It appears as though a key piece of intelligence describing Saddam's chemical arsenal was actually lifted wholesale from The Rock, a 1996 thriller starring Sean Connery and Nicolas Cage. Glass containers are not typically used to store chemical munitions, yet the MI6 source's account closely matched the movie's inaccurate depiction of glass beads as vessels for nerve agents.

6. COVID’s Origins: Evolving signifiers behind the lab leak hypothesis

• Pompeo, Joe. “‘The Discussion Is Basically Over’: Why Scientists Believe the Wuhan-Lab Coronavirus Origin Theory Is Highly Unlikely.” Vanity Fair, 8 May 2020, www.vanityfair.com/news/2020/05/why-scientists-believe-the-wuhan-lab-coronavirus-origin-theory-is-highly-unlikely.

This article, written a couple of months into the global Covid-19 pandemic, casts the theory that COVID might have leaked out of a lab as little more than a pawn in the Trump administration's new cold war with China. It draws a parallel between the intelligence community's pushing of the lab leak theory and its lies about Iraqi weapons: "American allies […] appear to be freaked out by the rumor-mongering". Joe Pompeo quotes a former senior Australian security official: "We can’t repeat the mistakes of the past. The WMDs fiasco was not that long ago”. The author makes a point of saying that passing judgement on these theories is a feat only accessible to experts: "to laypeople, the research is esoteric, if not inscrutable". Consequently, he lets researchers speak for themselves verbatim, uncritically printing Kristian Andersen's absurdly high threshold for what data would be necessary to establish a link to the lab: Andersen brazenly claims the only valid proof of a lab leak would be a suicide note from a whistleblower at the lab explaining that COVID was their experiment, along with a frozen sample. This article is illustrative of the expert consensus dismissing the lab leak theory, as well as the smug authoritative tone plastered over unquestioning reporting.

• Jacobsen, Rowan. “Could Covid-19 Have Escaped from a Lab?” Boston Magazine, 9 Sept. 2020, www.bostonmagazine.com/news/2020/09/09/alina-chan-broad-institute-coronavirus/.

This story traces Broad Institute (of MIT and Harvard) researcher Alina Chan's search for the viral origins of COVID, and the extreme backlash she received for suggesting that scientists shouldn't rule out the hypothesis that the virus leaked out of a laboratory. When listing the incentives for disregarding the lab leak hypothesis, the article quotes Richard Ebright, a Rutgers microbiologist and another founding member of the Cambridge Working Group, as saying that “For the substantial subset of virologists who perform gain-of-function research, avoiding restrictions on research funding, avoiding implementation of appropriate biosafety standards, and avoiding implementation of appropriate research oversight are powerful motivators”. The biomedicine editor of MIT Technology Review, Antonio Regalado, went so far as to suggest in a tweet from March 2020 that if it turned out COVID-19 came from a lab, “it would shatter the scientific edifice top to bottom”. Alina Chan concludes that scientists shouldn't be censoring themselves, and that they are doomed to lose credibility and the public's trust if they take it upon themselves to decide what people should be allowed to know.

• Wade, Nicholas. “The Origin Of COVID: Did People or Nature Open Pandora's Box At Wuhan?” Bulletin of the Atomic Scientists, 5 May 2021, https://thebulletin.org/2021/05/the-origin-of-covid-did-people-or-nature-open-pandoras-box-at-wuhan/.

This story helped turn the tide of perceptions of the lab leak theory. It criticises the bad science on display in two early authoritative dismissals of the theory, open letters signed by scientists which both framed inconclusive speculation as settled: ”The Daszak and Andersen letters were really political, not scientific, statements, yet were amazingly effective. Articles in the mainstream press repeatedly stated that a consensus of experts had ruled lab escape out of the question or extremely unlikely". Even science journalists unquestioningly repeated the dogma passed on by these expert letters, choosing not to challenge them: "The virologists’ omertà is one reason. Science reporters, unlike political reporters, have little innate skepticism of their sources’ motives; most see their role largely as purveying the wisdom of scientists to the unwashed masses. So when their sources won’t help, these journalists are at a loss". Wade partially blames an academic chilling effect: "Science is supposedly a self-correcting community of experts who constantly check each other’s work. So why didn’t other virologists point out that the Andersen group’s argument was full of absurdly large holes? Perhaps because in today’s universities speech can be very costly. Careers can be destroyed for stepping out of line. Any virologist who challenges the community’s declared view risks having his next grant application turned down by the panel of fellow virologists that advises the government grant distribution agency”. He also ascribes it to self-preservation, noting how Daszak used authoritative dismissals to protect his own business interests: "he immediately launched a public relations campaign to persuade the world that the epidemic couldn’t possibly have been caused by one of the institute’s souped-up viruses".

• McNeil, Donald G. “How I Learned to Stop Worrying and Love the Lab-Leak Theory.” Medium, 17 May 2021, donaldgmcneiljr1954.medium.com/how-i-learned-to-stop-worrying-and-love-the-lab-leak-theory-f4f88446b04d.

This post by Donald McNeil, who was the New York Times' chief COVID reporter, traces the internal newsroom debates around the legitimacy of the lab leak theory, which was at first largely pushed by the intelligence community, and how the newsroom granted more credibility to the scientists, who were largely unanimous in dismissing it. He also notes how editors set aside the idea that COVID escaped from a lab in Wuhan because it had been voiced by Trump, which made it easy to dismiss as a conspiracy theory. Reading Wade's reporting two weeks prior, along with finding out more about China's persistent secrecy on the matter, appears to have further opened McNeil's mind to the possibility of a lab leak, which led him to call for a renewed investigation.

• Gordon, Michael R., Warren P. Strobel, and Drew Hinshaw. “Intelligence on Sick Staff at Wuhan Lab Fuels Debate on Covid-19 Origin.” The Wall Street Journal, Dow Jones & Company, 23 May 2021, www.wsj.com/articles/intelligence-on-sick-staff-at-wuhan-lab-fuels-debate-on-covid-19-origin-11621796228.

This Wall Street Journal article features Michael Gordon, who previously reported on Iraqi WMDs, doing what he does best: carrying water for the intelligence community by giving it a narrative that can justify conflict. In spite of Gordon's penchant for militarism, his story may well be true nonetheless, at least in part. It provides plausible circumstantial evidence that could indicate a cover-up, describing how three members of laboratory staff in Wuhan reportedly fell ill in late 2019 before the outbreak, and how the Chinese authorities destroyed DNA evidence, shut down the first team that sequenced the COVID genome, and started suppressing reporting back in November 2019. Critics of this story tried to dismiss it by arguing that there was nothing particularly noteworthy about three people in Wuhan, a city of 11 million, exhibiting flu symptoms during flu season.

• Kessler, Glenn. “Timeline: How the Wuhan Lab-Leak Theory Suddenly Became Credible.” The Washington Post, WP Company, 25 May 2021, www.washingtonpost.com/politics/2021/05/25/timeline-how-wuhan-lab-leak-theory-suddenly-became-credible/.

This article establishes a chronology of the public positions taken, up to its publication, on the plausibility of COVID having leaked from the Wuhan laboratory. Glenn Kessler, the Washington Post's own fact checker, lists the order in which the narrative was shaped and shifted: at first, there were early speculations at the start of the pandemic. These were swiftly stopped when the scientists closed ranks in mid-February 2020 with a statement by 27 scientists in The Lancet, drafted by Peter Daszak, in which they overwhelmingly concluded that the virus emerged in wildlife. This incidentally provided the basis for Kessler's own authoritative dismissals of the lab leak theory in his fact-checking column. Then, around April 2020, the intelligence community weighed in, giving more credence to the theory. In July 2020, new evidence to substantiate the theory emerged.

• Tufekci, Zeynep. “Checking Facts Even If One Can’t.” The Insight, 25 May 2021, www.theinsight.org/p/checking-facts-even-if-one-cant.

This post by Zeynep Tufekci traces how quickly the lab leak theories went from being considered misinformation to being acceptable in mainstream discourse, within the space of one week. She considers that the "fact checking" publications were too quick to defer to the perceived expert consensus, and dismissed the theory with a degree of certainty that the evidence did not warrant.

• Taibbi, Matt. “‘Fact-Checking’ Takes Another Beating.” TK News by Matt Taibbi, 25 May 2021, taibbi.substack.com/p/fact-checking-takes-another-beating.

Matt Taibbi focuses on Anthony Fauci's answer at a fact-checking festival to a question about whether COVID had natural origins (he was “not convinced” that COVID developed naturally) as evidence of a breaking point: the lab leak hypothesis had become acceptable to voice. He notes the irony of the venue, since Fauci's position would until recently have been treated as fringe. Taibbi frames the answer thus: "Fauci’s new quote about not being “convinced” that Covid-19 has natural origins, however, is part of what’s becoming a rather ostentatious change of heart within officialdom about the viability of the so-called “lab origin” hypothesis". He argues this illustrates how fact-checkers function more as a safety mechanism than as infallible verification.

• Lima, Cristiano. “Facebook No Longer Treating 'Man-Made' COVID as a Crackpot Idea.” POLITICO, 26 May 2021, www.politico.com/news/2021/05/26/facebook-ban-covid-man-made-491053.

This article in Politico documents Facebook's reversal of its decision to flag posts about COVID having leaked from a lab as dangerous conspiracies, which allowed such posts to appear in people's feeds again. Facebook's decision to remove "lab leak" from its running list of debunked claims about COVID "due to the renewed debate about the virus' roots" acted as a kind of lagging indicator of the shift in expert consensus to accommodate the theory: a spokesperson said that Facebook "would no longer remove the claim that COVID-19 is man-made from [its] apps”. It is a reminder of how arbitrary the categories of acceptable theory are at any given moment, and of how these parameters end up setting the boundaries of the discourse people are allowed to be exposed to.

• Yglesias, Matthew. “The Media's Lab Leak Fiasco.” Slow Boring, 26 May 2021, www.slowboring.com/p/the-medias-lab-leak-fiasco.

In this article, Yglesias looks at the media's treatment of the lab leak story, and the pressures that might have kept journalists from looking into its claims: "Unless you […] want to be known to the world as a follower of Cotton-Pompeo Thought, it is not very compelling to speak up in favor of a minority viewpoint among scientists. […] If you secure your impression of what “the scientists” think about something from scanning Twitter, you will perceive a consensus that is not really there. If something is a 70-30 issue but the 30 are keeping their heads down, it can look like a 98-2 issue. […] Almost everyone is disproportionately avoiding statements they believe to be locally unpopular in their community. […] My strong suspicion is that this is true across domains of expertise, and is creating a lot of bubbles of fake consensus that can become very misleading”.

• Chait, Jonathan. “How Twitter Cultivated the Media's Lab-Leak Fiasco.” New York Magazine, Vox Media, 26 May 2021, nymag.com/intelligencer/2021/05/lab-leak-media-liberals-covid-china-biden-fauci-investigation.html.

In this article, Jonathan Chait explains that he does not much care whether the lab leak theory is true, because either answer would imply little difference in policy. He does, however, find that the dismissal of the lab-leak hypothesis reveals "the vulnerabilities in the mainstream- and liberal-media ecosystem", and traces much of its groupthink back to Twitter. Chait describes Twitter as "the milieu in which the opinions of elite reporters take shape", but also "very often […] a petri dish of tribalism and confirmation bias". He notes that journalists framed restrained quotes, in which experts merely shared an inclination that the virus was of natural origin, as though they were virulent denials of the hypothesis. He argues this happened because publications were driven by the desire to contradict Trump (or Trumpists like Tom Cotton): they "embraced a 'moral clarity' ethos of forgoing traditional journalistic norms of restraint and objectivity in favor of calling out lies and bigotry". He finds that this motivated reasoning has blinded people from pursuing truth: "Progressive advocates will take strong positions on a factual question, such as whether COVID-19 originated inside or outside a laboratory, based entirely on how they believe political actors will use the answer". He bemoans how the loudest voices were also incurious: "The problem is that the people with the strongest views had the weakest interest in the truth. An asymmetry of passion between their insistence that the lab-leak hypothesis was false and racist and the weaker feelings of others — who, at most, believed the hypothesis was only possibly true — created a stampede toward the most extreme denial”.

• Damon, Andre. “Author of Wall Street Journal ‘Wuhan Lab’ Story Wrote Lies about Iraqi ‘Weapons of Mass Destruction.’” World Socialist Web Site, 1 June 2021, www.wsws.org/en/articles/2021/06/01/wuha-j02.html.

This article calls into question the legitimacy of Michael R. Gordon's May 23rd report in the Wall Street Journal by profiling his close ties to the intelligence community and detailing his previous efforts at uncritically laundering military talking points, reporting the tips his connections feed him as “exclusive leaks”. While it could be argued that this article's approach verges on an ad hominem fallacy, its perspective is of particular interest because it draws a direct connection to Michael R. Gordon's role in getting inaccurate “WMD in Iraq” stories published in the New York Times in late 2002, in the run-up to the war. Although Gordon had co-authored those stories with Judith Miller, she took the brunt of the blame and was pushed out of the paper over the episode, while he has remained a relatively respected Pentagon reporter, as evidenced by the relatively few objections to his byline on this piece by those who shared it. The argument laid out here is that there is a concerted effort to blame China for the pandemic, and that the lab leak hypothesis can be wielded for sabre-rattling with China, which would presumably be a boon for military contractors.

• Frank, Thomas. “If the Wuhan Lab-Leak Hypothesis Is True, Expect a Political Earthquake.” The Guardian, Guardian News and Media, 1 June 2021, www.theguardian.com/commentisfree/2021/jun/01/wuhan-coronavirus-lab-leak-covid-virus-origins-china.

In this article, Thomas Frank applies his insights about the blind spots of elite consensus to the COVID lab leak theory. While he initially deferred to the judgement of the medical community and dismissed the lab leak story as a crackpot conspiracy, he changed his mind after reading Nicholas Wade's story in the Bulletin of the Atomic Scientists. He notes that reporters had mostly just been passing on information without assessing its accuracy, considering only the credibility of the person telling them. He remarks that the terms of debate on this theory were set by experts: the virologists' letter in The Lancet dismissing the theory as conspiracy was coordinated by Peter Daszak, a researcher whose reputation (and therefore funding) would have been tarnished if it turned out to be true. The experts' line was then algorithmically enforced by blocking any social media posts that contained the key words. This social media ban remained in place until Biden asked for an investigation into COVID's origins in late May 2021, thereby granting elite accreditation to the theory. Frank posits that because "reality punished leaders like Trump who refused to bow to expertise” during the pandemic (regarding masks and vaccines), Trump's embrace of the lab leak theory led many in the media to dismiss it as equally unmoored from facts. Meanwhile, this emerged during a moment of crisis in which public health experts were put in charge of policy, and in which unquestioning trust in experts had become a cultural signifier, as illustrated by the proliferation of "In this house, we believe in science" yard signs. This speaks to a conflation of expertise and personal virtue, pushed by an elite attempting to rationalise its aristocratic ruling position under the guise of meritocracy: the belief that they must somehow deserve their standing because they're better people. Frank warns that these conditions are ideal for a failure of elite groupthink: he deems blind faith in "the word of experts" as wrongheaded as blind faith in Fox News. He is concerned that sorting issues into these two cultural poles creates a Manichaean worldview. He concludes this will likely lead the public to lose trust in institutions (science, media) and the professional elite. In turn, elites are likely to lose trust in regular people and favour a more anti-populist, technocratic government.

• Eban, Katherine. “The Lab-Leak Theory: Inside the Fight to Uncover Covid-19's Origins.” Vanity Fair, 3 June 2021, www.vanityfair.com/news/2021/06/the-lab-leak-theory-inside-the-fight-to-uncover-covid-19s-origins.

This story by Katherine Eban retraces the efforts of various members of the intelligence community to look into the lab leak theory. Eban observes how researchers were hampered by conflicts of interest: many virologists who received grants for "gain-of-function" research (creating new pathogens) had a vested interest in discounting the possibility of a lab leak. She quotes former Deputy National Security Advisor Matthew Pottinger, who built a COVID-19 origins team under NSC supervision, as saying that these experts' "conflicted status played a profound role in muddying the waters". Eban uncovers multiple warnings in emails by government officials warning researchers off looking into the topic, which they described alternately as a "can of worms" or a "Pandora's Box". She notes that these had the opposite effect on some researchers, who became more suspicious of a cover-up and continued sifting through old intelligence reports on a quasi-freelance basis. She also notes how this was enforced by scientists themselves: former CDC director Robert Redfield received death threats from fellow scientists after a CNN appearance in which he lent credence to lab origin as a possible hypothesis for COVID's origin. As he put it, he "was threatened and ostracized because [he] proposed another hypothesis". This kind of closing of ranks runs counter to the scientific method of testing evidence.

• Engber, Daniel. “Don't Fall for These Lab-Leak Traps.” The Atlantic, Atlantic Media Company, 10 June 2021, www.theatlantic.com/ideas/archive/2021/06/lab-leak-trap/619150/.

This article comments on the recent uptick in coverage of the lab leak theory, and suggests that readers should be wary of three common traps. First, Engber warns against conflating the absence of evidence (in this case, the lack of evidence to prove the lab leak theory) with evidence of absence (definitive proof that COVID didn't leak from a lab), and points out that new information has been reported since the pandemic started. He advises readers: "The mere existence of evidence for the lab-leak hypothesis, or of new evidence since last spring, is not up for debate. When the experts tell you otherwise, take it as hyperbole: 'This evidence is so lame, it might as well not exist'. Then try to understand why they think it’s lame". Secondly, he warns of the "Mad Scientist Trap", the idea that COVID was the result of reckless experimentation or an attempt to manufacture a bioweapon, whereas microbiologists tend to invoke more innocuous accidents as the likeliest non-natural possibilities. Engber argues that this disturbing yet improbable interpretation makes the lab leak rest upon nefarious scientific motivations, and risks discrediting other non-natural explanations. Finally, Engber warns against the "Culture-War Trap", which seeks to impose the frame of "cancel culture" and "political correctness" to explain why certain scientists refused to speak up. Ultimately, he rejoices that the vow of silence on the theory has been lifted.

• Ling, Justin. “The Lab Leak Theory Doesn't Hold Up.” Foreign Policy, 15 June 2021, foreignpolicy.com/2021/06/15/lab-leak-theory-doesnt-hold-up-covid-china/.

This article by Justin Ling goes through a timeline of reporting about the lab leak theory and challenges its legitimacy. He argues that the theory is based not on a search for evidence but on a search for a narrative. Ling remarks that humankind loves to invent stories, and that we tend to blame humans for pandemics, even though nature is almost always the culprit behind infectious diseases "when they can be explained". He traces the lab leak theory back to the fringes of "shady and disreputable websites armed with little more than questions and supposition", which argued that the media's silence was evidence of complicity, until it was picked up and amplified by Fox News, which led it to be endorsed by Trump. Ling argues the shift in discourse around the lab leak is based not on newly uncovered evidence but on narrative. He dismisses the evidence of three people being ill in Wuhan as not noteworthy. He also notes that it would be logistically very difficult to conceal a novel hyperinfectious virus, implying that the theory is not as straightforward as it might sound.

• Tufekci, Zeynep. “Where Did the Coronavirus Come from? What We Already Know Is Troubling.” The New York Times, 25 June 2021, www.nytimes.com/2021/06/25/opinion/coronavirus-lab.html.

In this New York Times piece, Zeynep Tufekci considers hypotheses for Covid's origins. She goes through the history of the virologists at Wuhan and the re-emergence of pathogens. She dismisses the likelihood that the virus' release could have been premeditated: "The secrecy and the cover-ups have led to some frantic theories — for example, that the virus leaked from a bioweapons lab, which makes little sense, since, for one thing, bioweapons usually involve more lethal pathogens with a known cure or vaccine, to protect those who employ them". But she notes a number of lax practices that could have led to an accidental leak. She bemoans a lack of caution in investigating this class of pathogens, notably the fact that researchers at the Wuhan Institute of Virology had built up a live bat colony inside the lab. Instead, Tufekci suggests that "bats should be treated as a serious threat in labs". Apparently, the Wuhan institute reported experimenting in 2016 on a live bat coronavirus that could infect human cells in a BSL-2 lab (a biosafety level equivalent to a dentist's office). Furthermore, she notes that "nearly every SARS case since the original epidemic has been due to lab leaks - six incidents in three countries, including twice in a single month from a lab in Beijing".

• Marcotte, Amanda. “Great Work, Useful Idiots of the Media: Most Americans Buy the Unsubstantiated ‘Lab Leak’ Theory.” Salon, 9 July 2021, www.salon.com/2021/07/09/great-work-useful-idiots-of-the-media-most-americans-buy-the-unsubstantiated-lab-leak-theory/.

This article by Amanda Marcotte warns its readership to stay vigilant about the lab leak story, which has been gaining in popularity. She expresses concern about this story being instrumentalised in the same way that WMDs were: "As with Saddam Hussein's mythical ‘weapons of mass destruction’, which led to the Iraq War, evidence for the "lab leak" theory is mostly right-wing wishful thinking, tied to a couple of thin pieces of not-really-evidence, and held together with the duct tape of speculation". She draws out this comparison twice more: "As the 'WMDs in Iraq' fiasco shows, the stakes are extremely serious when speculation about foreign adversaries is allowed to dwarf actual facts in the press". Finally, once more, she refers to the Wall Street Journal story by Michael Gordon about three ill employees at Wuhan, and how it opened the floodgates for discussing the lab leak theory "in a way strikingly reminiscent of the hype around WMDs during the lead-up to war in Iraq". Marcotte posits that the lab leak theory amounts to a group of racist nationalists collectively duping the people into buying a rationale for war with China. She therefore bemoans the fact that the lab leak theory has been gaining in popularity, to the point of being twice as widely believed among Americans as the animal-to-human transmission story. She seems more concerned by who is espousing views than by the ideas themselves, and places a lot of importance on the aesthetic characteristics of writing: to back up her incorrect assertion that "no new evidence" emerged to support the theory, she quotes a piece in the New Republic which she describes as "science-heavy" and "deeply technical", as though the complexity itself were an argument from authority. Yet she is quick to point out this flaw in critical thinking as it applies to the unwashed masses: "most news consumers don't do a great job of distinguishing speculation from evidence, especially when it's presented in a bunch of scientific jargon. Instead, they just absorb phrases like "lab leak," assume that must be a fact, and get on with their day".

• MacLeod, Alan. “‘Unchallenged Orientalism’: Why Liberals Suddenly Love the Lab Leak Theory.” Monthly Review Online, 13 July 2021, mronline.org/2021/07/13/unchallenged-orientalism-why-liberals-suddenly-love-the-lab-leak-theory/.

This article argues that the Biden administration simply took up the case that the intelligence community had been building under Trump to manufacture a conflict with China. Biden's team picked up where the previous administration left off by opening an investigation into COVID's origins, thereby granting a new sheen of respectability to the lab leak theory. MacLeod rightfully points out that in spite of rhetorical disagreements during campaigns, there is an awful lot of continuity in American foreign policy as carried out by State Department personnel and intelligence experts. He remarks that the public has been primed to want to punish China: 83% of Americans would support sanctioning China if the lab leak were proven correct, which he describes as the result of a domestic influence operation by military hawks: ”[that level of support for sanctions on China is] music to the neocons’ ears, who likely can barely believe that so many progressive, anti-war voices are going along with their theory". MacLeod concludes by comparing the situation to the stories about Iraqi WMDs: "The lab leak theory bears a striking resemblance to the weapons of mass destruction hoax of 2002-03, not only in the fact that one of its key players is literally the same journalist using potentially the same anonymous sources, but also in the bipartisan political and media support for the project, all while ignoring the opinions of the scientific community”. He illustrates how WMDs have been invoked both by people espousing and by people discrediting the lab leak theory: “That so many in alternative media who question war and U.S. intervention not only cannot see that, but are invoking the WMD story to bolster their own side, is extraordinary".