The Fallacy of Scientific Consensus: A Critical Examination

I have engaged in numerous discussions surrounding the notion of scientific consensus, and the latest debate has compelled me to explore this topic further. The concept of scientific consensus gained traction after reports indicated that 97% of research papers on climate change affirm its occurrence. While I won’t delve into the validity of climate change theories, it’s crucial to highlight several concerns regarding the reliance on consensus among scientists.

To begin with, the landscape of peer-reviewed publishing is dominated by a small group of authors. A study titled "Estimates of the Continuously Publishing Core in the Scientific Workforce" analyzed the Scopus database and found that, of the roughly 15,153,100 unique authors who published between 1996 and 2011, only 150,608 (less than 1%) published continuously throughout those years. That small subset produced 41.7% of all papers in the period and accounted for 87.1% of the highly cited ones.

Essentially, the output of about 1% of all publishing scientists constitutes a substantial portion of academic literature from 1996 to 2011. This suggests that analyses of published works may be heavily biased, reflecting the perspectives of this small group.
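As a quick sanity check on those figures (a trivial computation using only the numbers quoted above):

```python
# Figures quoted above from "Estimates of the Continuously Publishing Core
# in the Scientific Workforce" (Scopus, 1996-2011).
total_authors = 15_153_100
core_authors = 150_608

print(f"continuously publishing core: {core_authors / total_authors:.2%} of all authors")
# -> 0.99%: just under 1% of the author pool, credited with 41.7% of all
#    papers and 87.1% of highly cited papers over the same window.
```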

Beyond the "Academic 1%," other biases exist within the scholarly community. Research titled "Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data" indicates that the "publish or perish" culture can lead to a bias favoring "positive results." Studies that yield inconclusive findings or contradict prevailing theories tend to be disregarded, while those demonstrating positive outcomes receive more attention. This trend is concerning, as the primary goal of science should be to test and potentially disprove theories rather than merely support them.
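To make that mechanism concrete, here is a toy simulation of my own (not from the cited study), in which every effect under investigation is truly zero, yet publishing only "positive" results produces a literature full of apparent effects:

```python
# Toy simulation of publication bias: all studies investigate a true null
# effect, but only "positive, significant" results get published.
import random
import statistics

random.seed(0)

N_STUDIES = 1000   # independent studies of a true null effect
N_PER_STUDY = 30   # observations per study

published = []
for _ in range(N_STUDIES):
    sample = [random.gauss(0.0, 1.0) for _ in range(N_PER_STUDY)]
    effect = statistics.mean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    # crude one-sided z-test at alpha = 0.05
    if effect / se > 1.645:
        published.append(effect)

print(f"{len(published)} of {N_STUDIES} null studies came out 'positive'")
print(f"mean published effect: {statistics.mean(published):.2f} (true effect is 0)")
```

Roughly 5% of the null studies clear the significance bar by chance, and because only those are published, the record shows a consistent nonzero effect where none exists.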

Neil deGrasse Tyson once tweeted that “anyone who thinks scientists like agreeing with one another has never attended a scientific conference,” which holds some truth. However, it’s also evident that scientists often hesitate to express views that deviate significantly from the consensus. A historical example is the rivalry between Isaac Newton and Robert Hooke. While Newton is widely recognized, fewer know about Hooke, who once served as the curator of experiments for the Royal Society. Hooke championed the wave theory of light, while Newton supported the particle theory. Newton delayed publishing certain findings until after Hooke’s death, reflecting the fear of backlash for contradicting established views.

While this doesn’t completely invalidate the notion of using scientific consensus to gauge the strength of a theory, it does raise questions about the uncritical faith placed in consensus. To demonstrate that consensus is an unreliable measure, one would need to present statistically significant evidence. Conversely, those asserting that consensus is a valid measure must also provide supporting data.

Appeal to Authority

A philosophical critique also exists regarding the credibility of consensus, even among experts. The appeal to authority is often deemed fallacious; however, it is only a fallacy when the authority in question lacks expertise on the subject. For instance, if someone argues that the Earth is flat based solely on their parents' claims, that’s a false appeal to authority. Conversely, citing a NASA astronaut’s assertion that the Earth is round constitutes a valid appeal.

The validity of such an appeal hinges on the assumption of expertise. When relying on an authority, it is presumed that they possess comprehensive knowledge of the topic and are willing to acknowledge any gaps in their understanding. Under that assumption, appealing to a second authority yields no additional information: the first is already presumed to know everything relevant.

Despite this assumption, relying on consensus presents its own challenges. In response to my arguments, Jeremiah Traeger posed intriguing questions:

Under Bayesian reasoning, if nine out of ten dentists indicate that you have a cavity, does this increase or decrease the likelihood that you actually have one? If nine out of ten doctors say you have cancer, do you pursue treatment? Similarly, if a survey reveals that 97 out of 100 climate scientists assert global warming is real, how should one interpret that? — A Tippling Philosopher

These questions are compelling, yet they have no definitive answers, and Tippling appeared to treat them as self-justifying rather than analyzing them. Bayesian inference requires assumptions about how knowledge is distributed among the individuals polled. Does each expert contribute independent information? If so, how much do their sources of evidence differ? Even where differences exist, they may be so small that each additional agreeing expert barely changes the posterior. Merely invoking Bayesian inference therefore justifies nothing; more information is needed.
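To illustrate what hangs on those assumptions, here is a minimal Bayesian sketch. The prior, sensitivity, and specificity values are illustrative assumptions of mine, not figures from anyone's argument:

```python
# Minimal sketch: how much nine-out-of-ten agreement is worth depends on
# whether the experts' judgments are independent. All numbers here are
# illustrative assumptions, not data from the article.

def posterior(prior, n_yes, n_no, sens=0.9, spec=0.8):
    """P(condition | verdicts), treating each verdict as an independent test.
    sens = P(expert says yes | condition); spec = P(expert says no | healthy)."""
    like_if_true = (sens ** n_yes) * ((1 - sens) ** n_no)
    like_if_false = ((1 - spec) ** n_yes) * (spec ** n_no)
    evidence = prior * like_if_true + (1 - prior) * like_if_false
    return prior * like_if_true / evidence

# Nine of ten dentists say "cavity", each having examined you independently:
print(posterior(prior=0.10, n_yes=9, n_no=1))  # ~0.9999: a strong update

# All ten read the SAME X-ray, so their verdicts collapse to a single datum:
print(posterior(prior=0.10, n_yes=1, n_no=0))  # ~0.33: a far weaker update
```

The same nine-out-of-ten headcount is overwhelming evidence in the first case and modest evidence in the second; until we know how independent the experts' information is, the headcount alone settles nothing, which is precisely the missing assumption.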

It is concerning how many individuals accept scientific consensus without question. The premise that we can gauge a theory's strength based on the number of scientists supporting it is intriguing, yet untested. This position is itself a falsifiable claim and should be rigorously examined before drawing conclusions. Until such testing occurs, our focus should be on scrutinizing the data and the theory, assessing their alignment with actual observations through meta-analyses.
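For what it is worth, that premise is testable in principle. Here is a sketch of the shape such a test could take, using purely hypothetical placeholder numbers rather than real historical data:

```python
# Sketch of how "consensus level predicts a theory's robustness" could be
# tested: gather historical episodes, record the consensus share at the
# time and whether the theory later survived scrutiny, then compare.
from statistics import mean

# Each tuple: (share of scientists endorsing the theory at the time,
#              whether the theory ultimately survived scrutiny).
# These values are placeholders, NOT real historical data.
episodes = [
    (0.95, True), (0.90, True), (0.85, False),
    (0.70, True), (0.60, False), (0.55, False),
]

survivor_shares = [share for share, survived in episodes if survived]
refuted_shares = [share for share, survived in episodes if not survived]

# If consensus tracked robustness, surviving theories would show a markedly
# higher consensus share (on a real, much larger sample, with a proper
# significance test).
print(f"mean consensus, survived: {mean(survivor_shares):.2f}")
print(f"mean consensus, refuted:  {mean(refuted_shares):.2f}")
```

A genuine study would need a large sample of historical episodes and an agreed-upon criterion for whether a theory "survived"; the point is only that the claim is empirical, not self-evident.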

Pertussis

In my initial draft, I neglected to include an example of a consensus lacking empirical support. My research into the effectiveness of B. pertussis vaccines has revealed a substantial consensus regarding their efficacy, yet the scientific data do not consistently support it. Several studies indicate that while the vaccines may prevent disease, they do not necessarily prevent infection. The medical community largely overlooks these findings, and no systematic effort has been made to confirm or refute them. Consequently, pertussis could be approaching epidemic levels without detection, since many infections remain asymptomatic. Here is my analysis of a Chinese study that could easily be replicated in the United States.

Update

This discussion has evolved over time, and I find the responses I receive intriguing. One notable trend is the shifting of the burden of proof from those engaged in philosophical discourse. A recent rebuttal to my arguments exemplifies this shift.

Tu Quoque, my friend

A significant criticism of my position is that my own arguments can be turned back on me. I had written:

“I’m still waiting for empirical data consistent with the assertion that the percentage of scientists that agree with a theory is a valid measure of the robustness of a theory.”

To which the critic, Tippling, counters:

“I’m still waiting for empirical data consistent with the assertion that the percentage of scientists that agree with a theory is NOT a valid measure of its robustness.”

The critic demands empirical evidence for the negation, yet fails to recognize where the burden of proof lies: proponents of consensus are the ones making the positive assertion, while critics are merely rebutting it.

This challenge of falsifiability applies to both claims. Neither can be confirmed; only disproven. Which assertion will ultimately be shown to be false?

Now, the statement made by Tippling would be reasonable if I had indeed claimed that scientific consensus is not a valid measure of theory robustness. However, I never stated such a claim. I responded to the assertion that it is a reasonable measure and called for evidence to support that claim. Therefore, Tippling's statement is merely an attempt to force me to defend a dismissal of an unsubstantiated assertion.

Doctrine

Since writing this article, I have encountered numerous calls to rely on scientific consensus, along with a troubling pattern: academia has largely hardened into a system of doctrine. Its cult-like aspects are evident in the reactions to a question posed on Stack Exchange asking whether one should publish findings that contradict previously established mathematical results.

My answer is yes. The order in which results are published does not alter their validity; what matters is the result itself. If no errors are apparent in either result upon review, both should be regarded as equally credible. To imply otherwise (1) introduces doctrine and (2) suggests that the likelihood of a result being correct depends on the order in which it was presented, an absurd notion that nonetheless shapes consensus. Whatever its merits, any finding that contradicts the consensus faces heightened scrutiny and thus requires more substantial evidence than if it had been published first and accepted as the "truth."