
RealClimate: The modern demarcation problem

Defining (and enforcing) a clear line between information and misinformation is impossible, but that does not mean that misinformation does not exist or that there is nothing to be done to combat it.

I got caught up in a series of “interesting” exchanges on Twitter a few weeks ago (I won’t link to them, to save you the trouble, but you could probably find them if you search hard enough). The nominal question was whether deplatforming known bullshitters is useful in stemming the spread of misinformation (especially when it comes to discussions around COVID). There is some evidence that it does in fact work to some degree, but the thread quickly turned to the question of who decides what misinformation is in the first place, and then descended into a free-for-all in which even the suggestion that misinformation exists, or that the scientific method provides a ratchet to detect it, was met with knee-jerk references to the Nazis and the Inquisition. So far, so usual, right?

While the specific thread wasn’t particularly uplifting, and I’ll admit my tweets weren’t perfectly tailored to a specific audience, this is a very modern example of the classic demarcation problem (Stanford Encyclopedia of Philosophy) in the philosophy of science.

Science and pseudo-science

The demarcation problem is classically related to the difficulty of identifying general principles that distinguish real science from pseudo-science. Anyone can name what (for them, and perhaps for many others) are examples of pseudo-science: astrology, homeopathy, cold fusion, etc., but coming up with defining characteristics that include “pseudo-science” while excluding “science” is very difficult (and perhaps impossible).

For example, the popularity of a scientific idea is not a useful measure, because many ideas that were initially marginal later became the consensus (perhaps all consensus ideas were marginal at some point?). More fruitfully, perhaps the way in which pseudo-scientific ideas are defended is salient? Obviously, ideas supported only by logical errors, cherry picking, or rhetorical tricks should be discounted, and these are common markers of misinformation. However, the existence and use of mediocre arguments does not preclude the existence of better ones. Rightly, we tend not to pay much attention to unfalsifiable theories, but not everything unfalsifiable is pseudo-science (string theory, for example, although some might argue that!). Moreover, the predictions of some theories simply cannot (yet) be tested: gravitational waves weren’t pseudo-science just because it took a century to verify their existence.

Conversely, many pseudo-sciences make plenty of falsifiable claims (many, indeed, that have already been falsified). Popper’s notion that scientific claims are bounded by their falsifiability is therefore difficult to sustain. What about predictive skill? After all, that is the gold standard of scientific theories. A track record of successful predictions (not just hindcasts) would seem sufficient for a theory to be considered scientific, but it is clearly not necessary. Etc…

Pseudo-science and misinformation

Pseudo-science is often thought of at the level of a theory or body of work, not at the level of a single fact or argument. Misinformation, however, can be much more granular and does not have to be tied to any coherent overall viewpoint. Like pseudo-science, misinformation is often clear in specific examples (claims that vaccines implant microchips! or that they make you magnetic! or that the Space Force is about to stage a coup d’état!) but difficult to define in general.

As above, misinformation cannot be defined simply as anything that is not part of the scientific consensus (too broad), or as anything that is not falsifiable. Of course, it might be tempting to say that misinformation is information that is demonstrably false, but that raises the question of how it should be demonstrated, and of who judges when it has been.

Going further, what about information that is merely misleading? As we’ve seen in climate discussions, it’s easy to find red herrings that are true as far as they go, but don’t go very far at all. Did you know that the climate has changed before? Or that water vapor is the most important greenhouse gas? These claims are not false, but they are often deployed in the service of a false premise (that human-caused climate change is not happening, or is not important). Even here there is a normative (subjective) judgment. Who is to say what is important? And why? And to whom?

Cherry picking, where a specific, often noisy metric is highlighted to counter the larger-scale picture (see much of what has been put out by Steve Koonin), or where a single outlier study is promoted while caveats and the rest of the field are ignored (anything pushed by Bjorn Lomborg), is another example. Such claims are designed to mislead, but it is often the implication of the argument, rather than its surface content, that is the misinformation. And how can you reliably detect what an argument implies for a particular audience? Thus misinformation is often context dependent.

However, the existence of edge cases like these does not mean that one can never say that something is misinformation, in exactly the same way that science being uncertain about some things does not mean that everything is uncertain. Perhaps we should follow Justice Potter Stewart?

“I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.”

Can the impacts of misinformation be minimized?

Despite the difficulties in trying to come up with hard and fast rules that separate misinformation from information, it is still worth pushing back. [And contrary to the opinions of some, pushing back is just as much an exercise of one’s free speech as is the misinformers’ pushing of their misinformation.]

There was a conference last week (#DisInfo2022) on the role disinformation plays in our political discourse and there was a lot of discussion about what it is and what to do about it.

Most pushback, however, is reactive. Someone somewhere comes up with something stupid, and more knowledgeable people respond with facts, or context, or disdain. The pushback rarely gets the same attention as the original push, and the exercise is generally futile except perhaps at the margins, or as a record that can be pointed to later. The disinformation peddler soaks up the attention and builds a reputation as an “owner of the libs” or a “brave truth-teller oppressed by the establishment” or an “unjustly persecuted victim” – a veritable Galileo even!

Perhaps more useful is the idea of inoculation against misinformation (e.g. van der Linden et al., 2017 [1], or Cook et al., 2017 [2]). The idea is that if people know what kind of argument or tactic is being used by the disinformers, they will recognize it when it appears and can dismiss bad arguments as they arise without additional help. I think that at the end of the day this is how most bad arguments die – people develop a kind of “herd immunity” against them, and the disinformers find that those arguments no longer generate the buzz they once did. But like viruses, bad arguments evolve as disinformers search for something that works, and sometimes they come back with a vengeance when everyone thought they were dead and buried. So maintaining “herd immunity” against misinformation is a constant battle. The argument is never settled, because the surface claim was never really the issue; it is almost always a marker of a deeper clash of values.

However, empirical evidence suggests that the most effective way to prevent the spread of misinformation is simply to reduce exposure to it. For example, paying people to watch CNN instead of Fox News seems to work. Deplatforming repeat misinformers does so too. Here is another case.

Freeze peach!

Moderation and deplatforming on social media are often loudly criticized as being against the “spirit” of free speech (though not the First Amendment, which only enjoins the US government). But should the free marketplace of ideas really be a free-for-all in which voices are drowned out by bot farms dumping shit on everyone’s stall? Creating and curating accessible spaces and environments that elevate information above misinformation seems to me an essential part of building an informed democracy (that is what we want, right?). This might not be fully compatible with platforms that are really optimized for engagement rather than speech (that remains to be seen). But it is surely an impossible task if we don’t take the disinformers seriously.

References

  1. S. van der Linden, A. Leiserowitz, S. Rosenthal, and E. Maibach, “Inoculating the Public against Misinformation about Climate Change”, Global Challenges, vol. 1, p. 1600008, 2017. http://dx.doi.org/10.1002/gch2.201600008

  2. J. Cook, S. Lewandowsky, and U.K.H. Ecker, “Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence”, PLOS ONE, vol. 12, p. e0175799, 2017. http://dx.doi.org/10.1371/journal.pone.0175799