How scientists collect positive evidence rather than test theories

My latest Mind and Matter column in the Wall Street Journal:

There’s a myth out there that has gained the status of a cliché:
that scientists love proving themselves wrong, that the first thing
they do after constructing a hypothesis is to try to falsify it.
Professors tell students that this is the essence of science.

Yet most scientists behave very differently in practice. They
not only become strongly attached to their own theories; they
perpetually look for evidence that supports rather than challenges
their theories. Like defense attorneys building a case, they
collect confirming evidence.

In this they’re only human. In all walks of life we look for
evidence to support our beliefs, rather than to counter them. This
pervasive phenomenon is known to psychologists as “confirmation
bias.” It is what keeps all sorts of charlatans in business, from
religious cults to get-rich-quick schemes. As the
philosopher/scientist Francis Bacon noted in 1620: “And such is the
way of all superstition, whether in astrology, dreams, omens,
divine judgments, or the like; wherein men, having a delight in
such vanities, mark the events where they are fulfilled, but where
they fail, though this happen much oftener, neglect and pass them
by.”

Just as hypochondriacs and depressives gather ample evidence
that they’re ill or ill-fated, ignoring whatever implies they are
well or fortunate, so physicians managed to stick with ineffective
measures such as bleeding, cupping and purging for centuries,
because the body’s natural recovery in most cases provided ample
false confirmation of the efficacy of worthless cures. Homeopathy
relies on the same phenomenon to this day.

Moreover, though we tell students in school that, as Karl Popper
argued, science works by falsifying hypotheses, we teach them the
very opposite: to build a case by accumulating evidence in support
of an argument.

The phrase “confirmation bias” itself was coined by a
British psychologist named Peter Wason in 1960. His classic
demonstration of the problem was to give people the
triplet of numbers “2-4-6” and ask them to propose other triplets
in order to discover what rule the first triplet followed. Most
people proposed a series of even numbers, such as “8-10-12,” and,
on being told that yes, these numbers also obeyed the rule, quickly
concluded that the rule was “ascending even numbers.” In fact, the
rule was simply “ascending numbers.” Proposing odd numbers would
have been more illuminating.
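
To see why, consider a small simulation of the task (a hypothetical
Python sketch, not part of the original column; the function names
and test triplets here are my own). The hidden rule accepts any
ascending triplet, so triplets chosen to fit the narrower “ascending
even numbers” guess can only ever confirm it; only triplets chosen
to violate the guess can expose the error.

```python
# Hypothetical sketch (not from the column) of Wason's 2-4-6 task:
# the experimenter's hidden rule is simply "ascending numbers", but
# triplets chosen to fit the subject's guess can never reveal that.

def hidden_rule(triplet):
    """The experimenter's actual rule: any strictly ascending triplet."""
    a, b, c = triplet
    return a < b < c

def guessed_rule(triplet):
    """The subject's hypothesis: ascending even numbers."""
    a, b, c = triplet
    return a < b < c and all(n % 2 == 0 for n in triplet)

# Confirmatory strategy: test only triplets that FIT the guess.
confirming = [(8, 10, 12), (20, 22, 24), (2, 44, 60)]
# Falsifying strategy: test triplets that VIOLATE the guess.
falsifying = [(1, 3, 5), (2, 4, 7), (6, 4, 2)]

for t in confirming + falsifying:
    actual = hidden_rule(t)       # what the experimenter says
    predicted = guessed_rule(t)   # what the subject's guess predicts
    verdict = "guess survives" if actual == predicted else "guess refuted"
    print(t, "-> rule says", actual, "| guess predicts", predicted,
          "|", verdict)
```

Running the sketch, every even ascending triplet leaves the wrong
hypothesis standing; only the odd ascending triplets, which the guess
wrongly rejects, refute it. However many confirming tests are tried,
they can never distinguish the guessed rule from the real one.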

An example of how such reasoning can lead scientists astray was
published last year. An experiment had seemed
to confirm the Sapir-Whorf hypothesis that language influences
perception. It found that people reacted faster when discriminating
a green patch from a blue one than when discriminating two green
patches (of equal dissimilarity) or two blue patches, but that they
did so only if the patch was presented in the right visual field,
which feeds the brain’s left hemisphere, where language resides.

Despite several confirmations by other teams, the result is now
known to be a fluke, following a comprehensive series of
experiments by Angela Brown, Delwin Lindsey and Kevin Guckes of
Ohio State University. Having different words for two colors makes
the difference between them no quicker to spot.

One of the alarming things about confirmation bias is that it
seems to get worse with greater expertise. Lawyers and doctors (but
not weather forecasters, who are regularly mugged by reality) become
more confident in their judgment as they become more senior,
requiring ever less positive evidence to support their views and
ever more negative evidence to abandon them.

The origin of our tendency toward confirmation bias is fairly
obvious. Our brains were built not to find the truth but to make
pragmatic judgments, check them cheaply and win arguments, whether
we are in the right or in the wrong.

The first of three columns on the topic of confirmation
bias.

By Matt Ridley | Tagged: rational-optimist, wall-street-journal