A matter of evidence
“This experimental evidence does not match our theoretical prediction. We shall admit its failures and attempt a new theory with neither shame nor pride.” That’s how good science should work. It’s what we strive for. But it’s not what always happens in practice. Sadly, the following has become all too familiar: “This evidence does not match our theoretical prediction. Let’s pick out the best bits and publish anyway, or we’ll be accused of not doing our jobs properly and lose funding.” Outside of science you might get: “It’s a shame the evidence isn’t what we hoped for. Perhaps if we put trust in groundless hope, then it might change tomorrow.” Or, worse still: “If the evidence suits us, then use it. If it partly suits us, then twist it. If it doesn’t suit us, then either ignore it or debunk it without backup.” And, of course, there’s the not-too-uncommon: “To hell with evidence!”
Why is “evidence” such an emotive, even political, concept? For one thing, it’s easy to find evidence to suit your needs. For example: on 4 June 2020, the temperature in Bristol was below the long-term average for that day of the year. This is a piece of evidence that could be used to suggest that climate change is not happening. It might well be an inconvenient truth that evidence to the contrary is overwhelming, but it’s still easy to find some evidence either way.
Consider the other side to evidence: our theoretical understanding of the world. In our early years of life, we learn basic mechanics at a considerable rate, and we do so mostly independently. Before our first birthday, we learn that if you push a block on a surface, then it moves. Not much later, we experiment with rough and smooth surfaces and see moving blocks come to a halt. We are using these experiments to build rudimentary empirical models of the world, and these models tend to be just good enough for what we need: “live” models in our brain to help us spear a mammoth, or push a block along an inclined plane. At school, we are taught the formal theories of classical mechanics, which let us predict numerically the motion of said blocks. At university, physics students learn how relativity and quantum mechanics affect motion at extreme speeds or scales.
Does this succession of progressively more sophisticated modelling mean we disregard the previous, simpler methods? Of course not. Even the most enthusiastic physicists would not expect relativity to be considered when checking their car speedometers. For the mechanics case above, we have three layers of sophistication at our disposal: an empirical model for day-to-day life, a classical theoretical model for most engineering tasks, say, and relativistic/quantum theories for extreme situations. I’m sure we could subdivide these layers further, but the point is this: we choose the level of sophistication that matches the needs of the problem at hand. Yet this idea of simplified models of a given phenomenon causes much confusion – just as the significance of different forms of evidence does. It has a similar propensity to carve chasms between science and the wider world.
When it comes to theory and simulation, we face similar communication difficulties to those for evidence. A recent article in the Washington Post, published during the early days of the COVID-19 pandemic, presented successively sophisticated numerical simulations of the spread of the virus, subject to varying social-distancing measures. The simulations were all grossly simplistic – modelling people as spheres bouncing against each other in a confined plane – but it was a fine article that made its limitations explicit.
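To make the simplicity of such models concrete, here is a minimal sketch of the *kind* of toy simulation the article describes – dots bouncing in a box, passing on an infection by contact. This is not the Washington Post’s actual code; every parameter (number of agents, contact radius, infectious period, the `stationary_frac` stand-in for social distancing) is an illustrative assumption.

```python
import random

def simulate(n=150, stationary_frac=0.0, steps=400, speed=0.01,
             radius=0.03, infect_time=60, seed=1):
    """Toy agent-based epidemic in a unit box.

    Agents bounce off the walls; an infected agent infects any
    susceptible agent within `radius`, and recovers after
    `infect_time` steps. `stationary_frac` is the share of agents
    held still - a crude stand-in for social distancing.
    Returns (peak simultaneous infections, final attack rate).
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    y = [rng.random() for _ in range(n)]
    n_still = int(n * stationary_frac)
    vx = [0.0 if i < n_still else rng.uniform(-speed, speed) for i in range(n)]
    vy = [0.0 if i < n_still else rng.uniform(-speed, speed) for i in range(n)]
    state = [0] * n            # 0 = susceptible, 1 = infected, 2 = recovered
    timer = [0] * n
    state[n - 1] = 1           # one moving agent starts infected
    peak = 1
    for _ in range(steps):
        # Move agents and reflect them off the walls of the box.
        for i in range(n):
            x[i] += vx[i]
            y[i] += vy[i]
            if x[i] < 0 or x[i] > 1:
                vx[i] = -vx[i]
                x[i] = min(max(x[i], 0.0), 1.0)
            if y[i] < 0 or y[i] > 1:
                vy[i] = -vy[i]
                y[i] = min(max(y[i], 0.0), 1.0)
        # Naive O(n^2) contact check: infection spreads on proximity.
        for i in range(n):
            if state[i] != 1:
                continue
            for j in range(n):
                if state[j] == 0 and \
                   (x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2 < radius ** 2:
                    state[j] = 1
            timer[i] += 1
            if timer[i] >= infect_time:
                state[i] = 2   # recovered, no reinfection
        peak = max(peak, state.count(1))
    attack_rate = sum(s != 0 for s in state) / n
    return peak, attack_rate
```

Running `simulate(stationary_frac=0.0)` against `simulate(stationary_frac=0.8)` lets you compare the epidemic’s peak under free movement versus heavy distancing – which is precisely the “grossly simplistic” but instructive comparison the article made: people are not bouncing spheres, yet the qualitative flattening of the curve survives the simplification.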
According to the scientific method, the response might be: “The simulations show greatly reduced loss of life for the case of moderate-to-extensive distancing. The model’s assumptions are huge so we must improve it, but the general conclusions are compatible with other works so regard this as further evidence in support of social-distancing measures, subject to the results of future improved simulations, manageable financial impacts, and other considerations.” However, a well-meaning, if somewhat naive, reader might argue: “Well that was interesting, but people aren’t anything like little balls floating in space! Clearly this doesn’t prove anything.” A more hostile reader might contend: “This is ridiculous; no wonder people are sick of experts.”
If a science communicator over-simplifies evidence or theory, then they might be accused of “dumbing down” or presenting pointless material. If an attempt is made to convey the intricacies, including (heaven forbid) maths, then the article becomes inaccessible to most people. If it is claimed that something is proven beyond doubt (stating with 100.0000% certainty that climate change is real and anthropogenic, for example), then the article is probably dubious itself. Yet if you state that something is not technically proven in a strict scientific sense, but faces overwhelming evidence and consensus among experts (climate change again, for example), then it’s “just a theory”.
So how best to communicate science? Call it a campaign problem: (a) present evidence, (b) show that a solution exists (assuming it does). The poor old harbinger of bad news who only ever does (a) faces much hard talk: “Give me solutions – or votes – not problems!” Perhaps the world shouldn’t be that way. But it is. And scientists need to accept that. Just as everyone needs to accept the evidence for climate change and the necessity of pandemic-mitigation strategies.