When facts matter and when they don’t

7th August 2020 By: Saliem Fakir

It is common wisdom that, if you have evidence, that evidence should speak for itself. But the history of knowledge and ‘factfulness’ shows that the causal relation between evidence and good decision-making is not a direct one.

Prevailing social consensus on scientific or controversial issues can also tilt the lens of believability.

The main message here is that one has to locate evidence not only within the frame of reason but also within psychology, which determines how reason and action will be exercised. This touches on issues such as the cognitive state and the social context of the individual – we are learning more about this as a result of pioneering work in the behavioural sciences. That work is also challenging the rational-actor model so evident in economics.

The cardinal rule of communication that experts often break is to pivot their entire narrative on the assumption that they are viewed as objective and that the facts or evidence they generate are the result of objective processes.

This may well be true but that is not how the world of public opinion works.

During the Covid-19 pandemic, laboratory tests and the identification of the coronavirus provided confirmatory evidence that the virus does exist. We may dispute the origins of the virus – whether it was the result of a Chinese ‘biological warfare’ programme gone awry or whether the virus came from bats or pangolins, breaking natural boundaries and finding a home in a new carrier – humans.

The former is the stuff of conspiracy and the latter the stuff of science and evidence – but that’s for later, when the pandemic is over. Scientific evidence aids the medical fraternity and their actions, but not all evidence aids politics and the interests that come with it. This ought not to be a puzzle, because the world of facts lives simultaneously in different forms of social existence and use.

The frustration that evidence practitioners and scientists face comes from not taking into account the politics of facts. The politics of facts has four main properties.

First, the seriousness of an issue can narrow the bandwidth of decision-making in that it unceremoniously cuts out all other noise and focuses the mind. Decisions taken in this context often go the right way, and evidence has a direct influence on decision-making. It seems a crisis can have a dramatic effect in concentrating attention on what needs to be done.

Second, the frame of mind of the recipient can also determine the degree of receptivity to evidence – the effects of bias can be telling, especially if politicians do not want to hear certain facts. For example, when people are debating vaccination versus no vaccination, confirmation bias can be a barrier that prevents good evidence from countering false premises.

Psychologist Barry Schwartz points out that, when tackling myths, defenders of scientific opinion and authority tend to go at them with more facts.

This approach can backfire. Schwartz’s work shows that, if you juxtapose one set of facts against ‘the wrong facts’, the unintended consequence is that you can give prominence to the myth and perpetuate a cycle that is very hard to claw back from.

Drawing attention to the thing you want to counter can predispose people to doubt who is telling the truth, especially if the evidence is not conclusive in one direction or the other.

Myth busting can distance the public from established scientific authority, especially if the debate is seen to be increasingly polarised and politicised.

In some respects, this is playing itself out in the renewables-versus-coal debate in South Africa. If you add a nationalist and racial lens on top of this debate, it can develop into a logic that is not based on facts but on whether a particular constituency’s interests are being served or not.

In the case of renewables, you can argue till you are blue in the face that renewables are cheaper than coal, gas or nuclear. There will always be certain interest groups and their supporters that will not believe you, or that will engage in denial or misinformation. If renewables are seen as ‘foreign’ and are pitted against coal, which is seen as a national resource, national aspirations can be a strong emotive force that is not easy to dislodge on the basis of ‘factfulness’ alone.

Third, the relation between evidence and decision-making has to take into account the place where decisions are made. This plays itself out very clearly on the energy infrastructure front. Energy security may dictate a certain rational path, but institutional incentives and path dependency may dictate otherwise. Decisions will either be delayed or never made, whether because of political pressures or simply because the right moment for taking certain types of decisions does not exist.

The fourth issue is the dislodging of true evidence with (deliberate) misinformation, so as to delay the acceptance of that evidence and sow doubt about compelling new scientific findings. We see this, for instance, in debates on cancer-causing pollutants, cigarette smoking, climate change and other issues that affect the sale of products, commodities or markets. Deep economic interests can be resistant to change, not because the truth is not known, but because the truth hurts.

The picture painted here casts a shadow over the relationship between expert evidence and decision-making. It shows that we need a much more nuanced picture of facts and reasoning. The belief that facts translate into right reason rests on a misplaced faith in the idea that all objective forms of reasoning lead to objective outcomes. The higher we go up the decision-making tree, the more politics and facts can clash.