You’d be better off reading tea leaves than relying on fiscal forecasting

However well suited financial forecasters may be to predicting fiscal growth, when it comes to major swings in the economy they might as well be reading tea leaves. For instance, after the 2008 global recession, once the US economy had already begun to tank, it took the Survey of Professional Forecasters up to a year to begin adjusting its jobs forecasts downwards. By mid-2009, its year-end jobs forecast had overestimated employment by seven million. A recent critical piece in The Conversation describes Randomised Controlled Trials (RCTs) as highly contextual experiments that use small samples to measure a treatment effect against a placebo. They are, therefore, considered low in generalisability: if an experiment were conducted a second time on a new sample from the same population, the results would probably not be replicated. Yet RCTs account for up to 80% of the evaluations conducted by the World Bank. Lee Blake, writing for FirstRand Perspectives, argues that we may as well rely on goat’s entrails, The Simpsons or satirical cartoons to inform our forecasts for the future. – Sandra Laurence

The future is animated

By Lee Blake

Comparing science and satire to predict the future

Our desire to see into the future is as old as it is obvious: with prescience comes the ability to avoid major catastrophes, as well as capitalise on boons. However, although we all wish we could foresee such trouble and opportunity accurately, I think we are also aware of how much this ability could suck the joy out of existence.

This tradeoff has recently been well portrayed in the FX limited series Devs. Devs follows the fate of a team of developers who use a sufficiently powerful (quantum) computer and a super-slick algorithm to map the entire universe, atom for atom. The team then uses this model to project into the future, to see exactly what will come to pass. The result is that the characters who end up using the machine become severely disillusioned drones, resigned to their murderous futures.

The cost of prediction, however, has never sufficiently deterred us from fortune telling. Haruspicy—the reading of animal entrails—is one of the oldest methods of divining the future. Nearly four thousand years ago, the priests of ancient Babylon consulted clay models of sheep livers to determine many things, from the causes of illness to the coming weather. Bronze models were similarly used in ancient Italy and Greece. The liver was thought to be so important because it was considered the source of blood and, therefore, of life itself.

Using the language of a data scientist, we might say that the liver is a sample to be generalised to all living things. This practice continued for some time, and it is even alleged that Saint Thomas of Canterbury consulted a haruspex (a reader of entrails) prior to invading Brittany in the twelfth century. Far less bloody practices persist today, including astrology, tasseography and palmistry. However, the most relied upon and respected consultants today are our data experts—especially economists and data scientists. But how good are they, really?

However well suited financial forecasters may be to predicting fiscal growth, when it comes to major swings in the economy they might as well be reading tea leaves. For instance, after the 2008 global recession, once the US economy had already begun to tank, it took the Survey of Professional Forecasters up to a year to begin adjusting its jobs forecasts downwards. By mid-2009, its year-end jobs forecast had overestimated employment by seven million. Similarly, political scientists used polling data to wrongly predict a Clinton victory over Trump in key battleground states.

While it is highly likely that these high-tech silicon models are vastly more accurate than their clay and bronze predecessors, the same constraints apply today as they did four thousand years ago. Whether the data are collected and analysed from the entrails of a sheep or from a scraping of Facebook, to predict with any semblance of confidence they must be representative of, and generalisable to, the population from which they are gathered. We might believe that modern social scientists analyse data more carefully than the Babylonian priests did, but one only has to consider the impact of Randomised Controlled Trials (RCTs) in development economics today to become disillusioned.

A recent critical piece in The Conversation describes RCTs as highly contextual experiments that use small samples to measure a treatment effect against a placebo. They are, therefore, considered low in generalisability. This means that if an experiment were conducted a second time on a new sample from the same population, the results would probably not be replicated. Today, RCTs account for up to 80% of the evaluations conducted by the World Bank.
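To see why small samples make replication so shaky, consider a minimal simulation (an illustrative sketch only; the true effect of 0.2 and the 30 subjects per arm are assumptions chosen for the example, not figures from the article or from The Conversation piece):

```python
import random

random.seed(1)

TRUE_EFFECT = 0.2   # modest true treatment effect in the population (assumed)
NOISE_SD = 1.0      # individual variation that dwarfs the effect (assumed)

def run_rct(n_per_arm):
    """One small RCT: estimated effect = mean(treated) - mean(control)."""
    control = [random.gauss(0.0, NOISE_SD) for _ in range(n_per_arm)]
    treated = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(n_per_arm)]
    return sum(treated) / n_per_arm - sum(control) / n_per_arm

# Two independent trials drawn from the very same population
print(f"trial 1 estimate: {run_rct(30):+.2f}")
print(f"trial 2 estimate: {run_rct(30):+.2f}")
# With only 30 subjects per arm, the two estimates routinely disagree in
# size, and sometimes in sign, even though the true effect never changed.
```

Run it a few times with different seeds and the pair of estimates drifts apart, which is the replication problem in miniature.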

A second major aspect of a prediction’s accuracy rests on its sophistication. That is, the more assumptions a prediction requires, the greater the likelihood that an error will creep in. For example, of all the advances being made in tech in the 1990s, to have predicted the success of video conferencing on handheld devices, and of wearable tech more generally, would have required a multitude of assumptions: advances in telecoms, video, silicon microchip capacity, lithium-ion batteries… and, and, and. The same can be said of highly specific events. Few, for instance, could have foreseen Trump’s rise to the presidency. Such a prediction would have required leaps of imagination, especially had it been made in the mid-1990s.
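To put rough numbers on how assumptions compound, here is an illustrative back-of-the-envelope (the 80% per-assumption figure is an assumption for the example, not a claim from the article): if a prediction rests on k independent assumptions, each holding with probability p, the whole chain holds with probability p to the power k.

```python
# Illustrative arithmetic only: how a chain of assumptions erodes confidence.
def chain_confidence(p: float, k: int) -> float:
    """Probability that all k independent assumptions hold, each with probability p."""
    return p ** k

for k in (1, 3, 5, 10):
    print(f"{k:>2} assumptions at 80% each -> {chain_confidence(0.8, k):.0%} overall")
# 1 -> 80%, 3 -> 51%, 5 -> 33%, 10 -> 11%: every added assumption multiplies the risk.
```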

So how is it that a TV show, The Simpsons, predicted all of these things? Yes, you read that right. And, for sure, there is always an element of confirmation bias – we look back at what they guessed right and ignore what they got wrong. Not to mention that there are hundreds and hundreds of episodes to sample from, to the extent that some predictions – even the most outrageous – would eventually be realised.

Yet, hit rewind to a hundred years ago, and English humorist GK Chesterton wrote something very interesting about the powerful capacity of satire to predict future events. Chesterton tells us that the comics on the back page of the newspaper are a better predictor of the future of democracy than any of the social sciences: “The vulgar comic pages are so subtle and true that they are even prophetic… If you want to know what will happen, study [these] pages… as if they were the dark tablets graven with the oracles of the gods… They contain what is entirely absent from all the… sociological conjectures of our time: … some hint of the actual habits and manifest desires of the… people”.

Perhaps Chesterton is pointing to the difference between what is happening and why it is happening. In other words, to predict the future we can collect data about exactly what is happening around us, and then say that it will probably continue, or that it won’t. Alternatively, we can look to the deeper forces we believe lie behind what is happening, and make predictions based on those.

For example, when asked how they managed to predict Trump’s rise to the presidency, The Simpsons writer Dan Greaney said “that [it] just seemed like the logical stop before hitting (rock) bottom. It was consistent with the vision of America going insane.” Greaney, therefore, looked beyond what was happening to why it was happening. He envisioned Americans losing their minds, the logical conclusion of which was the election of the most absurd kind of person to the nation’s highest office – a reality TV star.

Compare this with the aforementioned political scientists. In 2016, nearly every major poll had wrongly predicted a Clinton win over Trump in key battleground states. In a stumbling defence of this failure, Nate Cohn of the New York Times writes that “errors have happened enough in past elections to know that an upset was well within the realm of possibility in 2016.” Cohn then cites three errors that could have contributed to the fiasco, one of which is that “turnout among Mr Trump’s supporters was somewhat higher than expected”. In other words, in hindsight, pollsters should have anticipated more support for Trump. While this is clearly tautological, it is also solely concerned with what was happening at the polls, not why it was happening.

Perhaps we should be more cautious about the claims of big data and of forecasting in general, especially where they ignore theory and what lies beneath apparent happenings. Data scientists could learn a lot from Homer and the haruspex: look beyond the data and get their hands a little dirty.

