🔒 WORLDVIEW: How to read Covid-19 science like a scientist

These days, everyone is an epidemiologist with a speciality in Covid-19. Everyone has a PhD in Covid-19 bioscience from WhatsApp University or the medical school at the University of Facebook. There is so much bad Covid-19 science circulating on social media that I can’t bear to log on anymore.

Now, this is natural enough. Pandemics are frightening and lockdown is frustrating and economically costly. It is very natural for us to want to understand what is happening so that we can feel a greater sense of control.

It’s also very human to seek out information that confirms our beliefs (such as the belief that lockdowns are a bad idea). There’s nothing necessarily wrong with this; we all do it – me included. It can be comforting to believe that we have found a solution to the terrifying problems we face. But reading a piece of research from the position of wanting to believe it is a really bad way to understand reality. All real science starts with scepticism and an open, curious mind.

This hasn’t been much in evidence lately. Instead, my blood has been set to boil by people who read the top line of a piece of research and immediately accept it as the truth – provided it supports what they want to believe, of course.

To be fair, I think that in many cases, the underlying problem is that people don’t always understand how to read and digest a published scientific study. It’s not like reading a fact on a Snapple bottle top – it’s something difficult you must learn how to do. I know this because I spent many years at a top Ivy League US university doing a PhD and learning, slowly, how to read and understand research.

So, for many people who sincerely want to learn about what’s happening in reality, the problem is that they lack some of the practical tools they need to get a handle on the information that’s emerging.

Of course, for other people, the problem isn’t a lack of scientific tools, it’s a stubborn desire to believe what they want to believe. For those folks, the only evidence that seeps through is confirmatory – any contrarian evidence is immediately dismissed or ignored. For them, there’s no real hope. When people are taking in information through a filter, no amount of reason can help them see clearly.

But for those who are sincerely curious and want to avoid false hope and false fear, there are a few easy ways to get to grips with even complicated science.

  1. Start with scepticism, trust the broader scientific process

Let’s get this out of the way. As I said, all science starts with scepticism. But that doesn’t mean being sceptical of our ability to do science! It means looking at each piece of research as a small part of a greater whole. Throughout the history of our advancement from superstition to modern medicine and air travel, science has progressed on the principle that, while any individual study may be wrong or partial or flawed, over time and taken together, we are capable of growing our knowledge and understanding.

No single Covid-19 study is the last word. Every study adds a little bit to our knowledge of what’s happening – sometimes by pointing out where previous studies went wrong. But every study has its flaws and limitations. We should never put a cult-like faith in a single piece of evidence. But we should absolutely look at the broad balance of evidence and accept its conclusions – always keeping in mind they may change as we learn more.

This flexible mindset is a challenge to maintain, especially when things are scary, or we already have strong beliefs about a topic. But without this mindset, you cannot – absolutely cannot – read and absorb research.

  2. Read the results, but always look at the limitations

Typical research studies are made up of the following components:

  • A literature review – This looks at what we already know (or think we know) about a topic.
  • A statement of the problem or question under investigation – This is what the research is trying to find out, maybe “Does the BCG vaccine reduce the severity of Covid-19?” or “How many people in Santa Clara have been infected with the coronavirus?”
  • Methodology – This is the how piece: how is the researcher going to find evidence to answer the question? For nearly all Covid-19 studies, this is the biggest source of problems, because we do not yet have good ways to study this virus. More on this later.
  • Results – This is the answer to the question. But note: this is the answer the researchers could come up with given the limitations of their methodology. It is not THE answer; it’s AN answer. Imagine I asked you the population of Malawi but didn’t allow you to use the internet. You might find an old Encyclopaedia Britannica from 1987 on your shelf, take the population from there, and increase it by 50% to guess the population today. Your answer is probably not very good, although it’s probably also not completely wrong – it’s the best you could do given your methodological limitations. More broadly, even the answer you find online probably isn’t exactly right, for the same reason. There is a definite, real number of people in Malawi, but unless we go and count them all, individually, today, any figure is an estimate – a guess based on, say, their last census two years ago plus a guess about births and deaths since then. Our guess could be very close to the truth, but it will always be an approximation, because we can almost never go and find the exact, real truth. This is the same for all research.
  • Limitations – This is the most important piece of the research, believe it or not. This is the bit where the researchers explicitly tell you why they may be wrong. They explain the limitations of their methodology, of their data, of their instruments. You should never read research without reading and understanding the limitations because they really matter. And they matter because:
  3. Some research studies are better than others

Not all research is created equal. You may have heard that the gold standard for research on matters impacting humans is the double-blind, placebo-controlled randomised trial. This is the best we can do, as humans, to find answers to research questions.

As studies get further away from this standard, they get worse – they are more subject to error and false positives or negatives, as well as to confounding effects (when you think one thing is causing your result but it’s actually another – you may think BCG is causing lower coronavirus infection rates, but it may just be that a lack of testing is hiding infections). Badly designed studies give us evidence that is not generalisable – it cannot be extended from the people studied to the rest of the population.

Imagine I told you that I did a survey of 1,000 people and found that 98% of them think that Justin Bieber is the greatest musical artist of all time. You may be suspicious of my results – and you should be! It turns out I did survey 1,000 people, but I surveyed them in the parking lot of a stadium after a Justin Bieber concert. In other words, I surveyed Bieber fans. Thus, the results of my survey are not representative – they cannot be generalised to other people who aren’t Bieber fans.
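The Bieber survey can be sketched as a toy simulation. All the numbers here – population size, the share of fans, the approval rates – are made up purely for illustration:

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people, 5% of whom are Bieber fans.
# Fans approve of Bieber at 98%; everyone else at 10%. All figures invented.
population = ["fan" if random.random() < 0.05 else "other" for _ in range(100_000)]

def approves(person):
    # A fan approves with probability 0.98, a non-fan with probability 0.10
    p = 0.98 if person == "fan" else 0.10
    return random.random() < p

# Biased sample: 1,000 people drawn only from the fans (the stadium parking lot)
fans = [p for p in population if p == "fan"]
biased = [approves(p) for p in random.sample(fans, 1000)]

# Representative sample: 1,000 people drawn at random from the whole population
fair = [approves(p) for p in random.sample(population, 1000)]

print(f"parking-lot sample approval: {sum(biased) / 1000:.0%}")  # ~98%
print(f"random sample approval:      {sum(fair) / 1000:.0%}")    # ~14%
```

Both surveys ask 1,000 people the same question; only the sampling differs, and that alone moves the result from roughly 14% to roughly 98%.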

This is a HUGE issue for Covid-19 studies. For example, consider the question of how many people have been infected by the coronavirus. Right now, countries are testing millions of people. But they are – for the most part – only testing people with symptoms. And the types of tests being used only work on people with an active viral load that meets a certain threshold. So, people who had symptoms three months ago or who don’t have bad symptoms or who are in the early stages aren’t getting tested, and people in the early or late stages of the disease who have a low viral load aren’t showing up as positive. We also know that the tests aren’t very good – many people test negative and then test positive a day later. So, the number of confirmed cases isn’t a very helpful guide to how many people actually have coronavirus infections.

Now, many people are putting their faith in antibody tests – these are designed to find out who had coronavirus infections in the past and fought them off successfully, producing Covid-19-specific antibodies.

But let’s not get too carried away, because there are problems here too. First, these tests aren’t necessarily very reliable. Some have been reported to have a sensitivity as low as 20% – they only catch 20% of actual cases. In other words, they generate a lot of false negatives (and tests with poor specificity generate false positives too). Second, only a handful of antibody studies are being done on random groups of people. Many instead recruit participants through social media. The problem here is pretty obvious – it’s hard to get coronavirus tests in most places, so people who have had symptoms and couldn’t get tested will probably sign up for antibody tests at a greater rate than healthy people, looking for confirmation of their suspicions that they had Covid-19. This would give us a high incidence of positive antibody tests that wouldn’t be generalisable to people who never had symptoms.
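To see why imperfect tests are so treacherous when few people have actually been infected, here is a short calculation using Bayes’ rule. The sensitivity and specificity figures are hypothetical, not taken from any actual antibody test:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical antibody test: 90% sensitivity, 95% specificity
for prev in (0.01, 0.05, 0.20):
    ppv = positive_predictive_value(0.90, 0.95, prev)
    print(f"prevalence {prev:.0%}: a positive result is real {ppv:.0%} of the time")
```

With these assumed figures, at 1% prevalence only about 15% of positive results are true positives; at 20% prevalence, about 82% are. The test hasn’t changed – only how common the disease is. This is why a non-random sample that inflates apparent prevalence can badly distort the headline result.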

Again, we have to look at the balance of evidence. So far, it seems very likely that we have dramatically undercounted coronavirus infections, but we don’t really know by how much. As antibody tests become more reliable (able to properly identify positive and negative cases) and researchers use them on truly random groups of people, we will get a better sense of how widespread coronavirus truly is. At this point, estimates are varied – somewhere between 2% and 15% of populations in Europe and the US have been exposed. We eagerly await better data.

As far as deaths go, we know there is also undercounting. China has already upped its Wuhan death count by 50%, and we know that many people dying at home in New York and the UK are not being counted. The only thing we know for sure, from evidence in Wuhan, New York, and Northern Italy, is that when coronavirus outbreaks are allowed to spread, hospitals become overwhelmed and thousands of people die.

Right now, governments are making policy based on the best available science – at least in most cases. They will make errors. Things are moving fast and there is a lot of uncertainty. As new evidence emerges, governments should be ready to alter policy. Given what’s at stake, however, good leaders will be erring on the side of caution. When faced with the choice between disruption and a pile of corpses, even if you aren’t sure how certain the pile of corpses outcome is, it is better to choose disruption.