# The 8 Most Important Statistical Ideas: Counterfactual Causal Inference

This article is the first in a series of posts where I dive into the 8 most important statistical ideas of the past 50 years, as reviewed by Gelman & Vehtari (2021). I invite you to join me on this learning journey as we delve into the first concept: **counterfactual causal inference**.

## Introduction to the series: The 8 Most Important Statistical Ideas

The last 50 years have seen important advancements in the field of statistics, shaping the way we understand and analyse data. Gelman & Vehtari (2021) reviewed the 8 most important statistical ideas of the past 50 years.

As part of my learning journey, I decided to deepen my understanding of these 8 ideas and share my findings with you. In each article, you'll find an introduction to the concept at hand, along with some learning resources. So, if you're keen to deepen your grasp of statistics, you're in the right place!

## Counterfactual Causal Inference

"Correlation doesn't imply causation."

This truism raises the question: is there any way to identify causation?

"causal identification is possible, under assumptions, and one can state these assumptions rigorously and address them, in various ways, through design and analysis."

– Gelman & Vehtari (2021)

Different fields (econometrics, epidemiology, psychology…) have seen the appearance of various methods for **causal inference** (the process of determining the cause-and-effect relationship between variables), with a common thread: the modelling of causal questions in terms of **counterfactuals**.

A counterfactual is something that didn't happen but *could have*. Here's an example of a counterfactual question: "What would your life be like if you had taken that job offer you declined?"

It's a counterfactual because it's inquiring about a potential outcome that did not occur. Indeed, another way of thinking about counterfactuals is to consider them **potential outcomes**.

The methods of causal inference aim to quantify the answers to such questions using statistical methods, causal models, and other techniques. With causal inference, we can address questions such as:

- Does the early introduction of allergenic foods reduce the risk of developing allergies in children?
- What is the effect of drug decriminalisation on substance abuse rates?
- Should I get a pet?
- What if I had exercised regularly for the past six months?

Let's explore this last example.

Ideally, we would use a **Randomised Controlled Trial** (RCT) to answer this question. We would randomly assign people to two groups: one would exercise regularly for six months, and the other would not. Both groups would record their mood periodically.

Why can't we just look at people who exercise regularly and compare their mood to those who don't? **Confounding variables** are why.

A **confounder** is a variable that influences both the treatment and the outcome. For example, people who exercise regularly might have more free time. Wouldn't your mood improve with more free time? And wouldn't this extra time also make it easier for you to exercise regularly? You get the idea: the confounding variable (free time) can influence both the treatment (exercise) and the outcome (mood).

Thus, an RCT has two key advantages: it removes bias from confounding variables by isolating the effects of the treatment/intervention through randomisation, and it allows us to quantify the uncertainty in our estimates. The **counterfactual outcome** for each person who exercised is estimated to be the average mood of the group who did not exercise, and vice versa.
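A toy simulation can make the contrast concrete. In the sketch below, every number (including the +2-point "true effect" of exercise on mood) is invented for illustration: free time boosts mood *and* makes exercise more likely, so a naive observational comparison overstates the effect, while random assignment recovers it.

```python
import random

random.seed(42)

# Toy simulation, all numbers invented for illustration: free time (the
# confounder) raises mood and makes exercise more likely, while the true
# causal effect of exercise on mood is +2 points.
TRUE_EFFECT = 2.0

def mood(exercises, free_time):
    # Mood depends on free time (the confounder) and on exercise itself.
    return 3.0 + 1.5 * free_time + (TRUE_EFFECT if exercises else 0.0) + random.gauss(0, 0.5)

# Observational world: people with more free time exercise more often.
obs = []
for _ in range(10_000):
    free_time = random.uniform(0, 4)
    exercises = random.random() < free_time / 4  # confounded assignment
    obs.append((exercises, mood(exercises, free_time)))

# RCT world: exercise is assigned by a coin flip, independent of free time.
rct = []
for _ in range(10_000):
    free_time = random.uniform(0, 4)
    exercises = random.random() < 0.5  # randomised assignment
    rct.append((exercises, mood(exercises, free_time)))

def diff_in_means(data):
    treated = [m for ex, m in data if ex]
    control = [m for ex, m in data if not ex]
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"Naive observational estimate: {diff_in_means(obs):.2f}")  # inflated by free time
print(f"RCT estimate:                 {diff_in_means(rct):.2f}")  # close to the true 2.0
```

Because the coin flip is independent of free time, the two RCT groups have the same amount of free time on average, so the difference in means isolates the effect of exercise.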

But RCTs are not always possible. Sometimes they're unethical^{1}, too expensive, or take too long. In these cases, we can use counterfactual causal inference to estimate the effect of an intervention. There are several methods for this, such as matching, difference-in-differences, or instrumental variables estimation.
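To give a flavour of one of these methods, here is a minimal difference-in-differences sketch based on the decriminalisation question from earlier. The two cities and all the rates are invented for illustration: one city changes its policy, a comparable city does not, and the untreated city's trend stands in for the counterfactual.

```python
# Minimal difference-in-differences sketch; all numbers are invented for
# illustration. One city decriminalises a substance, a comparable city does
# not; we observe abuse rates (per 1,000 people) before and after the change.
before = {"treated_city": 10.0, "control_city": 12.0}
after = {"treated_city": 9.0, "control_city": 13.0}

# Each city's change over time:
treated_change = after["treated_city"] - before["treated_city"]  # -1.0
control_change = after["control_city"] - before["control_city"]  # +1.0

# The control city's change estimates what would have happened to the
# treated city without the policy (the counterfactual trend).
did_estimate = treated_change - control_change
print(f"Difference-in-differences estimate: {did_estimate:+.1f} per 1,000")
```

The key (and untestable) assumption is that the two cities would have followed parallel trends absent the policy change.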

In our example, we could use **matching** in lieu of an RCT. Namely, we would gather (observational) data from various individuals, and we would look for pairs of people who are similar in all aspects (e.g. age, diet, sleep patterns, amount of free time…) except for their exercise habits. These extra variables would help us control for confounders. With this data, we would compare the mood of these pairs to answer our question. Much simpler! Let's take a look:

| Subject | Age | Diet Quality | Hours of Sleep | Free Time (hours) | Exercise (hours/week) | Mood Score |
|---|---|---|---|---|---|---|
| Adam | 25 | Good | 8 | 2 | 4 | 8 |
| May | 24 | Good | 8 | 2.5 | 0 | 6 |
| Anton | 52 | Average | 6.5 | 1 | 4 | 8 |
| Kashika | 54 | Average | 7 | 1 | 0 | 6 |
| Oluchi | 35 | Poor | 6 | 3 | 4 | 7 |
| Kílian | 32 | Poor | 6.5 | 3 | 0 | 5 |

Take the illustrative table above. We could match 3 pairs of subjects based on all variables except exercise. Can you see a pattern in terms of mood?

By comparing these matched pairs, we could estimate the causal effect of exercise on mood, answering our question through counterfactual causal inference. A (very specific and simplified) counterfactual answer would be "if May exercised 4 hours per week, her mood would be ~2 points higher".
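As an illustrative sketch, the matching step can be written out in code. The subjects and scores come straight from the table above; the matching rule (same diet quality, then closest age) is a simplifying assumption made for this toy example.

```python
# Matching sketch using the illustrative table from the article. The pairing
# rule (same diet-quality stratum, then nearest age) is a simplification.
subjects = [
    # (name, age, diet, hours_sleep, free_time, exercise_hours, mood)
    ("Adam", 25, "Good", 8.0, 2.0, 4, 8),
    ("May", 24, "Good", 8.0, 2.5, 0, 6),
    ("Anton", 52, "Average", 6.5, 1.0, 4, 8),
    ("Kashika", 54, "Average", 7.0, 1.0, 0, 6),
    ("Oluchi", 35, "Poor", 6.0, 3.0, 4, 7),
    ("Kílian", 32, "Poor", 6.5, 3.0, 0, 5),
]

treated = [s for s in subjects if s[5] > 0]   # exercisers
control = [s for s in subjects if s[5] == 0]  # non-exercisers

def match(person, pool):
    # Match within the same diet-quality stratum, then by closest age.
    candidates = [c for c in pool if c[2] == person[2]]
    return min(candidates, key=lambda c: abs(c[1] - person[1]))

diffs = []
for t in treated:
    c = match(t, control)
    diffs.append(t[6] - c[6])  # mood difference within the matched pair

effect = sum(diffs) / len(diffs)
print(f"Estimated effect of exercise on mood: +{effect:.1f} points")  # +2.0
```

Each exerciser is paired with the non-exerciser most similar to them, and the average within-pair mood difference serves as the estimate of the causal effect.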

## Bringing It All Together

In this article, we've explored the world of causal inference. While correlation can give us valuable clues, it's through causal inference that we come closer to understanding the "why" behind phenomena. Understanding causality is crucial for making informed decisions and implementing effective interventions in diverse fields.

We've seen how RCTs are the gold standard for causal inference, but that they're not always possible. In those cases, we can use counterfactual causal inference.

Counterfactual causal inference is used in healthcare (to estimate the effect of a treatment), education (to determine the effectiveness of a new curriculum), technology (to see how users react to a new feature), and many other fields.

"He who's been able to learn the causes of things is happy,
and has set all fear, and unrelenting fate, and the noise
of greedy Acheron, under his feet."

"Felix qui potuit rerum cognoscere causas
Atque metus omnes, et inexorabile fatum
Subjecit pedibus, strepitumque Acherontis avari."

– Virgil

This quote from Virgil (29 BCE) reminds us that the quest to understand the "causes of things" is not just an academic exercise; it's an ancient human pursuit that can bring us closer to a harmonious understanding of our world and our place in it.

That's it for today! In the next article, we'll learn about **bootstrapping and simulation-based inference**.

## Learning Resources

**Fun project**: Spurious Correlations is a project by Tyler Vigen that showcases strong (but spurious) correlations between seemingly unrelated variables, like the number of people who drowned by falling into a pool and the number of films Nicolas Cage appeared in each year.

**Podcast**: Alan Hájek on puzzles and paradoxes in probability and expected value – 80,000 Hours Podcast (2022). A deeply interesting discussion on probability, expected value, and counterfactuals. The conversation delves (starting at the 02:16:38 mark) into the nuances of counterfactual reasoning and the need for precision in establishing logical rules for them. Hájek argues that most counterfactuals are essentially false but "approximately true and close enough to the truth that they convey useful information".

**Academic paper**: Causal inference based on counterfactuals – Höfler (2005). A quick overview of the counterfactual and related approaches. Reviews imperfect experiments, adjustment for confounding, time-varying exposures, competing risks, and the probability of causation.

**Book**: The Book of Why – Judea Pearl (2018). A deep dive into the science of causal inference and discovery, exploring theories, methods, and real-world applications. Pearl argues that all potential outcomes can be derived from Structural Equation Models, and lays out the problems with other approaches like matching.

^{1}

Sadly, there are far too many real examples of unethical research. See the Tuskegee syphilis experiment, the Guatemala syphilis experiments, or the many examples of pharmaceutical companies failing to respect the core principles of ethical research in African countries. Counterfactual causal inference methods can offer an alternative when conducting an RCT is ethically problematic.