Field Experiments

Alan Gerber and Donald Green

Rigorous quantification of uncertainty is a hallmark of scientific inquiry


What I like

  • I have a deeper respect for the scientific process after reading this book

  • The book is very thorough and dives deep into complex, hard-to-understand topics

  • Many great examples from real scientific studies, especially in areas like politics

What’s missing

  • I wish there were simpler explanations of the topics; many were hard to wrap my head around

  • It was difficult to understand how all of the topics in the book were connected

  • The statistical formulas were very complex and intimidating


Key Topics

Experiments, Measurement, Methods, Statistics, Science, Creating Value for Research


Review

In 2022, our team was conducting research in UK train stations with the goal of answering some weighty, complicated questions. We ended up running experiments to test which types of shoppers were going to different types of stores: grocery stores or convenience stores. We also ran tests to understand how long they were shopping for and for what. At the end of each day of research, we would debrief together to discuss and share our results. It was a great experience, but it left me wanting to learn more about how to run proper field experiments, which is what brought me to this book.

This is not an easy book to read; the contents are dense and jam-packed with complex statistics. I could probably only grasp about 25% of the book's contents by the end of reading it. But I was able to take in the bigger ideas and high points, namely that running real scientific research is a complicated process that demands attention to detail. The book will step you through everything you need to know about experimental design (though I wish it had a chapter dedicated to recruitment). It covers the best practices for, and importance of, random assignment, which is what separates scientific from non-scientific research. It explains how to organize your data, schedule your outcome measurements, and label pieces of data to support analysis. It also explains how covariates can be used to improve the precision of your average treatment effect (ATE) estimate, and it walks through issues like non-compliance and attrition and how to account for them when determining your ATE. There is so much in this book to unpack, and if I ever need to read up on how to run a scientific experiment, I'll likely return to it. It was a hard book to read, as some of these concepts were explained mostly through statistics, which is not something I'm well versed in.
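
To make the ATE and covariate ideas above concrete, here is a minimal sketch on simulated data (not an example from the book): a plain difference-in-means estimate of the ATE with its standard error, followed by a covariate-adjusted estimate from a simple regression. All variable names and numbers are illustrative assumptions.

```python
# Minimal sketch: difference-in-means ATE vs. a covariate-adjusted estimate.
# Simulated data only; names like `baseline_score` are illustrative, not from the book.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Pre-treatment covariate (e.g. a baseline survey score) that predicts the outcome.
baseline_score = rng.normal(50, 10, n)

# Random assignment to treatment (1) or control (0).
treated = rng.integers(0, 2, n)

# Simulated outcome: true treatment effect of 2.0, plus covariate signal and noise.
outcome = 2.0 * treated + 0.8 * baseline_score + rng.normal(0, 5, n)

# 1) Difference-in-means estimate of the average treatment effect (ATE) and its SE.
ate_dim = outcome[treated == 1].mean() - outcome[treated == 0].mean()
se_dim = np.sqrt(outcome[treated == 1].var(ddof=1) / (treated == 1).sum()
                 + outcome[treated == 0].var(ddof=1) / (treated == 0).sum())

# 2) Covariate-adjusted estimate: regress the outcome on treatment and the
#    centered covariate; the treatment coefficient is the adjusted ATE.
X = np.column_stack([np.ones(n), treated, baseline_score - baseline_score.mean()])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"Difference in means: {ate_dim:.2f} (SE {se_dim:.2f})")
print(f"Covariate-adjusted : {coef[1]:.2f}")
```

Because the covariate soaks up much of the outcome's variance, the adjusted estimate typically comes with a noticeably smaller standard error, which is the kind of precision gain the book describes.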

What I did find helpful and enjoyable in this book were the stories about actual studies that illustrated the topics Green was introducing or elaborating on. The book references many interesting studies in politics, health, and education, and the lessons that can be drawn from them. One of the biggest lessons I took from this book was about bias and how quickly it can creep in when assignment to treatment and control conditions is not randomized, or when random assignment breaks down later in the study. When I run studies similar to our UK work in the future, I will be sure to borrow concepts from this book and organize my data more thoroughly to assist with synthesis. I enjoyed this book; it was hard to read, but I still learned a lot from it and still have much to learn.


Learnings

  • Random assignment has become the standard solution for reducing the impact of confounding variables on experimental analysis.

  • Typically, when running experiments, our goal is to estimate the effect of a cause, described as the effect of a treatment on any given unit of the population.

  • Uncertainty in statistics is understood through quantities like variance, standard deviation, standard error, confidence intervals, and statistical power. Ideally, your study should have as much power and give you as much certainty as possible given available resources. The goal is usually to estimate the average treatment effect, the effect of a cause.

  • Covariates are variables other than the treatment that influence outcomes, and they can be helpful for improving the precision of our average treatment effect estimate.

  • One-sided non-compliance occurs when some participants assigned to the treatment condition go untreated; naively comparing those who were actually treated with those who were not introduces bias, so the intent-to-treat effect and the complier average causal effect are estimated instead (sketched after this list).

  • Two-sided non-compliance involves four groups: compliers, never-takers, always-takers, and defiers, where compliers take the treatment only if assigned to it, never-takers never take it, always-takers take it regardless of their assigned group, and defiers do the opposite of their assignment.

  • Attrition occurs when outcome data are missing from an experiment; it is sometimes a feature of the study itself and must be accounted for, as ignoring it can severely bias a study's results.

  • The non-interference assumption states that a subject's outcome is unaffected by the treatment assignments of other individuals. When such an effect does exist, it is referred to as spillover.

  • Oftentimes in a study we want to understand the context in which the treatment effect is strongest, which is the basis for studying treatment effect heterogeneity.

  • In an experiment, it is sometimes not the treatment itself that produces an outcome but a mediating variable set in motion by the treatment.

  • Oftentimes we want to determine not just the sample average treatment effect but the population average treatment effect as well, and we can do this through meta-analysis, pooling similar studies together to form broader conclusions (a pooling sketch also appears after this list).

  • Documentation is very important when conducting research, particularly for making experiments replicable so that they can be explored again in the future, and so that others can access your results.
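
Relating to the non-compliance bullets above, here is a minimal sketch, again on simulated data, of the standard way one-sided non-compliance is handled: estimate the intent-to-treat (ITT) effect and divide it by the share of compliers to recover the complier average causal effect (CACE). The numbers are made up for illustration and are not from the book.

```python
# Minimal sketch: one-sided non-compliance, ITT, and the CACE (Wald) estimator.
import numpy as np

rng = np.random.default_rng(7)
n = 2000

assigned = rng.integers(0, 2, n)   # random assignment to treatment (1) or control (0)
complier = rng.random(n) < 0.6     # ~60% would actually take the treatment if assigned
received = assigned * complier     # one-sided: controls cannot take the treatment

# Outcome responds only to treatment actually received (true effect = 3.0).
outcome = 3.0 * received + rng.normal(0, 2, n)

# Intent-to-treat effect: compare groups as assigned, ignoring who complied.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Share of compliers: difference in treatment receipt rates across assignment groups.
compliance_rate = received[assigned == 1].mean() - received[assigned == 0].mean()

# CACE: the ITT effect scaled up by the compliance rate (instrumental-variables logic).
cace = itt / compliance_rate

print(f"ITT: {itt:.2f}, compliance rate: {compliance_rate:.2f}, CACE: {cace:.2f}")
```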
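
And relating to the meta-analysis bullet above, a short sketch of precision-weighted (fixed-effect) pooling: combining ATE estimates from several hypothetical studies into one overall estimate. The estimates and standard errors below are invented for illustration.

```python
# Minimal sketch: fixed-effect pooling of several (hypothetical) ATE estimates.
import numpy as np

ates = np.array([2.1, 1.4, 3.0, 2.5])   # study-level ATE estimates (illustrative)
ses = np.array([0.8, 0.5, 1.2, 0.9])    # their standard errors (illustrative)

weights = 1.0 / ses**2                  # weight each study by its precision
pooled_ate = np.sum(weights * ates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled ATE: {pooled_ate:.2f} (SE {pooled_se:.2f})")
```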
