Last week’s New Yorker featured an engaging portrait by Ian Parker of MIT development economist Esther Duflo, perhaps the leading light among that field’s “randomistas.” These (mostly) young economists have made their mark on their profession by applying randomized control trials (borrowed from medicine) to development strategies. This really shouldn’t surprise anyone — randomized control trials have been used for other types of social programs (e.g., delinquency prevention) for years now, and given that economics is about human behavior it’s surprising that economists haven’t embraced them earlier.
Also familiar to behavioral scientists are the objections to assigning participants at random to experimental and control groups.
“You shouldn’t be experimenting on people.” O.K., so you have no idea whether [your programs] work–that’s not experimental?
The former objection is met far too infrequently with the latter rejoinder. Someone comes up with an idea for an intervention, announces their intentions, and puts that idea into practice, and all of a sudden it is accepted as the right thing to do… and testing whether it works better than doing nothing (which really means “better than engaging in the variety of things people do that you don’t know about”) thus becomes the wrong thing to do. That’s some sloppy ethics, at best.
The one objection to randomized control trials mentioned in the article that might hold water is that an intervention shown to be empirically supported in one context might not be empirically supported in another, due to variation in ecological and temporal phenomena. Of course, the logical solution is more experimentation, not less. In its psychosocial programs in the Democratic Republic of Congo, the Center for Victims of Torture has instituted what Research Director Jon Hubbard calls “rolling control groups” to address the problem of changing context. The situation in conflict zones is often very fluid, so if a program is shown to be better than doing nothing during one intervention period (6 weeks for CVT’s program), that doesn’t mean it will be better during the next. So Hubbard came up with the rolling control: at the beginning of each intervention period, the program accepts and screens 125% of its capacity, then randomly assigns the surplus 25% (one-fifth of those screened) to a wait-list control; after the intervention period, each group gets a post-test, and voilà! They have a small-scale randomized control trial that shows their funders that they are monitoring the effectiveness of their programs for each cohort.
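The mechanics of that design are simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration (the function name, a capacity of 100, and the participant labels are all my assumptions, not details from CVT’s actual program): screen 125% of capacity, shuffle, and split off the surplus as the wait-list control.

```python
import random

def rolling_control_assignment(screened, capacity, seed=None):
    """Illustrative sketch of a rolling control group: randomly split a
    screened cohort into a treatment group (up to program capacity) and a
    wait-list control (the surplus). Not CVT's actual implementation."""
    rng = random.Random(seed)  # seeded for reproducibility in this example
    pool = list(screened)
    rng.shuffle(pool)  # random assignment happens here
    return pool[:capacity], pool[capacity:]

# Screen 125% of a (hypothetical) 100-person capacity, as in the 125%/25% design.
participants = [f"participant_{i}" for i in range(125)]
treatment, waitlist = rolling_control_assignment(participants, capacity=100, seed=42)
# treatment has 100 members; waitlist has the remaining 25.
```

Each intervention period, the assignment is redone with the new cohort, which is what makes the control “rolling”: every 6-week cycle yields its own small comparison group under whatever conditions prevail at the time.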
The article on Duflo ends with a couple of paragraphs on the art of the evaluator’s profession that I found particularly striking — though admittedly, maybe only a data nerd like me would love them:
“It can’t only be the data,” Duflo said, showing a rare willingness to generalize. “Even to understand what data means, and what data I need, I need to form an intuition about things. And that process is as ad hoc and impressionistic as anybody’s.”
It can’t only be the data, but there must be data. “There is a lot of noise in the world,” Duflo said. “And there is a lot of idiosyncrasy. But there are also regularities and phenomena. And what the data is going to be able to do–if there’s enough of it–is uncover, in the mess and noise of the world, some lines of music that may actually have harmony. It’s there, somewhere.”