From the same story in this post:
In 1983 the first rotavirus vaccine was ready for testing. ... From all vantages, the first trial, conducted in Finland, was a landmark success: the vaccine reduced the chances that a vaccinated child would get severe rotavirus by 88 percent, demonstrating that immunity could be induced with a live oral vaccine. Moreover, the vaccine had no troubling side effects.
Encouraged, Smith Kline-RIT (now GlaxoSmithKline Biologicals) launched trials in other countries, and by the late 1980s the end of rotavirus-related deaths seemed at hand. But then the results from trials in Africa and Peru proved inconsistent and disappointing. Lacking certainty about the reasons for the troubles - although poor health, untreated infections, malnutrition and parasites are known to affect a child's immune response to vaccines - the company put its rotavirus program on hold.
I highlight this and the earlier post because development economists have begun to embrace randomized trials as a way of finding out what kinds of development aid work. Randomized trials have a long history in medicine, and economists have the advantage of adopting the best practices from that experience. While they have not ignored the main criticism that randomized trials are a 'black box', the enthusiasm that has been coursing through the veins of development economists is palpable. Why have economists so ardently embraced randomized trials?
1. It gets at the question of causation. No more instrumental variables! The treatment-control differences give a clear-cut answer as to what works and what doesn't. It doesn't matter that we do not know why a program fails -- accountability is ensured when it is thrown out on the basis of a randomized trial. Yet as the rotavirus story above shows, it can matter to know why a program works in one location but not another. As Atul Gawande writes so eloquently in the New Yorker, even knowing what works is not sufficient. We need to get at the question: why does it work better in one place than another? While his is not a story about randomized trials, it is a story about treatment.
Over the phone, the doctor told Honor that her daughter’s chloride level was far higher than normal. Honor is a hospital pharmacist, and she had come across children with abnormal results like this. “All I knew was that it meant she was going to die,” she said quietly when I visited the Pages’ home, in the Cincinnati suburb of Loveland. The test showed that Annie had cystic fibrosis. ...
The one overwhelming thought in the minds of Honor and Don Page was: We need to get to Children’s. Cincinnati Children’s Hospital is among the most respected pediatric hospitals in the country. It was where Albert Sabin invented the oral polio vaccine. The chapter on cystic fibrosis in the “Nelson Textbook of Pediatrics”—the bible of the specialty—was written by one of the hospital’s pediatricians. The Pages called and were given an appointment for the next morning. ...
The one thing that the clinicians failed to tell them, however, was that Cincinnati Children’s was not, as the Pages supposed, among the country’s best centers for children with cystic fibrosis. According to data from that year, it was, at best, an average program. This was no small matter. In 1997, patients at an average center were living to be just over thirty years old; patients at the top center typically lived to be forty-six. By some measures, Cincinnati was well below average. The best predictor of a CF patient’s life expectancy is his or her lung function. At Cincinnati, lung function for patients under the age of twelve—children like Annie—was in the bottom twenty-five per cent of the country’s CF patients. And the doctors there knew it.
2. The earlier post highlighted that even within a randomized trial, subgroup interactions can be important. In that case, it was infants under 3 months. The search for subgroup effects (i.e., is the treatment most effective for this particular subgroup?) can easily become a fishing expedition, sometimes referred to as data mining. The number of hypotheses tested can easily reach into the hundreds. This type of analysis essentially puts the analyst back where they were before the randomized trial.
Because a randomized trial is so expensive to run, finding a positive treatment effect in a particular subgroup means that another randomized trial on that subgroup is unlikely ever to be run. The analyst has to stand by the statistical analysis, which raises issues of multiple comparisons, statistical power, independence, and a whole host of other problems that the analyst had tried to avoid in the first place by putting his faith in the results of a randomized trial.
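To see why the fishing expedition is dangerous, here is a minimal simulation -- my own illustration, not drawn from any of the trials discussed. Even when the treatment has no effect at all, testing it across a hundred arbitrary subgroups will, on average, turn up about five 'significant' results at the conventional 5 percent level:

```python
# A toy sketch of subgroup "fishing": simulate a trial with NO true
# treatment effect, then test the treatment in many arbitrary subgroups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, size=n)   # random assignment, 0 or 1
outcome = rng.normal(0, 1, size=n)     # outcome unrelated to treatment

# 100 arbitrary binary "subgroup" labels (stand-ins for age bands, regions, etc.)
n_subgroups = 100
false_positives = 0
for _ in range(n_subgroups):
    subgroup = rng.integers(0, 2, size=n).astype(bool)
    t = outcome[subgroup & (treated == 1)]
    c = outcome[subgroup & (treated == 0)]
    _, p = stats.ttest_ind(t, c)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_subgroups} subgroup tests are 'significant' "
      f"at p < 0.05, despite zero true effect")   # expect roughly 5
```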
3. One of the main reasons for using randomized trials is to get around the selection problem. Comparing the outcomes of those in and out of a program is biased because some participants self-select into the program. However, randomization only pushes the selection problem back one step. There are experiments where those selected to receive the treatment refuse it (known as "non-compliance") while those selected to receive the placebo somehow manage to circumvent the experimental controls and obtain the treatment ("crossovers"). In randomized trials and in econometrics, the selection problem has given rise to a whole host of estimators: ATE (average treatment effect), ITT (intent to treat), TOT (treatment on the treated) and others I'm not familiar with. Not surprisingly, the compliance problem has its own estimator: CACE (complier average causal effect).
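A toy simulation -- again my own sketch, with made-up numbers, not from any actual trial -- makes the differences between these estimators concrete. When compliance is selective (here, healthier people are more likely to obtain treatment), the naive comparison of takers versus non-takers is biased, the ITT is diluted toward zero, and the Wald/IV ratio recovers the effect for compliers:

```python
# Assignment z is randomized, but treatment actually received, d, differs:
# some of the assigned refuse (non-compliance) and some controls obtain
# treatment anyway (crossovers), both driven by unobserved health.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.integers(0, 2, size=n)          # randomized assignment
health = rng.normal(0, 1, size=n)       # unobserved; healthier people seek treatment

# Nested thresholds: anyone who would cross over would also comply (monotonicity)
d = np.where(z == 1, health > -0.5, health > 1.5).astype(int)

y = 2.0 * d + health + rng.normal(0, 1, size=n)   # true treatment effect is 2.0

naive = y[d == 1].mean() - y[d == 0].mean()       # takers vs. non-takers: selection bias
itt = y[z == 1].mean() - y[z == 0].mean()         # effect of *assignment*: diluted
cace = itt / (d[z == 1].mean() - d[z == 0].mean())  # Wald ratio: complier effect

print(f"naive = {naive:.2f}  (biased upward by selective compliance)")
print(f"ITT   = {itt:.2f}  (diluted by non-compliance and crossovers)")
print(f"CACE  = {cace:.2f}  (recovers the true 2.0 for compliers)")
```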
Just as growth econometrics has abandoned its search for the causes of economic growth and settled for correlations, development economists seem to have abandoned their search for explanations and settled for finding out what works without fully knowing why it works.
This is meant only to be a note of caution and nothing more, but economists need to look at the experiments they are conducting and subject themselves to a cost-benefit analysis: Are the costs of running randomized trials greater than the benefits received by the participants? What about the benefits of the data that are gathered - can they be used to advance knowledge in the field, or are the results of these 'black box' experiments to be filed away and forgotten when the fad is over?