There is widespread agreement that economic and statistical models have failed us. (See some old posts here, here, here, here, and here.) Some recent posts in the blogosphere continue to remind us of this but have not provided any alternatives. The main point that all model users fail to remember is that ALL MODELS ARE FALSE. It is when they decide to drink their own model elixir and substitute models for reality that, instead of living in the real world, they begin to live in a dream-like state. (If I could substitute my spouse for a model I'd probably live in a dream-like state as well.)
First off, John Cassidy is absolutely correct:
To repeat myself, the problem wasn’t so much with the models themselves, but with how they were utilized. Rather than being used to discipline individual traders and trading desks, they were used to justify bigger and bigger speculative positions, and more and more leverage.
I take exception, however, to the tone of the article, which blames model builders without considering the incentives. What incentive is there to tell a CEO or a trader not to do a trade when the payoffs are huge, merely because, even though the model may be right, there is a small chance that it could be wrong?
Another recent post attacks utility maximization. I am sympathetic to the idea that life is not all utility maximization. I mounted a more general attack on microeconomics here; in fact, the failure of contract theory, principal-agent models, and pay for performance indicates a failure of expected utility maximization. These models typically assume that utility is unbounded from below, yet as the blogger cites Carol Graham:
"People seem to be able to adapt to high levels of adversity, poor health and all kinds of things and retain their natural cheerfulness or their natural happiness…People really can adapt to adversity."
I would also point out that Herbert Simon's concept of satisficing has been around a long time (and surprisingly did not make it into the blogger's comments). For all purposes it appears to be a viable alternative, yet it has never made it into mainstream economics.
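As a toy sketch of the distinction (the payoffs and aspiration level here are made up purely for illustration):

```python
import random

def maximize(options, utility):
    """Textbook optimizer: evaluate every option and pick the best."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Simon's satisficer: stop at the first option that is 'good enough'."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return max(options, key=utility)  # nothing cleared the bar; take the best seen

random.seed(1)
payoffs = [random.uniform(0, 100) for _ in range(1000)]
identity = lambda x: x  # linear utility, for simplicity

print(maximize(payoffs, identity))       # inspects all 1,000 options
print(satisfice(payoffs, identity, 90))  # stops at the first payoff >= 90
```

The satisficer economizes on search at the cost of (usually) accepting a sub-optimal outcome, which is precisely why it sits uneasily with optimization-based models.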
While there may be a case for starting anew, it is still possible to retain the EU framework that almost all economists love. Hyperbolic discounting is one modification that comes to mind. Moreover, it is also possible to re-cast Carol Graham's comment into a framework of binding budget constraints. And while it is easy to talk about models in lyrical terms or via analogies, the preferred language of economists is mathematics. Whether this is restraining economists is a debate for another post, i.e. are economists constrained by their tools (mathematics) or by their lack of imagination? For instance, economists (still?) struggle to put Nelson and Winter's evolutionary approach into mathematical models.
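For the record, the discounting contrast is simple to state (these are the standard textbook forms, nothing novel). Exponential discounting weights a payoff at delay t by a constant-rate factor; hyperbolic and quasi-hyperbolic discounting replace it:

```latex
D_{\text{exp}}(t) = \delta^{t}, \qquad
D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\beta\delta}(t) =
\begin{cases}
1 & t = 0 \\
\beta\,\delta^{t} & t \geq 1
\end{cases}
```

The hyperbolic forms generate preference reversals (impatience over near-term trade-offs, patience over distant ones) that exponential discounting rules out, while leaving the rest of the expected utility machinery intact.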
The difficulty of translating ideas into models was recently brought to the fore by Rajiv Sethi's discussion of John Geanakoplos' Leverage Cycle. Rajiv blogs:
David at Deus Ex Macchiato agreed that the work is important, but added:
What astonishes me however is that this is in any way news to the economics community. Ever since Galbraith’s account of the importance of leverage in the ‘29 crash, haven’t we known that leverage determines asset prices, and that the bubble/crash cycle is characterised by slowly rising leverage and asset prices followed by a sudden reverse in both?
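David's mechanism is easy to see with stylized numbers (mine, for illustration). Buy a $100 asset with $20 of equity and $80 of debt:

```latex
\text{leverage} = \frac{\text{asset value}}{\text{equity}} = \frac{100}{20} = 5.
```

A 10% fall in the asset's price leaves $90 of assets against $80 of debt, so equity drops from $20 to $10: a 50% loss. Forced sales to restore margin then push the price down further, which is the self-reinforcing reverse leg David describes.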
I would add that it is one thing for Galbraith to articulate an idea, but what matters in PhD programs these days is not exploring ideas but model-building, and putting ideas into models is sometimes easier said than done. One idea, dating at least to Zarnowitz, that has been challenging to formalize is that every boom sows the seeds of its eventual bust. It is possible that economists have not been re-reading popular works as much as they should (after all, these are frowned upon when they should be building or extending DSGE models), and Rajiv agrees:
Implicit in David's question is the accusation that the training of professional economists has become too narrow, and on this point I believe that he is absolutely correct.
Econbrowser's Jim Hamilton agrees that economists think too narrowly:
We're fond of building models of rational people reacting in a predictable way to the incentives they face; if their behavior changes, we look for an explanation in terms of changed incentives. It turned out to be in the fund managers' short-term interests to go with the more aggressive strategy, with disastrous longer-run consequences. Was the manager rational before 2006 and irrational after 2006, or did the incentives fundamentally change?
One of the explanations I sometimes hear is a story about "search for yield," which appears to be a combination of the two interpretations, attributing some of the altered risk-taking strategy to the period of very low interest rates in the preceding years. If this indeed accounts for some of the changed behavior by lenders, it is a channel for the transmission of monetary policy to the economy that's left out of the Fed's standard models, and another reason to be cautious about overestimating the benefits that are practical to achieve from a stimulative monetary policy.
Finally, Sciencenews reminds us:
The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
Even randomized controlled trials should be viewed with skepticism:
Statistical problems also afflict the “gold standard” for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients’ personal characteristics won’t bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. “Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality,” they wrote.
Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized.
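The coin-flip analogy is easy to check numerically. A minimal simulation (my own sketch; the 20-patient trial size and the 15-to-5 threshold are arbitrary choices) shows both the roughly 1-in-1,024 chance of ten straight heads and how often simple randomization hands one arm a badly lopsided share of patients:

```python
import random

random.seed(42)

# Ten heads in a row with a fair coin: probability 0.5**10.
print(0.5 ** 10)  # 0.0009765625, i.e. about 1 in 1024

def lopsided(n_patients=20, threshold=15):
    """Randomize n_patients to two arms; flag splits of 15-5 or worse."""
    treatment = sum(random.random() < 0.5 for _ in range(n_patients))
    return treatment >= threshold or treatment <= n_patients - threshold

trials = 100_000
share = sum(lopsided() for _ in range(trials)) / trials
print(share)  # on the order of a few percent of these small trials
```

With thousands of trials in progress, even a few-percent rate of badly imbalanced allocations means some published results rest on poorly randomized samples, which is exactly the quote's point.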