1. A defense by Robert Lucas:
It has been known for more than 40 years and is one of the main implications of Eugene Fama’s “efficient-market hypothesis” (EMH), which states that the price of a financial asset reflects all relevant, generally available information. If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier. (The term “efficient” as used here means that individuals use information in their own private interest. It has nothing to do with socially desirable pricing; people often confuse the two.)
The Economist’s briefing also cited as an example of macroeconomic failure the “reassuring” simulations that Frederic Mishkin, then a governor of the Federal Reserve, presented in the summer of 2007. The charge is that the Fed’s FRB/US forecasting model failed to predict the events of September 2008. Yet the simulations were not presented as assurance that no crisis would occur, but as a forecast of what could be expected conditional on a crisis not occurring. ... Mr Mishkin recognised the potential for a financial crisis in 2007, of course. Mr Bernanke certainly did as well. But recommending pre-emptive monetary policies on the scale of the policies that were applied later on would have been like turning abruptly off the road because of the potential for someone suddenly to swerve head-on into your lane. The best and only realistic thing you can do in this context is to keep your eyes open and hope for the best.
2. A rebuttal by Ping Chen:
Lucas was silent about the major questions raised by the current crisis: What is the nature of a financial crisis? What is the role of government in macro management? And who should be held responsible for economics' failure to prevent and prepare for the crisis?
Lucas has been the leader of the so-called counter-Keynesian revolution, conducted under the banner of rational expectations and microfoundations since the 1970s. According to his simplistic but elegant theory, unemployment is the worker's rational choice between work and leisure, and the source of business cycles is external shocks. There is no room for government intervention, since the market system is inherently stable and always in equilibrium. In 2002 we found that Lucas's theory of microfoundations had weak evidence under the Principle of Large Numbers. Rational expectations may also be defeated by arbitrage activity when pairs of relative prices move in opposite directions, say, stock prices falling while housing prices rise, or wages falling while consumption rises under easy credit. This financial crisis dealt a historic blow to his microfoundations theory, since the crisis was rooted not in microfoundations at the household level but in the meso foundation, i.e. the financial intermediaries themselves. The Great Depression and the current crisis show clearly that financial markets are inherently unstable, ... Lucas did not have the courage to defend his infamous theory of microfoundations, but tried to shift the debate from macroeconomics to financial theory.
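A minimal sketch of the Principle of Large Numbers point (my illustration with stand-in numbers, not Chen's 2002 calculation): if aggregate output were just the sum of many independent household-level shocks, its relative volatility would shrink like 1/sqrt(N), which is negligible for an economy with tens of millions of households.

from math import sqrt

# Toy illustration (stand-in numbers, not Ping Chen's actual calculation):
# if aggregate output were the sum of N independent household-level shocks,
# each with a 10% standard deviation, the law of large numbers makes the
# aggregate's *relative* volatility shrink like 1/sqrt(N).
sigma_household = 0.10                      # assumed idiosyncratic volatility
for n_households in (1_000, 1_000_000, 100_000_000):
    relative_sd = sigma_household / sqrt(n_households)
    print(f"N = {n_households:>11,}: aggregate volatility ~ {relative_sd:.6%}")

# At N = 100 million the implied aggregate volatility is roughly 0.001%,
# orders of magnitude below the few-percent output swings seen in actual
# recessions, which is the basis for Chen's claim that business-cycle shocks
# originate at the "meso" level (financial intermediaries) rather than from
# independent household-level decisions.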
Surprisingly, Lucas claimed that the current crisis even strengthened the credibility of the efficient-market hypothesis (EMH). His argument was that no one could make a short-term forecast of a crisis and profit from that forecast. Mr. Lucas seems to have more faith in laissez-faire economics than knowledge of the EMH and its alternatives. The fundamental assumption behind the EMH is that financial markets are ruled by random walks, or Brownian motion. If this theory were true, large price movements like a financial crisis would be very unlikely.
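To put a number on that last claim (a back-of-the-envelope sketch; the 1% daily volatility is my own illustrative assumption, not Chen's), one can ask how likely the Gaussian random-walk benchmark makes a one-day crash the size of 19 October 1987:

from math import erfc, sqrt

# Back-of-the-envelope check: under a Gaussian random walk with ~1% daily
# volatility, what is the probability of a one-day drop of 20%, roughly the
# S&P 500's fall on 19 October 1987?
daily_sigma = 0.01          # assumed daily volatility
crash_return = -0.20        # a 1987-sized one-day drop

z = crash_return / daily_sigma                  # about -20 standard deviations
prob = 0.5 * erfc(-z / sqrt(2))                 # P(return <= crash_return) under a normal
print(f"z-score: {z:.1f}, Gaussian probability: {prob:.2e}")
# -> z-score: -20.0, Gaussian probability: about 2.8e-89

# Under the normal distribution a move this size should essentially never
# happen, yet it did; that is the fat-tail objection Chen is pointing at.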
3. Mark Thoma's rebuttal:
I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.
But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand.
One major debate, for example, was over the rate at which the macroeconomy returns to its long-run equilibrium after a shock. Both New Keynesians and Chicago-type equilibrium theorists believed the economy was always moving in the right direction, toward long-run equilibrium; the question was simply how fast that movement occurred and whether there was any role for policy to help the process along. Neither side of the debate seriously considered the possibility that the economy would continue to move away from its long-run equilibrium for a substantial period of time, for years, as a housing price bubble developed, and that once the bubble popped the interconnectedness of financial markets would cause the problem to spread in falling-domino fashion and throw the entire economy into a deep recession.
... policymakers couldn't and didn't take seriously the possibility that a crisis and meltdown could occur. And even if they had seriously considered the possibility of a meltdown, the models most people were using were not built to be informative on this question. It simply wasn't a question that was taken seriously by the mainstream.
Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?
It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices.
And here I think that thought leaders such as Robert Lucas and others who openly ridiculed models they disagreed with have questions they should ask themselves (e.g. Mr Lucas saying "At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another", or more recently "These are kind of schlock economics"). When someone as notable and respected as Robert Lucas makes fun of an entire line of inquiry, it influences whole generations of economists away from asking certain types of questions, some of which turned out to be important. Why was it necessary for the major leaders in macroeconomics to shut down alternative lines of inquiry through ridicule and other means rather than simply citing evidence in support of their positions? What were they afraid of? The goal is to find the truth, not win fame and fortune by dominating the debate.
... I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.
Most economists took Lucas' view:
1) the welfare cost of business cycles, and hence the potential gain from stabilizing them, is small (see the sketch below),
2) the problem of business cycles has been solved. (See here.)
Most of these economists then went on to build DSGE models to explain "stylized facts". (Previous comment here.) Perhaps the Chicago school (or any school) should be prevented from being too influential in a field? Or is this a failure of economists to police themselves in a free market of ideas? Will rational expectations one day be the lobotomy of economics?
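The sketch promised in claim 1): the "small cost" figure comes from a standard Lucas-style welfare calculation (the parameter values below are my stand-ins, roughly those used in Lucas's 2003 "Macroeconomic Priorities" address). With CRRA utility and lognormal consumption, the share of consumption an agent would give up to eliminate aggregate consumption volatility is approximately 0.5 * gamma * sigma^2:

# Minimal sketch of the Lucas-style welfare-cost arithmetic behind claim 1).
# The inputs are assumptions: log utility (gamma = 1) and a standard deviation
# of log consumption around trend of about 0.032.
gamma = 1.0       # coefficient of relative risk aversion
sigma = 0.032     # assumed std. dev. of log consumption around trend

welfare_cost = 0.5 * gamma * sigma**2
print(f"Welfare cost of business cycles ~ {welfare_cost:.4%} of consumption")
# -> roughly 0.05% of consumption, the basis for the "small cost" claim

Even with risk aversion several times higher, the figure stays well below one percent of consumption, which is why economists who accepted this framing saw little payoff in studying stabilization policy or crisis prevention any further.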
4. Robert Solow (via Mark Thoma):
What is needed for a better macroeconomics? My crude caricature of the Ramsey-based model suggests some of the gross implausibilities that need to be eliminated. The clearest candidate is the representative agent. Heterogeneity is the essence of a modern economy. In real life we worry about the relations between managers and shareowners, between banks and their borrowers, between workers and employers, between venture capitalists and entrepreneurs, you name it. We worry about those interfaces because they can and do go wrong, with likely macroeconomic consequences. We know for a fact that heterogeneous agents have different and sometimes conflicting goals, different information, different capacities to process it, different expectations, different beliefs about how the economy works. Representative-agent models exclude all this landscape, though it needs to be abstracted and included in macro-models.
I also doubt that universal rational expectations provide a useful framework for macroeconomics. One understands the appeal. Think of it this way: Herb Simon was surely right about bounded rationality; no one would deny that most economic agents are actually like that, and natural selection does not work fast enough to eliminate them. Why did the notion of "satisficing" never catch on? I think it is because the assumption of complete rationality tells the modeller what to do, whereas bounded rationality only tells the modeller what not to do. That is not helpful. Something similar is true about rational expectations. If there were a nice parametric family of alternative ways to model expectations, it might catch on. Most of us would happily go along with the notion of expectational equilibrium: if specific underlying expectations generate an outcome in which those expectations are systematically and non-trivially violated, that situation can not be an equilibrium. It is what happens then that needs thought. The situations that agents need to anticipate need not even be probabilistic, surely not stationary. The popular device used to be adaptive expectations; that may have been inadequate. Maybe this is a case for the application of psychological research (and sociological research as well, because the formation of expectations is a social process). Maybe experiments can be designed. Heterogeneity across agents and classes of agents is certainly important precisely here. One would like a simple, definite way to proceed, if that is possible. A good example of the sort of thing I mean is the way the Dixit-Stiglitz model made monopolistic competition easy. (The trouble is that we are dealing with an unobservable.)
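For concreteness (my addition, the standard textbook device rather than anything in Solow's text), the adaptive-expectations rule he mentions updates the forecast by a fixed fraction of the last error; its well-known weakness, and the opening the rational-expectations critique exploited, is that the forecaster makes persistent, systematic errors whenever the variable trends:

# Adaptive expectations: x_e[t+1] = x_e[t] + lam * (x[t] - x_e[t]).
# Against a steadily trending series, the forecast lags behind by a constant
# amount, a systematic and exploitable error of exactly the kind Solow's
# "expectational equilibrium" criterion rules out.
lam = 0.5                       # assumed adjustment speed, 0 < lam <= 1
x_e = 0.0                       # initial expectation
for t in range(20):
    x = 1.0 * t                 # actual series: a steady unit upward trend
    error = x - x_e             # forecast error at date t
    x_e = x_e + lam * error     # adaptive update for the next period
print(f"Forecast error settles at about {error:.2f} (= trend growth / lam)")
# -> 2.00; the error never shrinks to zero, so the rule fails Solow's test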
Of course, it was Solow's growth model that the freshwater schools adopted to provide the microfoundations for representative-agent models (RAMs), even though Solow himself had very little to do with that program. At the same time, the RAM was a serious modeling attempt to derive microfoundations for macroeconomists' AD-AS curves. So if RAMs are a problem, then perhaps the entire AD-AS/IS-LM framework should be seriously questioned as well.