Willem Buiter fired some of the early shots, as summarized here. Recently, The Economist revived the debate with its cover story, which Econbrowser reflects on here. While I am sympathetic to Buiter's frustrations, my view is closer to Econbrowser's (although I have never learned how to build or calibrate DSGE models).
For instance, Buiter takes DSGE model building to task:
Linearize and trivialize
If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave. So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved. This was achieved by completely stripping the model of its non-linearities and by ... mappings into well-behaved additive stochastic disturbances.
Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc. will appreciate what is lost... Threshold effects, non-linear accelerators - they are all out of the window. Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive.
Technically, the non-linear stochastic dynamic models were linearised (often log-linearised) at a deterministic (non-stochastic) steady state. The analysis was further restricted by only considering forms of randomness that would become trivially small in the neighbourhood of the deterministic steady state. Linear models with additive random shocks we can handle - almost!
Even this was not quite enough... When you linearize a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner. ... The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out ... the explosive solution trajectories. What they were left with was something that, following an exogenous random disturbance, would return to the deterministic steady state pretty smartly. No L-shaped recessions. No processes of cumulative causation and bounded but persistent decline or expansion. Just nice V-shaped recessions.
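The mechanics Buiter describes can be illustrated with a toy example (my own, not any actual DSGE model): take a one-equation nonlinear law of motion, linearize it numerically at its deterministic steady state, and note that the resulting linear model either decays back to the steady state (slope below one) or explodes (slope above one). The convention he criticizes amounts to keeping only the stable path.

```python
# Toy nonlinear law of motion (Solow-style capital accumulation; illustrative
# parameters, not an actual DSGE model): k' = s*k^alpha + (1 - delta)*k
alpha, s, delta = 0.33, 0.2, 0.1

def f(k):
    return s * k**alpha + (1 - delta) * k

# Deterministic steady state solves k = f(k): k* = (s/delta)^(1/(1-alpha))
k_star = (s / delta) ** (1 / (1 - alpha))

# "Linearize and trivialize": replace f with its slope at the steady state, so
# deviations follow x_{t+1} = rho * x_t with rho = f'(k*) (central difference).
h = 1e-6
rho = (f(k_star + h) - f(k_star - h)) / (2 * h)

def impulse_path(rho, T=20):
    """Deviation from steady state after a one-time unit shock at t = 0."""
    x = [1.0]
    for _ in range(T - 1):
        x.append(rho * x[-1])
    return x

stable = impulse_path(rho)     # rho < 1: geometric return, the "V-shaped" path
explosive = impulse_path(1.1)  # the kind of root the DSGE crowd rules out

print(f"k* = {k_star:.3f}, rho = f'(k*) = {rho:.3f}")
print(f"stable path after 20 periods:    {stable[-1]:.4f}")
print(f"explosive path after 20 periods: {explosive[-1]:.4f}")
```

Once linearized, the only two behaviors on offer are exactly the ones Buiter names: geometric decay back to the steady state, or unbounded explosion. Nothing in between, so no L-shaped recessions.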
When I was in graduate school I remember being excited about learning DSGE models, but, as Buiter accurately describes, the process strips away the most interesting parts - the dynamics and the stochastics - and focuses on the steady state. This is unfortunate but perhaps unavoidable, given limitations in computational ability (of both economists and computers). (Those more knowledgeable would know better than I.) This criticism, however, is not limited to macroeconomics. In microeconomics almost all policy analysis assumes that markets are in equilibrium (of demand and supply), the counterpart to the macroeconomic steady state. Calculations of welfare losses, or of the effects of price ceilings and all the other rigidities considered in economics texts, assume that markets are in equilibrium. Why has microeconomics escaped the wrath of the popular press?
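The microeconomic counterpart can be made explicit with a textbook sketch (my own illustrative numbers): the standard welfare-loss calculation for a price ceiling only makes sense because we first assume the market clears at the demand-supply intersection.

```python
# Textbook linear demand and supply (illustrative numbers). The welfare
# analysis of a price ceiling assumes the market would otherwise clear
# at the intersection of the two curves.
a, b = 10.0, 1.0   # inverse demand:  P = a - b*Q
c, d = 2.0, 1.0    # inverse supply:  P = c + d*Q

def demand_price(q): return a - b * q
def supply_price(q): return c + d * q

# Competitive equilibrium: demand_price(Q) = supply_price(Q)
q_eq = (a - c) / (b + d)
p_eq = demand_price(q_eq)

# Binding price ceiling below p_eq: quantity traded is what suppliers offer.
p_ceiling = 4.0
q_ceiling = (p_ceiling - c) / d

# Deadweight loss: triangle between the two curves over the lost trades.
dwl = 0.5 * (q_eq - q_ceiling) * (demand_price(q_ceiling) - supply_price(q_ceiling))

print(f"equilibrium: Q = {q_eq}, P = {p_eq}")
print(f"under ceiling {p_ceiling}: Q = {q_ceiling}, deadweight loss = {dwl}")
```

Every line of the welfare calculation leans on the equilibrium benchmark, the micro analogue of linearizing around the macro steady state.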
There is a sense in which, after the rational expectations devastation wrought by Robert Lucas, economists began to look elsewhere - mainly to the micro-founded macro models that underlie all DSGE models. This period also coincided with the Great Moderation, and almost all the DSGE models I was exposed to in graduate school sought to "explain stylized facts" rather than to make policy recommendations or to forecast. Policy recommendations are for politicians - economists (armed with DSGE models) only make positive statements about policy effects based on simulations. The results of these simulations (policy oriented or not) depend on many underlying assumptions, and competing models with different assumptions could overturn them, resulting in a great deal of confusion (for me, especially when, as a graduate student, I was trying to synthesize some of these results). Moreover, the models were so complicated, with so many feedback effects, that it was generally impossible to tell what was driving the results, and the discussion of most impulse responses (model simulations) became more rhetoric or storytelling than analysis. (Never mind that the complications made coding and calculation difficult, and that getting economists to give up their code for examination was non-trivial - even today.)
Unfortunately, in order to make progress, the basics of DSGE models need to be taught (and learned). From Econbrowser:
I won't deny that in the past 20 years, I haven't seen more than a few models that struck me as pretty irrelevant for analysis of real world issues. But I think that some mathematical training, and the use of models, is essential to economic analysis. After all, one can think of completely irrelevant frameworks for looking at the world even without a model, just as one can with a model.
Unfortunately, because many DSGE models only tried to explain "stylized facts", they are practically useless for the current financial crisis. Some DSGE models came out of the Asian Crisis, but again, these only explained stylized facts related to that particular crisis. If there is a valid criticism of DSGE models, it is that they are too specific and not easily generalizable. In the verbiage of the randomized control trials so in vogue today: they are internally valid but have little external validity. Economists slap each other on the back for being smart enough to come up with a (complicated) model that can explain something ex post, and as such they can never be prepared to make any general policy recommendation, such as whether a fiscal stimulus can work or whether high interest rates can defend a peg. They will answer only in terms specific to their model, and blather on as only economists can about whether the shocks are persistent enough, or whether the nominal rigidities are large enough to propagate the shock through the system. In other words, many economists lack the initiative or ability to look deeper into their model to discover what's inside their black box of equations. As long as the storytelling is convincing enough, there is no need to look inside.
Simon Johnson states this failure of macroeconomics as follows:
We also had a situation where falling values for collateral triggered more asset sales (either for accounting reasons or due to market pressure of various kinds), and this led to further lowering of collateral. ... More broadly, there was also some kind of bad expectations trap, in which everyone expected everyone else to default and that kind of fear of counterparty risk is obviously self-fulfilling. ... In other words, this view is that we can retrofit our favorite mainstream models to accommodate what happened, at least at a fairly high level of abstraction. There is no crisis for macroeconomic thinking, let alone for economics. ... Unfortunately, we know relatively little about how to stop today's process of falling credit around the world, known as "global deleveraging."
In other words, today's models, while overly complicated (with equations requiring market clearing), are actually not complicated enough, and are in my view over-reliant on stories based on impulse responses rather than on the actual causal mechanisms inside the model.
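To make the complaint concrete: in a linearized model, an impulse response is nothing more than repeated application of the transition matrix to a shock vector. Even in a two-variable toy system (my own illustrative numbers, not an estimated model), the response path immediately mixes every feedback channel, which is why the "story" read off the path cannot by itself isolate the causal mechanism.

```python
# Impulse responses from a linearized model: IRF(t) = A^t * shock, where A is
# the transition matrix for deviations from steady state. With even two
# interacting variables the path entangles both feedback channels.
# (Illustrative numbers, not an estimated model.)
A = [[0.7, 0.2],   # variable 1 depends on its own lag and on variable 2
     [0.1, 0.5]]   # variable 2 also feeds back on variable 1

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def irf(A, shock, horizon):
    """Path of the system after a one-time shock, starting at the steady state."""
    path = [shock]
    for _ in range(horizon):
        path.append(matvec(A, path[-1]))
    return path

path = irf(A, shock=[1.0, 0.0], horizon=10)
print(path[1])   # after one period the shock has already spilled into variable 2
print(path[-1])  # both variables decay back toward the steady state
```

The printed path is the whole "story" an impulse-response plot offers; disentangling which off-diagonal channel drove it requires opening the black box of equations, not narrating the plot.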
One possibility is to examine the financial crisis from the perspective of microeconomic concepts and do without complicated DSGE models, which can deliver almost any result the model builder wants. Barry Eichengreen:
The late twentieth century was the heyday of deductive economics. Talented and facile theorists set the intellectual agenda. Their very facility enabled them to build models with virtually any implication, which meant that policy makers could pick and choose at their convenience. Theory turned out to be too malleable, in other words, to provide reliable guidance for policy.
His article also seems to make the point that, given all the micro structure inherent in financial markets, the financial crisis should have been foreseen. (I think most economists agree that a financial crisis was coming, but they seemed to be focused more on the current account imbalance and the dollar than on the domestic sector.)
What got us into this mess, in other words, were not the limits of scholarly imagination. It was not the failure or inability of economists to model conflicts of interest, incentives to take excessive risk and information problems that can give rise to bubbles, panics and crises. It was not that economists failed to recognize the role of social and psychological factors in decision making or that they lacked the tools needed to draw out the implications. In fact, these observations and others had been imaginatively elaborated by contributors to the literatures on agency theory, information economics and behavioral finance. Rather, the problem was a partial and blinkered reading of that literature. The consumers of economic theory, not surprisingly, tended to pick and choose those elements of that rich literature that best supported their self-serving actions. Equally reprehensibly, the producers of that theory, benefiting in ways both pecuniary and psychic, showed disturbingly little tendency to object. It is in this light that we must understand how it was that the vast majority of the economics profession remained so blissfully silent and indeed unaware of the risk of financial disaster.