This book serves as a counterweight to books that laud free markets and capitalism, such as Thomas Friedman's The Lexus and the Olive Tree (missed that one) and The Commanding Heights. There is very little to disagree with in the book, and if the current financial crisis has not given readers pause about the virtues of free markets, then this book might.
His main concern is that these books have rewritten history in a manner that trumpets the successes of free markets without considering the evidence that countries such as England and Germany engaged in heavy protection of their industries in order to become industrialized and thus rich. (His interpretation of history, that is.) His support of tariffs is so unwavering that I wonder how he feels about Smoot-Hawley.
The major problem with this book is that it lacks any solid advice. Unlike the pro-free-market books that dispense advice easily and freely - deregulate! privatize! - this book takes a more nuanced view of deregulation and free trade. That makes it harder for a book calling for a more balanced approach to give any practical advice.
Chang is a defender of infant-industry protection using tariffs and government subsidies, which are typically more "patient" than foreign capital or foreign direct investment. He believes that industrialization is the only way for any country to get rich: it builds higher-level skills, which then feed into a virtuous circle of higher education, skills, and high-technology industries. This is a biased reading of the successes of the East Asian "miracle" countries. Malaysia has tried to develop a heavy-industry base around its national carmaker, Proton. As far as I can tell it has not been very successful even though it has existed for more than 20 years. Perhaps its failure is due to it not being sufficiently export-oriented, but in any case Chang gives no attention to industries that countries tried to build through import substitution and eventually had to abandon because they became a drain on the budget.
Unfortunately, the weakness of any prescription that calls for industry targeting is: which industry? I have no problem with industry targeting, or even with government targeting a wide array of industries, but there is no guarantee of success, nor that the benefits of targeting and subsidies will exceed the costs.
I do not disagree that calls for free trade are based more on ideology than evidence. However, his references to the "unholy trinity of IMF, WB and WTO" are sometimes a little rankling - no doubt this is intended, but the IMF has learned some lessons from the Asian financial crisis, and the failure of Doha has actually been considered positive by economists such as Dani Rodrik. If the IMF and WB are guilty of anything, it is that they respond too readily to "trends" and "fads" - education, corruption, privatization, liberalization - without much apparent consideration of a country's other circumstances.
Likewise, while his accusation that deregulation results in corruption (e.g., in Russia) may be true, there is also some evidence that regulation itself promotes corruption. It is true that in some cases corruption greases the wheels of capitalism (as he notes), but isn't the existence of corruption in those cases prima facie evidence for less regulation?
While "getting the balance right" which is a subtitle in one of the chapters is easy to say putting it into practice is a lot harder and dispensing this little statement as advice borders on flip. As such, I view Dani Rodrik et. al's attempts to put growth into practical terms using diagnostics of binding constraints as being a little ahead of its time. See here for some implementations as well.
Finally, the cheer for free markets has been dampened somewhat by the following books (including this one):
1. Charlton and Stiglitz's Fair Trade for All and Stiglitz's Globalization and Its Discontents - criticisms of free trade and globalization.
2. Jaffe and Lerner's Innovation and Its Discontents, which Chang cites and which criticizes the patent system.
3. Michael Heller's The Gridlock Economy, which challenges the idea that property rights are always good for growth.
Saturday, January 31, 2009
Friday, January 30, 2009
Some new proposals on the future of financial regulation appeared on VoxEU:
1. Luigi Zingales calls for, among other things, regulation of the CDS and CDO markets.
2. Charles Wyplosz summarizes The ICMB-CEPR Geneva Report: “The Future of Financial Regulation”. Some excerpts:
You can’t make the system safe by making each bank safe
The current approach to systemic regulation implicitly assumes that we can make the system as a whole safe by simply trying to make sure that individual banks are safe. This sounds like a truism, but in practice it represents a fallacy of composition. In trying to make themselves safer, banks, and other highly leveraged financial intermediaries, can behave in a way that collectively undermines the system.
As a result, risk is endogenous. Selling an asset when the price of risk increases, is a prudent response from the perspective of an individual bank. But if many banks act in this way, the asset price will collapse, forcing institutions to take yet further steps to rectify the situation. Responses of the banks to such pressures lead to generalised declines in asset prices, and enhanced correlations and volatility in asset markets.
Busts usually follow booms
Financial crashes do not occur randomly, but generally follow booms. Through a number of avenues, some regulatory, some not, often in the name of sophistication and modernity, the role of current market prices on behaviour has intensified.
These avenues include mark-to-market valuation of assets; regulatory approved market-based measures of risk, such as credit default swap spreads in internal credit models or price volatility in market risk models; and the increasing use of credit ratings, which tend to be correlated, directionally at least, with market prices.
In the up-phase of the economic cycle, price-based measures of asset values rise, price-based measures of risk fall and competition to grow bank profits increases. Most financial institutions spontaneously respond by (i) expanding their balance sheets to take advantage of the fixed costs of banking franchises and regulation; (ii) trying to lower the cost of funding by using short-term funding from the money markets; and (iii) increasing leverage. Those that do not do so are seen as underutilising their equity and are punished by the stock markets.
When the boom ends, asset prices fall and short-term funding to institutions with impaired and uncertain assets or high leverage dries up. Forced sales of assets drives up their measured risk and, invariably, the boom turns to bust.
My previous thoughts were here, here, and here. I am surprised that none of the proposals address the fact that increased regulation creates incentives to get around it - for example, the SIVs and SPVs that invested in subprime-backed securities. The prohibition of off-balance-sheet investment vehicles should be one element of the new regulatory regime. This would lead to armies of lawyers and financial engineers trying to circumvent the prohibition. Further regulation prohibiting that circumvention would then be needed, which would then lead to ... well, you get the idea.
Part of Brad Setser's post on capital inflows mentions this:
I think we now more or less know that the strong increase in gross capital inflows and outflows after 2004 (gross inflows and outflows basically doubled from late 2004 to mid 2007) was tied to the expansion of the shadow banking system. ... It was a largely unregulated system. And it was largely offshore, at least legally. SIVs and the like were set up in London. They borrowed short-term from US banks and money market funds to buy longer-term assets, generating a lot of cross border flows but little net financing.
My notes on CDS and CDOs are here, and my notes on the use of VaR numbers to reduce aggregate risk are here. Trying to manage macroeconomic risk using individual bank VaRs is essentially the same as trying to safeguard individual banks. It doesn't really work well.
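To illustrate the fallacy-of-composition point in a toy setting (a sketch of my own, not from the Geneva Report: two banks, normally distributed losses, made-up correlations):

```python
# Toy illustration: individual bank VaR says little about joint tail risk
# once asset returns become correlated (e.g., in a fire sale).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def joint_breach_prob(rho):
    """Probability that BOTH banks breach their own 99% VaR, given correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    losses = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    var99 = np.quantile(losses, 0.99, axis=0)      # each bank's standalone 99% VaR
    both_breach = np.all(losses > var99, axis=1)   # simultaneous breaches
    return both_breach.mean()

for rho in (0.0, 0.5, 0.9):
    print(f"correlation {rho:.1f}: P(both banks breach 99% VaR) ~ {joint_breach_prob(rho):.4f}")
# Roughly 0.0001 when independent, but an order of magnitude higher when
# correlations spike -- each bank's own VaR is unchanged, yet the system is far riskier.
```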
Why neuroeconomics is not yet a science
Jeff Goldberg undergoes an fMRI, which reminds me why neuroeconomics has been overselling itself:
Bin Laden, I was pleased to learn, stimulated predictably negative brain activity, but the neuroscientists were flummoxed by my reaction to the sight of Ahmadinejad, who apparently stimulated, in a most dramatic way, my ventral striatum. “Reward!” Iacoboni said. “You’ll have to explain this one.”
When I couldn’t, Joshua Freedman, who is a practicing psychiatrist, offered a possible explanation: “Perhaps you believe that the Israelis or the Americans have the situation under control and so you’re anticipating the day that he’s brought down.” He asked me some questions about my view of Jewish history, and then said: “You seem to believe that the Jewish people endure, that people who try to hurt the Jewish people ultimately fail. Therefore, you derive pleasure from believing that Ahmadinejad will also eventually fail. It’s very similar to the experiment with the monkey and the grape. It’s been shown that the monkey feels maximal reward not when he eats the grape but at the moment he’s sure it’s in his possession, ready to eat. That could explain your response to Ahmadinejad.”
He paused. “Or it means that you’re a Shiite.”
Andrew Gelman addresses the problem of multiple comparisons and the overhyped "voodoo correlations" of neuroscience in the press.
It's hard for me to believe that the approach based on separate analyses of voxels and p-values, is really the best way to go. The null hypothesis of zero correlations isn't so interesting. What's really of interest is the pattern of where the differences are in the brain. ...
I think the way forward will be to go beyond correlations and the horrible multiple-comparisons framework, which causes so much confusion. Vul et al. and Lieberman et al. both point out that classical multiple comparisons adjustments do not eliminate the systematic overstatement of correlations.
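A quick simulation of the selection effect Vul et al. describe (a sketch under my own assumptions: 16 subjects, 10,000 mostly-noise "voxels", a true correlation of 0.3 in 100 of them):

```python
# Sketch of the "voodoo correlations" problem: select voxels whose sample
# correlation with behavior clears a threshold, then report the correlation
# *in those same voxels* -- the reported number is biased upward.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels, n_signal, true_r = 16, 10_000, 100, 0.3

behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_subjects, n_voxels))
# embed a true correlation of ~0.3 in the first n_signal voxels
voxels[:, :n_signal] = true_r * behavior[:, None] + np.sqrt(1 - true_r**2) * voxels[:, :n_signal]

# sample correlation of each voxel with behavior
b = (behavior - behavior.mean()) / behavior.std()
v = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
r = (b[:, None] * v).mean(axis=0)

selected = r > 0.6                      # "significant" voxels by a naive threshold
print("voxels selected:", selected.sum())
print("average reported correlation in selected voxels:", r[selected].mean().round(2))
print("true correlation in the signal voxels:", true_r)
# The reported correlation lands well above 0.3: selecting and estimating on the
# same data overstates the effect, which is exactly the non-independence problem.
```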
Thursday, January 29, 2009
My CFL died and I'm about to poison the earth
After about a year one of our compact fluorescent bulbs stopped working - emitting a foul odor as it fizzled and died. I've placed it in a non-recyclable (at least in our county) plastic container and am thinking of giving it a decent burial. I will mourn its passing since it died well short of its expected life span.
Unfortunately, burial is not an option since it contains mercury. I guess I'll try our local recycling and waste center.
Wednesday, January 28, 2009
Crime and Section 8
The Atlantic article on crime and the spread of Section 8 vouchers to the suburbs - which, it argues, spreads crime in turn - was compelling though not entirely convincing. For those who believe that you can take the poor out of the crime ghettos but not the crime out of the poor, this article provided the ammunition.
On a theoretical level, the idea is that as more poor people with Section 8 vouchers move out of the inner city and begin to locate close to one another again, they form a new pocket of crime. While one or two households with a Section 8 voucher in a suburb may not result in an increase in crime, ten or more households might be sufficient, because it becomes more likely that at least one or two of them have a criminal past and are more likely to victimize one another and others in the new neighborhood. I actually have a strong prior on this, though it is only theoretical.
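The back-of-the-envelope arithmetic behind that hunch (my own illustrative numbers, not from the article): if some fraction p of voucher households includes someone with a criminal past, the chance that a cluster of N households contains at least one is 1 - (1-p)^N, which rises quickly with N.

```python
# Illustrative only: probability that a cluster of N voucher households
# contains at least one member with a criminal past, assuming independence
# and a made-up per-household probability p.
p = 0.10  # hypothetical share of households with a criminal past
for n in (1, 2, 5, 10, 20):
    prob = 1 - (1 - p) ** n
    print(f"N = {n:2d}: P(at least one) = {prob:.2f}")
# 1 household: 0.10; 10 households: 0.65; 20 households: 0.88 -- one or two
# scattered households look benign, a cluster much less so.
```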
This paper by Jeff Kling and Jens Ludwig ("Is Crime Contagious?") uses data from the MTO (Moving to Opportunity) randomization/demonstration program and finds results that I would characterize as mixed. I call MTO a demonstration program because the HUD site indicates that is what it was, although the analysts who ran it call it a randomized trial. I hesitate to call it a randomized trial because, if memory serves, they had difficulty recruiting households to participate.
The authors conclude:
Our results are not consistent with the idea that contagion explains as much of the across neighborhood variation in violent crime rates as previous research suggests. We do not find any statistically significant evidence that MTO participants are arrested for violent crime more often in communities with higher violent crime rates. Our estimates enable us to rule out very large contagion effects, but not more modest associations. This general finding holds for our full sample of MTO youth and adults as well as for sub-groups defined by gender and age, and it also holds when we simultaneously instrument for neighborhood racial segregation or poverty rates.
I don't know how this translates into the rising "local" crime rates that are of more interest to the police in the Atlantic article. The article is concerned that crime rates rise in the suburbs, and Kling and Ludwig do not test whether crime rates rise in the neighborhoods into which the MTO participants move. The definition of "neighborhood" is one difficulty, although it may be possible to test whether MTO participants are arrested for violent crime more often in communities with lower violent crime rates. It is possible that there was not enough variation in the data to perform this test.
An alternative to nationalization
Ricardo Caballero has an alternative to nationalization, which is for the government to act as an insurer against uncertainty:
... there is a far more efficient solution, which is that the government takes over the role of the insurance markets ravaged by Knightian uncertainty. That is, in our example, the government uses one unit of its own capital and instead sells the insurance to the private parties at non-Knightian prices.
A little background:
There is extensive experimental evidence that economic agents faced with (Knightian) uncertainty become overly concerned with extreme, even if highly unlikely, negative events. Unfortunately, the very fact that investors behave in this manner make the dreaded scenarios all the more likely. (From Part I of the column)
(From Part 2)
... I argue that an efficient solution involves the government taking over the role of the insurance markets ravaged by Knightian uncertainty.
... Knightian uncertainty generates a sort of double-(or more)-counting problem, where scarce capital is wasted insuring against impossible events (Caballero and Krishnamurthy 2008b).
A simple example can reinforce this point. Suppose two investors, A and B, engage in a swap, and there are only two states of nature, X and Y. In state X, agent B pays one dollar to agent A, and the opposite happens in state Y. Thus, only one dollar is needed to honour the contract. To guarantee their obligations, each of A and B put up some capital. Since only one dollar is needed to honour the contract, an efficient arrangement will call for A and B jointly to put up no more than one dollar. However, if our agents are Knightian, they will each be concerned with the scenario that their counterparty defaults on them and does not pay the dollar. That is, in the Knightian situation the swap trade can happen only if each of them has a unit of capital. The trade consumes two rather than the one unit of capital that is effectively needed.
Of course real world transactions and scenarios are a lot more complex than this simple example, which is in itself part of the problem. In order to implement transactions that effectively require one unit of capital, the government needs to inject many units of capital into the financial system.
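A toy restatement of Caballero's two-agent example (nothing here beyond the numbers in the quote; the "government insurance premium" figure is my own, purely for illustration):

```python
# Toy restatement of the swap example above. Only one dollar ever changes
# hands, so one unit of capital is all the system needs. Under Knightian
# fear of counterparty default, each side posts a full unit anyway.
payment_needed = 1.0                                  # the single dollar owed in either state

capital_non_knightian = payment_needed                # efficient arrangement: 1 unit in total
capital_knightian = 2 * payment_needed                # each party insures against the other: 2 units

# Caballero's proposal: the government posts the one unit itself and sells
# counterparty insurance at a "non-Knightian" price (the premium is illustrative).
government_capital = payment_needed
premium_rate = 0.02
premium_income = 2 * premium_rate * payment_needed    # both parties buy cover

print("capital tied up, no Knightian fear :", capital_non_knightian)
print("capital tied up, Knightian fear    :", capital_knightian)
print("capital tied up, gov't insurance   :", government_capital,
      f"(plus {premium_income:.2f} of premium income to the government)")
```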
I am uncertain whether the pricing of the insurance is possible given the uncertainty. I am still in favor of wholesale nationalization as a way to reduce uncertainty quickly. In Part I of his column he said:
Worsening the situation, until very recently, the policy response from the US Treasury exacerbated rather than dampened the uncertainty problem.
Early on in the crisis, there was a nagging feeling that policy was behind the curve; then came the “exemplary punishment” (of shareholders) policy of Secretary Paulson during the Bear Stearns intervention, which significantly dented the chance of a private capital solution to the problem; and finally, the most devastating blow came during the failure to support Lehman. The latter unleashed a very different kind of recession, where uncertainty ravaged all forms of explicit and implicit financial insurance markets.
In short, I am uncertain that the reduction of uncertainty by insuring against uncertainty will not in turn result in even more uncertainty due to implementation issues.
The strongest argument against nationalization comes from this report in WAPO:
Another danger is that by taking over a substantial portion of a bank's stock and wiping out the investment of the firm's other shareholders, the government could also precipitate a sell-off across the banking system as investors flee, fearing they could be next.
Which is why nationalization must be quick and broad, leaving independent only the smallest retail banks that are very clearly not affected by the crisis. Of course, what happens after nationalization is a whole other subject. The end of capitalism, perhaps.
Currently, it is unlikely that the US will take the lead in nationalizing the sector. Perhaps France or the UK will be the leaders here.
Update: Tyler Cowen points out that nationalization might not be cheaper than a bail-out. If the sector suffers a loss whether we bail it out or take it over we still suffer the loss.
Animal Spirits
The Animal Spirits Page has constructed this measure using unemployment rates. I like the approach but am not convinced that unemployment rates can tell us everything. Likewise, I am also skeptical of yield curve approaches, and this is where perhaps psychometrics or factor analysis might lend more insight.
I was reminded of these when Mark Thoma linked to Bob Shiller's piece on consumer confidence. Like Mark, I do not believe that confidence is the cause of business cycles, but I do believe it is a propagation mechanism. Once the Fed failed to contain the collapse of Lehman and AIG, the crisis began to spill over into the real sector. Until then I believed (with no evidence whatsoever) that the subprime problem could have been limited to a small sector of the economy. Perhaps a DSGE model of this might be helpful.
Wintry mix
We got the dreaded wintry mix of freezing rain and ice overnight. It was not as bad as in 1999, when M went into labor with K1, but I was reminded of that when I went outside and found our driveway (with its roughly 30-degree slope) covered in ice. However, it sounds like West Virginia was badly hit. Washington has been in the wintry-mix line for the past few years now, and it may well be a long-term trend given that we are in the midst of climate change.
On the plus side it is days like this that I am glad we widened our driveway enough to give us more negotiating room to shovel and deice the driveway and cars. It was an expensive project (at least more than I had wanted to pay initially) but well worth it.
I think the last time we got snowed in and the federal government shut down was in 2003 (or was it 2000?). In any case the Capital Weather Gang recounts some snowstorms in DC here and here.
Tuesday, January 27, 2009
Google, stupid, reading, writing
This article in the Atlantic, "Is Google Making Us Stupid?", prompted a lot of random thoughts:
1. The ability to quickly search and scan items makes it less likely that I will concentrate on a long article. Then again, the sheer number of links sometimes makes it joyful to jump from page to page. (I like IMDB for this.)
2. Reading on screen is hard, especially for academic articles, where I like to skip pages or move back and forth through them. Perhaps there is reading software (for PDFs) that allows me to "rip" pages off and set them aside to look at later.
3. The same issue of the Atlantic has an article on Rupert Murdoch's purchase of the WSJ and the future of print newspapers in general. It made me think that with shortened attention spans the future is not in dailies but perhaps in weeklies or twice-weeklies (unless there is real breaking news). Dailies will be similar to the Washington Post Express, while the more substantial weeklies will provide the detail. Alternatively, the online versions will provide more detail. One random thought is here.
The problem is declining ad revenue and subscriptions (and costs). The above addresses the print costs of dailies. Ad revenue could potentially be boosted with the free express versions, and subscriptions could increase if I didn't have to deal with newspapers piling up to be read. (For instance, I did not get to my July/Aug issue of the Atlantic until just recently.) Unfortunately, this approach threatens the business of Time and Newsweek, but the styles could be different enough that there is some differentiation.
4. The short attention span also translates to writing. Without a word processor it seems that I am unwilling to write using pencil and paper. Andrew Gelman explores how different writing software has affected his writing. I am interested in how the medium (electronic or otherwise) affects style and ability. I for one would like to write more technical articles, but the thought of firing up an application to write out equations is enough to make me procrastinate.
After being out of grad school for such a long time, I also seem to have lost the ability to use pencil and paper to work out equations and derivations.
Economics can learn from psychometrics
This post by Andrew Gelman, on how statisticians seem to rediscover things that psychometricians discovered a long time ago, made me think that economics could benefit from the study of psychometrics as well. I am thinking in particular of index-number construction and measures of latent ability such as the SAT. Economists use test scores as outcomes all the time yet rarely incorporate psychometric methods into their research.
One possible avenue is the stress of an economy, which was pondered here. Another is comparing the WB Governance Indicators to indices created using psychometric techniques.
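As a concrete sketch of what borrowing from psychometrics might look like (a toy example of my own with simulated indicators, not the WB data): extract a single latent "governance" factor from a handful of correlated indicators rather than averaging them with ad hoc weights.

```python
# Toy example: build a single latent "governance" index from several noisy
# indicators by taking the first principal component (a stand-in for a
# one-factor psychometric model). Data are simulated, not the WB indicators.
import numpy as np

rng = np.random.default_rng(2)
n_countries, n_indicators = 50, 6

latent = rng.standard_normal(n_countries)                      # true underlying quality
loadings = rng.uniform(0.5, 0.9, n_indicators)                 # each indicator measures it imperfectly
indicators = latent[:, None] * loadings + 0.6 * rng.standard_normal((n_countries, n_indicators))

# standardize, then take the first principal component as the index
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]

print("correlation of recovered index with true latent quality:",
      round(abs(np.corrcoef(index, latent)[0, 1]), 2))
# Typically around 0.9: the weights come from the data rather than being
# imposed, which is the basic psychometric idea (IRT and factor models refine this).
```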
First shovel-able snow
It snowed today - it barely covered the ground, and the roads were clear but wet this morning. Montgomery County, where we are, has already closed schools, but K1 and K2 are in school today in DC. For snow lovers it has been a disappointing snow season. As K1 says: "It's still snow."
We still haven't used the sled we bought two years ago, and unfortunately, with schools open for K1 and K2, today will not be the day - and it's supposed to be icy tomorrow.
Friday, January 23, 2009
Did a technology shock cause the subprime crisis?
Or the more widespread financial crisis? I was reminded of this by Robert Skidelsky. He overstates what RBC is (no unemployment), since the current crop of RBC - or rather DSGE - models are quite a bit more sophisticated than that caricature:
In classic business-cycle theory, a boom is initiated by a clutch of inventions – power looms and spinning jennies in the 18th century, railways in the 19th century, automobiles in the 20th century. But competitive pressures and the long gestation period of fixed-capital outlays multiply optimism, leading to more investment being undertaken than is actually profitable. Such over-investment produces an inevitable collapse. Banks magnify the boom by making credit too easily available, and they exacerbate the bust by withdrawing it too abruptly. But the legacy is a more efficient stock of capital equipment.
Dennis Robertson, an early 20th-century "real" business-cycle theorist, wrote: "I do not feel confident that a policy which, in the pursuit of stability of prices, output, and employment, had nipped in the bud the English railway boom of the forties, or the American railway boom of 1869-71, or the German electrical boom of the nineties, would have been on balance beneficial to the populations concerned." Like his contemporary, Schumpeter, Robertson regarded these boom-bust cycles, which involved both the creation of new capital and the destruction of old capital, as inseparable from progress.
Contemporary "real" business-cycle theory builds a mountain of mathematics on top of these early models, the main effect being to minimise the "destructiveness" of the "creation". It manages to combine technology-driven cycles of booms and recessions with markets that always clear (ie there is no unemployment).How is this trick accomplished? When a positive technological "shock" raises real wages, people will work more, causing output to surge. In the face of a negative "shock", workers will increase their leisure, causing output to fall.
... Although Schumpeter brilliantly captured the inherent dynamism of entrepreneur-led capitalism, his modern "real" successors smothered his insights in their obsession with "equilibrium" and "instant adjustments". For Schumpeter, there was something both noble and tragic about the spirit of capitalism. But those sentiments are a world away from the pretty, polite techniques of his mathematical progeny.
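For concreteness, the textbook mechanism behind "a positive shock raises real wages, so people work more" can be sketched with the standard intratemporal condition (a stylized sketch in generic RBC notation, not Skidelsky's):

```latex
% Stylized RBC labor-supply mechanism (textbook notation, not from the article).
% Household period utility: \ln c_t - \chi \frac{n_t^{1+1/\nu}}{1+1/\nu}
% Intratemporal optimality equates the marginal disutility of work to the
% marginal utility of the consumption the extra hour buys:
\chi\, n_t^{1/\nu} \;=\; \frac{w_t}{c_t},
\qquad w_t = (1-\alpha)\, z_t \left(\frac{k_t}{n_t}\right)^{\alpha}.
% A temporary rise in productivity z_t raises the real wage w_t; with
% consumption smoothed, c_t rises by less than w_t, so hours n_t rise and
% output surges -- markets clear throughout, hence "no unemployment."
```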
Update: Mankiw defends the equilibrium approach, while Mark Thoma and Paul Krugman say that the approach is based more on theory than evidence.
Krugman: There’s no ambiguity in either case: both Fama and Cochrane are asserting that desired savings are automatically converted into investment spending, and that any government borrowing must come at the expense of investment — period.
What’s so mind-boggling about this is that it commits one of the most basic fallacies in economics — interpreting an accounting identity as a behavioral relationship.
Thoma: ... we can expect these economists to flail about defending the indefensible, they will be quite vicious at times, and in their panic to defend the work they have spent their lives on, they may not be very careful about the arguments they make.
Also, Robert Waldmann explains the difference between the equilibrium (fresh water) schools and everything else (salt water):
In the field of macroeconomics there is a much deeper division between macroeconomics as practiced at universities closer to the great lakes than to an Ocean (Fresh water economics) and that practiced at universities closer to Oceans (Salt water economics). [Disclaimer: I graduated from a Fresh water university.] ... It is a little difficult to explain the disagreement to non economists. Frankly, I think this is because non-economists have difficulty believing that any sane person would take fresh water economics seriously.
I think he overstates the case that everyone from a fresh water school believes recessions are optimal.
Over near the Great Lakes there is considerable investigation of models in which the market outcome is Pareto efficient, that is, it is asserted that recessions are optimal and that, if they could be prevented, it would be a mistake to prevent them.
Although the responses to exogenous technology shocks are optimal and Pareto efficient within these models, even in the fresh water schools there is some disbelief that technology shocks cause everything.
An Ideological Turf War was an interesting read:
I was a little confused by this, though (even though I agree with its tenor - by which I mean that the fresh water schools are sounding a little defensive):
What's different this time, and it's a difference I hope will bring about some humility, is that the wreckage is not from the Keynesian model crashing, this time it is the Classical formulation of the world that is being called into question. Once the proponents of these models are willing to concede that point, something they are currently resisting, maybe we can come together and get somewhere useful.
Which Classical formulation has failed?
1. The assumption that all agents are rational and like solving dynamic stochastic general equilibrium problems?
2. That markets are efficient (weak/strong form?) and clear at every point in time?
3. The idea of free markets/competition itself? I.e., more competition need not be good?
4. Deregulation? I guess this is tied into (3), because deregulation per se is usually not a feature of real business cycle models.
Swift sudden collapse
The financial crisis (for instance in Iceland) has sometimes been described as "swift" or "sudden". This is also the case with the U.S. While the collapse itself can be seen as swift or sudden, does it also mean - as some commentators have indicated - that it was "unexpected"? Swiftness or suddenness does not mean unexpected. A series of missteps or wrong turns can accumulate, and the economy can collapse suddenly.
The story in Fortune on Iceland indicates that this may well be the story. As far back as 2006:
In April 2006 the rating agency Fitch abruptly downgraded its outlook for Iceland, citing concerns about the banks. Investors panicked, and the currency and the stock market both plunged 25% in a matter of days.
Compounded by what the article characterizes as "personality issues":
... says a senior official at the ECB in Frankfurt, "if you want to put in place a swap agreement with us, you call [ECB president Jean-Claude] Trichet and make an appointment to come to see him. Then you fly to Frankfurt to discuss your request and explain how it is supposed to work. You need to win him over. Oddsson didn't even try."
Those who say that the subprime crisis could not be predicted or was unforeseeable are incorrect, since various indicators had been showing that the economy was under stress for some time. (Roubini had warned of the impending crisis, for instance.) Of course, the timing of the collapse, if that is what we are after, is unpredictable.
In many economic models, such as early warning models, crisis dates are modelled as binary variables (0 if the country is not in crisis in that quarter and 1 otherwise). Alternatively, they are modelled as regime changes, when the economy switches from calm to crisis. In these types of models the variable of interest is the predicted probability of a crisis. How well the model performs depends on whether the predicted probability (phat) can actually predict the crisis states. In some papers the plots of phat show sudden changes, which is actually built into the model by assumption. In this sense the collapse is "unpredicted". In early warning models, phat can perform well or not depending on the covariates, but in general I think that both approaches are not very informative if we focus on phat.
The underlying covariates in early warning models tell us something about the economy - e.g., stress. Construct an indicator (possibly using factor analysis) and use that as a predictor of a crisis. What we might want is the answer to the following question: "If an economy is under stress, how much stress can it withstand before it falls into crisis?" In this case we are looking for some cutoff value of the stress indicator (just as we look for a cutoff value of phat in early warning models), and this is more informative than phat alone.
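A sketch of what this could look like in practice (simulated data, and the variable names are mine, not from any early warning paper): compress several stress-related covariates into a single factor-style index, then pick the cutoff that best separates crisis quarters from calm ones.

```python
# Sketch only: a "stress index" as the first principal component of several
# macro covariates, with a crisis cutoff chosen to best separate simulated
# crisis quarters from calm ones. All data and covariate names are made up.
import numpy as np

rng = np.random.default_rng(3)
T = 200                                             # quarters
stress = np.cumsum(rng.standard_normal(T)) * 0.3    # unobserved "true" stress
crisis = (stress > np.quantile(stress, 0.9)).astype(int)

# observed covariates: each is a noisy function of true stress
names = ["credit_growth", "st_debt_ratio", "reer_misalign", "spread"]
X = np.column_stack([stress + rng.standard_normal(T) for _ in names])

# stress index = first principal component of the standardized covariates
z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]
if np.corrcoef(index, stress)[0, 1] < 0:            # the sign is arbitrary; orient it
    index = -index

# choose the cutoff that maximizes correct classification of crisis quarters
cutoffs = np.linspace(index.min(), index.max(), 200)
accuracy = [((index > c).astype(int) == crisis).mean() for c in cutoffs]
best = cutoffs[int(np.argmax(accuracy))]
print(f"best cutoff ~ {best:.2f}, share of quarters classified correctly ~ {max(accuracy):.2f}")
# The loading vector vt[0] gives each covariate's weight in the index -- one
# way to decompose it and to ask whether the sources of stress shift over time.
```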
Unfortunately, the value of a stress indicator is only slightly more informative, since it really doesn't tell us much about what is causing the stress. A decomposition of the index would be required. However, this approach may lead to better information on how an economy collapses. If an economy collapses without any change in the indicator, then something is missing from the indicator. Alternatively, perhaps the factor weights are changing over time. All this gets into the black box of a crisis in a way that early warning models do not.
Alternatively, one can argue that phat from early warning models already constitutes such a stress indicator. The logistic regression approach behind phat does not allow the analyst to explore changing factor weights - although, in fairness, neither does standard factor analysis.
The idea here is that whether the economy is under stress depends on a bunch of factors. Over time, as regulators focus on one aspect, its importance as a possible contributory cause decreases while others may rise, and this is the aspect that is not well captured by current models.
War of the economists
War - Or why the public cannot trust economists redux:
Matt Yglesias has a very good post on Robert Barro's latest. Brad DeLong seems to agree with Matt. Paul Krugman uses the word "boneheaded" to describe the Barro piece.
This exchange is a good micro-cosm of how the stimulus debate has proceeded. A highly respected anti-stimulus economist puts up some anti-stimulus evidence in a highly imperfect test (in Barro's defense, he did cover more than just WWII). The anti-stimulus economist is attacked by pro-stimulus economists. But the pro-stimulus proponents are focused on attack. They are not putting up comparable empirical evidence of their own for the efficacy of fiscal policy and there is a reason for that, namely that the evidence isn't really there.
The pro-stimulus economists have already put up their evidence. The anti-stimulus economists just don't like it. Likewise, the pro-stimulus camp doesn't like the evidence of their opponents. There is great uncertainty over the evidence on both sides - the tax-cut advocates and the fiscal-stimulus advocates. There is some sense that we might want to try both. (Scroll down to see that Mark Thoma does not object to tax cuts per se, but he feels that spending on public projects is long overdue - tax cuts won't build schools, for instance.) Unfortunately, this approach can divide resources to the point that neither is effective. (I'm in the "both" camp even though it may end up being ineffective.)
For instance, Warren Buffett via MR: "All you know is you throw everything at it and whether it’s more effective if you’re fighting a fire to be concentrating the water flow on this part or that part. You’re going to use every weapon you have in fighting it."
Being in the "both" camp then requires us to find projects that are beneficial. Why build schools if they are not needed? In this case, the paper by Linda Bilmes is relevant. Corruption can become a problem. However, if I were in the pro-stimulus camp I would consider the findings of this paper interesting but tangential. Whether or not fiscal funds are put to "good" use is irrelevant as long as they get recycled into the economy. Digging holes and then filling them up again is a Keynesian prescription for getting out of a liquidity trap.
The pro-tax-cut camp also wants to use tax cuts to spur investment. Seeing that we had almost a decade of "spurred" residential investment that accompanied the housing bubble, I question the need to "spur" more investment unless the pro-tax-cut advocates are proposing that the government "cause" another investment bubble in public-works-type projects. Perhaps in that case companies like Bechtel, Siemens and various construction companies will become beneficiaries. And "hopefully" we'll have another bubble that will lift us out of the recession. After all, during the dot-com bubble there was a lot of IT "investment", e.g. laying down fiber optic cables, etc. (At least I think there was. I need to find a reference for this.)
Update: Menzie Chinn has some diagrammatic expositions of fiscal policy analyses, which I found useful. Again, these are theoretical arguments and, as far as I can tell, there are no empirical estimates of the slopes in IS-LM models.
Update: This is getting fun!
Rodrik seems to be anti-stimulus, but what he says here makes sense as well (emphasis mine):
And if I am right on the remaining source of disagreement, we can say two things. First, there is in fact a reasonable consensus about the economics of the situation (as described by both Cochrane and Krugman, although they do use different words). And second, the remaining disagreements are largely philosophical, political, and practical--revolving around the role of government, the extent of rent-seeking and public-choice concerns in government programs, and the right mixture of prudence and boldness that the situation requires.
It wouldn't be the first time that economists are discussing such questions--for which their PhDs have done little to qualify them--in the guise of discussing economics. But it would be too bad if disagreements on the second score obscure the apparent convergence on the former.
My argument against Cochrane's piece:
We are experiencing a strong portfolio and precautionary demand for government debt, along with a credit crunch. People want to hold less private debt and they want to save, and they want to hold Treasuries, money, or government-guaranteed debt.
Is this an assertion, an assumption, or something based on evidence? It sounds plausible, but if there is any movement by economics toward being "evidence-based", this is not one of its finer moments. See also Brad DeLong's claim that Cochrane is making "an elementary, freshman mistake."
Wednesday, January 21, 2009
Economic indoctrination
One of Ariel Rubinstein's dilemmas was depicted in the Layoff Survey, where respondents (mainly students) were asked how many people to lay off in a hypothetical situation. He found that Econ students were more likely to choose the profit-maximizing solution (and hence lay off more) than either Philosophy or MBA students. He concludes:
The interpretation of the results cannot be separated from one’s personal views regarding the behavior of economic agents in such a situation. If you believe that the managers of a company are obligated morally or legally to maximize profits, then you should probably praise economics for how well it indoctrinates its students and be disappointed that so many of them still do not maximize profits. On the other hand, if you approach the results with the belief that managers should also take into account the welfare of the workers, particularly when the economy is in recession and unemployment is high, then you probably feel uncomfortable with the results.
I think that this is one of the problems with using students. It is possible that once the students are out of school the effects of indoctrination might be ameliorated. Having come from the economics department at Rochester (a very free-market school) and having been out of economics for almost 10 years, I have no problem entertaining the following seriously (whereas I would not have in the first few years out of school):
1. Nationalization of the financial industry
2. Health care is a right, not a privilege (Bumper Sticker)
3. Living wages
4. Free trade is overrated
It is also possible that the Econ students were not responding truthfully but responding in the way they thought the questioner wanted them to respond. For instance, they may have viewed the survey as a "test" of how well they had learned economics. This may be an instance where asking the question is itself distorting.
Bottom line: I'm not as worried about indoctrination as Prof. Rubinstein is, as long as Econ students don't remain in academia. That is when they move from being indoctrinated to being ideologues.
Solving the financial crisis fictionally
Is it possible for the government to solve this crisis by engaging in a massive attempt to fool the public? These caught my eye:
1. Accounting Standards Wilt Under Pressure (From WaPo) In October, largely hidden from public view, the International Accounting Standards Board changed the rules so European banks could make their balance sheets look better. The action let the banks rewrite history, picking and choosing among their problem investments to essentially claim that some had been on a different set of books before the financial crisis started.
The results were dramatic. Deutsche Bank shifted $32 billion of troubled assets, turning a $970 million quarterly pretax loss into $120 million profit. And the securities markets were fooled, bidding Deutsche Bank's shares up nearly 19 percent on Oct. 30, the day it made the startling announcement that it had turned an unexpected profit.
2. ShadowStatistics (debunked).
Voting and rationality
Andrew Gelman had a series of posts on voting and rationality: here, here, here, and here, for instance. Wading into very unfamiliar territory for me, the question is: Why isn't the Nash equilibrium in a voting game to vote?
Consider an N-person game with two pure strategies of vote or not vote over two candidates.
1. Suppose nobody votes. This cannot possibly be a Nash equilibrium since any one person can vote and the vote will be decisive.
2. Thus the NE must be to vote.
Consider the N1+N2=N person game where N1 supports candidate 1 and N2 supports candidate 2.
1. Suppose no one from either N1 or N2 votes. Again, any one player from the N1 or N2 coalition will deviate from this strategy of not voting and that vote will be decisive.
2. Again the NE must be to vote.
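A brute-force way to check this reasoning on a toy version of the game (the payoffs, the 2-versus-2 electorate, and the assumption that voting is costless are all my own stand-ins, not from any paper):
# Payoff to a supporter of candidate 'side': 1 if their candidate wins, 0.5 for a tie, 0 otherwise.
payoff <- function(votes, side, n1 = 2, n2 = 2) {
  v1 <- sum(votes[1:n1]); v2 <- sum(votes[(n1 + 1):(n1 + n2)])
  if (v1 > v2) as.numeric(side == 1) else if (v2 > v1) as.numeric(side == 2) else 0.5
}
nobody <- rep(0, 4)
payoff(nobody, side = 1)                      # 0.5: a tie when no one votes
deviate <- nobody; deviate[1] <- 1
payoff(deviate, side = 1)                     # 1.0: a lone voter is decisive, so "nobody votes" fails
everyone <- rep(1, 4)
abstain <- everyone; abstain[1] <- 0
payoff(everyone, side = 1); payoff(abstain, side = 1)   # 0.5 versus 0: no gain from abstaining
With zero voting costs, deviating from "everyone votes" never pays in this toy game, which is consistent with the argument above; the costly-voting literature mentioned below is what complicates it.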
Unfortunately, I have not been able to find out whether my reasoning is correct in this simple game. The economics literature is full of papers on costly voting where each voter incurs a cost C of voting (either uniformly distributed or not), imperfect information on the size of N1 and N2, strategic voting, coalition formation, stability of the core, etc.
I find the above simple and straightforward enough for me to explain why people vote. Why do economists have to cloud the issue? I guess it must make for fancier math and more publications.
Causality and cure
This Econbrowser post states: "How you think we might get out of our current economic problems has something to do with how you think we got into them in the first place."
This implies that the cure is determined by its cause (or causes). So if there are multiple causes we would treat all of them. But by some accounts (e.g. Acemoglu/Johnson) there are also contributory factors (not necessarily causes) that should be considered - for example, the Great Moderation made policymakers complacent. When there are many factors and causes, do we address them all equally or are some more important than others? How do we determine the size of the effect of these factors/causes? Should contributory factors get a smaller weight than "causes"?
And what if there are feedback effects, for instance, if mark to market were a feedback effect, should we "short-circuit" this feedback loop? Or what if credit downgrades caused feedback effects, for instance as in AIG?
Finally, how do we model all these when models have been discredited? Do we return to structural equation models?
Update: Eswar Prasad and Brad Setser explore the global roots of the current financial crisis. If global imbalances are a proximate cause that created the liquidity for the subprime bubble, then are capital controls the answer?
Tuesday, January 20, 2009
Inauguration Day
Can't help but be moved by the pictures of the thousands of people turning out for the Inauguration. K1's piano teacher said she never thought she would see the day that an African American would become president. Neither did I. I thought that conservative America would never vote for an African American and that the liberals could talk the talk, but when it came down to it they would sooner choose Hillary Clinton than an African American.
The hardest road is still ahead. What legacy would Obama leave if he fails? Would it lead to a backlash against blacks? Or is this really full steam ahead for equal rights?
There is so much hope today and it is such a great contrast to the feelings I had on 9/11.
Update: Elizabeth attended and gave us a view of what she got to see (barely anything!), but I agree that it was more the look on people's faces, the feel of the crowds, the sound of the cheering and the sense of hope that mattered. If this had been before kids, I might have been tempted to schlep down to the Washington Monument myself.
Friday, January 16, 2009
VAR numbers as road signs
A previous post mentioned the notion that seeing a VAR number too often can cloud the user's judgement and lead him to accept it as truth. One can make the same analogy to road signs - from this article ("Distracting Miss Daisy") by John Staddon:
...the overabundance of stop signs teaches drivers to be less observant of cross traffic and to exercise less judgment when driving—instead, they look for signs and drive according to what the signs tell them to do. ... But this is emblematic of the sort of signage arms race that has become necessary in the U.S. When you’ve trained people to drive according to the signs, you need to keep adding more signs to tell them exactly when and in what fashion they need to adjust their behavior. Otherwise, drivers may see no reason why they should slow down on a curve in the rain.
Do more road signs make roads safer or prevent more accidents?
In 1949, a British statistician named R. J. Smeed, who would go on to become the first professor of traffic studies at University College London, proposed a now-eponymous law. Smeed had looked at data on traffic fatalities in many different countries, over many years. He found that deaths per year could be predicted fairly accurately by a formula that involved just two factors: the number of people and the number of cars. The physicist Freeman Dyson, who during World War II had worked for Smeed in the Operational Research Section of the Royal Air Force’s Bomber Command, noted the marvelous simplicity of Smeed’s formula, writing in Technology Review in November 2006: “It is remarkable that the number of deaths does not depend strongly on the size of the country, the quality of the roads, the rules and regulations governing traffic, or the safety equipment installed in cars.” As a result of his research, Smeed developed a fatalistic view of traffic safety, Dyson wrote.
So should we abolish VAR numbers?
A few European towns and neighborhoods—Drachten in Holland, fashionable Kensington High Street in London, Prince Charles’s village of Poundbury, and a few others—have even gone ahead and tried it. They’ve taken the apparently drastic step of eliminating traffic control more or less completely in a few high-traffic and pedestrian-dense areas. The intention is to create environments in which everyone is more focused, more cautious, and more considerate. Stop signs, stoplights, even sidewalks are mostly gone. The results, by all accounts, have been excellent: pedestrian accidents have been reduced by 40 percent or more in some places, and traffic flows no more slowly than before.
What I propose is more modest: the adoption of something like the British traffic system, which is free of many of the problems that plague American roads. One British alternative to the stop sign is just a dashed line on the pavement, right in front of the driver. It actually means “yield,” not “stop”; it tells the driver which road has the right of way. Another alternative is the roundabout. ... A “mini-roundabout” in the U.K. is essentially just a large white dot in the middle of the intersection. In this form, it amounts to no more than an instruction to give way to traffic coming from the right (that would be the left over here, of course, since the Brits drive on the left). ... most right-of-way signs are informational: there are almost no mandatory stops in the U.K. (The dominant motive in the U.S. traffic-control community seems to be distrust, and policies are usually designed to control drivers and reduce their discretion. The British system puts more responsibility on the drivers themselves.)
The above quote leads to the suggestion that banks use their own VAR numbers to set their own capital requirements - which is indeed what they did, and it did not prevent the crisis. A 99 percent chance that Bank A will not lose more than 50 million today, without accounting for overall economic conditions (or correlated risk), is essentially meaningless. Perhaps financial authorities should spend more time providing direction on whether daily VARs need to be adjusted to account for economic conditions than on the numbers themselves. For instance, if an economy shows signs of stress then authorities need to provide direction as to how VAR numbers should be adjusted.
However, this suggestion is subject to the same criticisms as the question of whether the Fed should pop asset price bubbles. But rather than using interest rates, setting VAR limits or making VAR adjustments can trigger an overall audit of the financial sector, where participants can disagree on whether the VAR limit makes sense in the same way that BCS ratings are dissected. This way the authorities are in some sense ahead of the game without causing the disruption of an increase in interest rates.
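A rough sketch of the kind of adjustment being suggested, comparing a plain historical-simulation 99 percent VAR with one computed only over periods flagged as stressed; the P&L series and the stress flag are simulated, and the numbers mean nothing beyond illustration.
set.seed(3)
calm     <- rnorm(750, mean = 0, sd = 0.01)        # daily P&L in calm conditions (simulated)
stressed <- rnorm(250, mean = 0, sd = 0.03)        # fatter swings under stress (simulated)
pnl    <- c(calm, stressed)
stress <- c(rep(FALSE, 750), rep(TRUE, 250))       # whatever stress flag the authorities publish
var99          <- -as.numeric(quantile(pnl, 0.01))           # unconditional 99% VAR
var99_adjusted <- -as.numeric(quantile(pnl[stress], 0.01))   # VAR using only the stressed days
c(var99 = var99, var99_adjusted = var99_adjusted)            # the adjusted number is much larger
The direction from the authorities would essentially be which of these two numbers capital requirements should key off.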
The future of models
With the financial crisis and the blame being heaped onto models, I wonder whether there is any future for models and what future models might look like.
One possibility might be the advance of agent-based modeling, which has not made much headway against DSGE models or even heterogeneous-agent DSGE models. Leigh Tesfatsion has been a contributor to this field for a long time. Tyler Cowen maligns it:
What's the important innovation behind intelligent agent modeling? To introduce lots of arbitrary assumptions about behavior? Greater realism? Complexity? Considerations of computability? Learning? We already have enough "existence theorems" as to what is possible in models, namely just about everything. The CGE models already have the problem of oversensitivity to the initial assumptions; in part they work because we use our intuition to calibrate the parameters and to throw out implausible results. We're going to have to do the same with the intelligent agent models and the fact that those models "sound more real" is not actually a significant benefit.
What can be done will be done and so people will build intelligent models for at least the next twenty years. But it's hard for me to see them changing anyone's mind about any major outstanding issue in economics. What comes out will be a function of what goes in. In contrast, regressions and simple models have in many cases changed people's minds.
But Alex Tabarrok is more optimistic:
I see bringing experimental economics and I-A modeling closer as an important goal with potentially very large payoffs. Here, for example, is my model for a ground-breaking paper.
1) Experiment
2) I-A replication of experiment (parameterization)
3) I-A simulation under new conditions
4) Experiment under the same conditions as 3 demonstrating accuracy of simulation
5) I-A simulation under conditions that cannot be tested using experiments.
I am also more optimistic since reading about swarm models. (See an old post; a toy example of what such a simulation might look like follows the list below.) I'd complement Alex's approach with the growing amounts of data that are becoming available. For instance, the following claims are made via Andrew Gelman:
1. More data beats better algorithms (Some agreement.)
2. The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (Dissent and agreement within link.)
3. Some convergence in using priors (intuition), large databases, visualization and modeling language.
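For what it's worth, here is a deliberately tiny agent-based sketch - a herding model of my own invention, not anything from Tesfatsion or Tabarrok - just to show how little machinery it takes for imitation among agents to generate boom-and-bust style swings:
set.seed(5)
n_agents <- 500
n_steps  <- 300
opinion  <- sample(c(0, 1), n_agents, replace = TRUE)     # 1 = "buy", 0 = "sell"
share_buying <- numeric(n_steps)
for (t in seq_len(n_steps)) {
  imitate     <- runif(n_agents) < 0.95                   # most agents copy someone else...
  role_models <- sample(n_agents, n_agents, replace = TRUE)
  opinion <- ifelse(imitate, opinion[role_models],
                    sample(c(0, 1), n_agents, replace = TRUE))  # ...the rest flip at random
  share_buying[t] <- mean(opinion)
}
plot(share_buying, type = "l", xlab = "step", ylab = "share of agents buying")
Whether output like this changes anyone's mind is, of course, exactly Cowen's question.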
Thursday, January 15, 2009
Model builders and model users
First, John Quiggin poses the question: Bad Models or Bad Modelers? If an underlying assumption of the model is bad, is the model at fault or is the modeler at fault? After all, models don't make assumptions; people do. Why should we accept what the models tell us without close examination? Every year computer rankings of college football teams are generated and these rankings are constantly being disputed. There is a healthy disrespect for models in college football that does not seem to carry over to finance and economic models.
Second, from Joe Nocera (NYT):
There were the investors who saw the VaR numbers in the annual reports but didn’t pay them the least bit of attention. There were the regulators who slept soundly in the knowledge that, thanks to VaR, they had the whole risk thing under control. There were the boards who heard a VaR number once or twice a year and thought it sounded good. There were chief executives like O’Neal and Prince. There was everyone, really, who, over time, forgot that the VaR number was only meant to describe what happened 99 percent of the time. That $50 million wasn’t just the most you could lose 99 percent of the time. It was the least you could lose 1 percent of the time. In the bubble, with easy profits being made and risk having been transformed into mathematical conceit, the real meaning of risk had been forgotten. Instead of scrutinizing VaR for signs of impending trouble, they took comfort in a number and doubled down, putting more money at risk in the expectation of bigger gains. “It has to do with the human condition,” said one former risk manager. “People like to have one number they can believe in.”
(see a critique of the article here)
There is some truth to the notion that once an unsophisticated user has been exposed to a concept long enough, the concept can become the TRUTH, and in a sense this is what has happened to VAR. Even the more sophisticated users/modelers who understand the assumptions behind the models will be lulled into complacency if the novice users, e.g. CEOs, don't take the trouble to understand the models and act as though the models had no limitations. If my boss is not worried then why should I worry?
The incentive to worry about the 1 percent is also not present. Why devote resources to the small probability of a catastrophe when everyone else isn't doing it? After all, ‘a sound banker, alas, is not one who foresees danger and avoids it, but one who, when he is ruined, is ruined in a conventional and orthodox way with his fellows, so that no-one can really blame him.’ (Keynes)
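Nocera's "least you could lose 1 percent of the time" point is easy to see with a few lines of simulation; the fat-tailed P&L series and the dollar scale below are invented purely for illustration.
set.seed(4)
pnl  <- rt(10000, df = 3) * 1e6              # heavy-tailed daily P&L, in dollars (simulated)
loss <- -pnl
var99 <- as.numeric(quantile(loss, 0.99))    # the "most you can lose 99 percent of the time"
es99  <- mean(loss[loss > var99])            # the average loss on the remaining 1 percent of days
c(VaR_99 = var99, avg_loss_beyond_VaR = es99)   # the second number is far larger than the first
That 1 percent tail is exactly where the incentive to look is weakest.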
As Nocera points out in the article, many don't believe that VAR models are useless, but there is an element of human judgement that needs to be applied every time the numbers are scrutinized. So, should all risk models come with a warning, e.g. "This model will only behave as it has been programmed to behave. Use at your own risk"?
Unfortunately, disclaimers such as these are so ubiquitous - almost like the end-user license agreements that appear when software is installed - that I almost never read anything labeled Terms and Conditions or Disclaimer any more.
There is a human element to all financial crises and it is neither stupidity nor ignorance. It is greed.
Reading Jennifer McMahon
Both Promise Not To Tell and Island of Lost Girls were good. They were both page-turners, although the second was a little more predictable in terms of its ending. Both books interleave past and present, so the reader switches back and forth between the two. Fortunately both threads advance the story, so it was not disconcerting for me. I enjoyed both books but I'm not sure if I'd read another one of her future books. The style and approach are getting a little stale, as is the type of story she tells. She is quite a good writer, though, so I think she might be able to pull off another one in this genre very well, so who knows.
Wednesday, January 14, 2009
American Shaolin
Read Matthew Polly's American Shaolin - the time he spent in China learning martial arts at the Shaolin Temple. It was a good read and I liked it better than some of the travel articles that he had written on Slate. It was good to have the chapters organized under his experiences (sometimes compared to his expectations as an American) rather than in chronological order.
It brought back some memories of growing up:
1. Wong Fei Hung shows on TV played by Kwan Tak Hing. See also here. The most recent version I saw was Jet Li in Once Upon A Time In China.
2. The book also mentions Jet Li's first movie Shaolin Temple and how the movie craze then swept the Wushu world.
3. The One Armed Swordsman played by Ti Lung.
4. The many Police Story movies by Jackie Chan and Sammo Hung
R wish list
Was part of an R discussion - I haven't really used R except for the VARS package, and I've been trying to make the switch from SAS but it hasn't worked because of all the pre-processing that I have to do to get a data set ready for analysis.
My wish list for R is as follows (and they may already be there just not to my mediocre knowledge or quick Google searches of the R discussion list):
1. An input statement for processing text files, as in SAS - this is key for reading public-use files, which are usually very large, without reading the entire file via read.table or the Fortran-style syntax for reading files.
2. Several commenters noted that you can read files without using data frames, but I was not able to find a reference to it on the R discussion list. I'm guessing this is achieved using vectors or matrices but haven't quite figured it out yet.
3. A first-dot and last-dot syntax similar to SAS, or an egen command similar to Stata's. (A rough workaround is sketched after this list.)
4. It would be nice if the R foreign package had a keep or drop option so that I don't have to read the entire data set into memory. I tried to read the public-use version of the World Values Survey data, which was in Stata xpt format, but the memory limitations on my computer couldn't handle it.
I realize that R is NOT a data processing package and something like Perl could also work, BUT it's always nice to have everything integrated instead of having to deal with two languages and porting back and forth between them to do what I consider basic data processing tasks. I consider data analysis 90 percent processing and 10 percent analysis, and then another 100 percent fooling around with different packages to get the results I want.
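For items 1 and 3 (and the keep-only-what-you-need spirit of item 4), here is the rough workaround I have in mind; the file name, the tab delimiter, and the column names are all hypothetical, and this is obviously not a full substitute for a SAS input statement:
con <- file("big_public_use_file.txt", open = "r")
header <- strsplit(readLines(con, n = 1), "\t")[[1]]          # assumes a tab-delimited file
repeat {
  lines <- readLines(con, n = 50000)                          # item 1: read in chunks, not all at once
  if (length(lines) == 0) break
  tc <- textConnection(lines)
  chunk <- read.table(tc, sep = "\t", col.names = header, stringsAsFactors = FALSE)
  close(tc)
  chunk <- chunk[, c("id", "income")]                         # keep only the columns you need
  chunk$first_id <- !duplicated(chunk$id)                     # item 3: SAS-style first.id / last.id,
  chunk$last_id  <- !duplicated(chunk$id, fromLast = TRUE)    # assuming the file is sorted by id
  chunk$mean_inc <- ave(chunk$income, chunk$id, FUN = mean)   # item 3: an egen-style group mean
  # ... aggregate or append the processed chunk somewhere here; ids that straddle
  # chunk boundaries would need extra care ...
}
close(con)
Still clunkier than a SAS data step, which is rather the point of the wish list.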
Thoughts on randomization
Mostly triggered by Chris Blattman's advice to PhDs:
"The randomized evaluation is just one tool in the knowledge toolbox. It's currently the rage, but that means it will probably be old news by the time you finish your PhD."
One of the problems with randomized trials is that they are a black box. We understand very little, or we may think we understand a lot. There is also a lot of potential subgroup interaction that needs to be tested.
All this points to the fact that if we have to do a randomized trial then we don't really understand the mechanism of how the treatment works (and this also applies to medical "science"/drug therapy, etc.). And if it does work to our expectations then it validates our priors and perhaps advances the field a little. But does it really advance our understanding of the underlying causal mechanism? All we can point to are suggestions that our limited understanding is validated, but we could still be spectacularly wrong.
Another problem with randomized trials is that a single trial is usually never the last word. (Perhaps repeated randomized trials can provide the last word, but rarely one randomized trial.) Again, this is because if a theory accords with my priors and the trial rejects my hypotheses, it doesn't seem to lower my priors as much as it should - mainly because the "theory" sounds so sensible and plausible. So it must be something about the way the trial was conducted. For instance, consider the effects of Head Start on children and the disappointing First Year results - yet the underlying premise of Head Start is so strong that it will not go away.
Randomized trials also do not address the question: How will it work for me? And this is particularly true for drugs. I really do not care about the average treatment effect, or even the average treatment effect for my subgroup. And it is this thinking that leads to experimentation and continuing treatment using "less than acceptable" methods or alternative methods. This would be the test of our understanding - if we can predict individual results then we can claim to have the final word on causality.
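The gap between the average effect and "how will it work for me" is easy to demonstrate with a simulated trial; the effect sizes and the single subgroup below are made up, and a real trial would of course be messier:
set.seed(6)
n <- 2000
subgroup <- rbinom(n, 1, 0.5)            # 1 = the group the treatment actually helps (hypothetical)
treated  <- rbinom(n, 1, 0.5)            # randomized assignment
outcome  <- 1 + treated * ifelse(subgroup == 1, 2, -1) + rnorm(n)   # +2 for one group, -1 for the other
coef(lm(outcome ~ treated))["treated"]   # the ATE comes out around +0.5 and looks mildly beneficial
coef(lm(outcome ~ treated * subgroup))   # "treated" is about -1 (the effect in subgroup 0);
                                         # "treated" plus "treated:subgroup" is about +2 (subgroup 1)
Even this only recovers subgroup averages, not the effect on any particular individual, which is the point.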
"The randomized evaluation is just one tool in the knowledge toolbox. It's currently the rage, but that means it will probably be old news by the time you finish your PhD."
One of the problems with randomized trials is that is is a black box. We understand very little or we may think we understand a lot. There is also a lot of potential subgroup interaction that needs to be tested.
All this points to the fact that if we have to do a randomized trial then we don't really understand the mechanism of how the treatment works (and this also applies to medical "science"/drug therapy, etc.). And if it does work to our expectations then it validates our priors and perhaps advances the field a little. But does it really advance our understanding of the causal underlying mechanism? All we can point to are suggestions that our limited understanding is validated but we could still be spectacularly wrong.
Another problem with randomized trials is that it usually is never the last word. (Perhaps repeated randomized trials can provide the last word, but rarely one randomized trial.) Again, this is because if a theory accords with my priors and the results of my hypotheses are rejected it doesn't seem to lower my priors as much as it should - mainly because the "theory" sounds so sensible and plausible. So it must be something with the way the trial is conducted. For instance, the effects of Head Start on children and the disappointing First Year results - yet the underlying premise of Head Start is so strong that it will not go away.
Randomized trials also do not address the question: How will it work for me? And this is particularly true for drugs. I really do not care about average treatment effects of the average treatment effects on my subgroup. And it is this thinking that leads to experimentation and continuing treatment using "less than acceptable" methods or alternative methods. This would be the test of our understanding - if we can predict individual results then we can claim to have the final word on causality.
Tuesday, January 13, 2009
Dilemma of an economist
This is from Ariel Rubinstein's Dilemma of An Economic Theorist but I think it applies to economists as a whole:
"What on earth am I doing? What are we trying to accomplish as economic theorists? We essentially play with toys called models. We have the luxury of remaining children over the course of our entire professional lives and we are even well paid for it. We get to call ourselves economists and the public naively thinks that we are improving the economy’s performance, increasing the rate of growth, or preventing economic catastrophes. Of course, we can justify this image by repeating some of the same fancy sounding slogans we use in our grant proposals, but do we ourselves believe in those slogans?"
By the way, Ariel Rubinstein's website is like walking into what I thought would be a run-of-the-mill bookstore and discovering an amazing place for browsing. Check out his Cafe Poster.
"What on earth am I doing? What are we trying to accomplish as economic theorists? We essentially play with toys called models. We have the luxury of remaining children over the course of our entire professional lives and we are even well paid for it. We get to call ourselves economists and the public naively thinks that we are improving the economy’s performance, increasing the rate of growth, or preventing economic catastrophes. Of course, we can justify this image by repeating some of the same fancy sounding slogans we use in our grant proposals, but do we ourselves believe in those slogans?"
By the way, Ariel Rubinstein's website is like entering what I might have thought to be a run of the mill book store but discovered it to be an amazing place for browsing. Check out his Cafe Poster.