Tuesday, April 12, 2011

On Summers’ silly defence of silly economics

Yesterday I wrote about Larry Summers’ rules for knowing which economics research to dismiss when you are looking for valid and useful insights. However, he didn’t want to criticize the nonsense too hard:

On the other hand, he pointed out that while there was clearly a need to be prudent while applying research to the real world, it would also be unwise to attack it wholesale. He surmised that it might be possible that some things that seem useless or of limited applicability now would turn out to be useful in years to come (microfoundations for macroeconomics, perhaps?).

This last caveat is one I’ve frequently encountered in two contexts: From people who want to defend basic (natural) science, and from people who want to defend some discipline in economics that is just plain wacky. The argument is the same: It might turn out to be useful in the future.

Though true in the strict sense (I can’t rule out possible value coming from this research), the argument is frequently a “cheat”: I suspect that the person supporting basic science (or abstract economic theorizing) believes that it is intrinsically valuable, no matter how useful the results turn out to be. But since this is a tough pitch to sell to the general public (especially for the economist), they say instead that “well, this could actually turn out to be highly valuable to you even if you don’t care about the intrinsic value.” And yes, there are clear cases of (truly) useful things that came out of (seemingly) pointless and abstract theorizing. Here’s an example from the US Department of Energy:

The discovery that all matter comes in discrete bundles was at the core of forefront research on quantum mechanics in the 1920s. This knowledge did not originally appear to have much connection to the way things were built or used in daily life. In time, however, the understanding of quantum mechanics allowed us to build devices such as the transistor and the laser. Our present-day electronic world, with computers, communications networks, medical technology, and space-age materials would be utterly impossible without the quantum revolution in the understanding of matter that occurred seven decades ago. But the payoff took time, and no one envisioned the enormous economic and social outcome at the time of the original research.

However, it seems wrong (especially of an economist) to simply transfer this argument from basic science (whether mathematics or theoretical physics or whatever) to economics. The reason is simple: Take two types of research. One (“applied research”?) is practical and will with high probability lead to valuable insights (in terms of practical usefulness, economic value, material benefits to humanity or whatever). The other (“basic research”?) is highly abstract, divorced from empirical applications, and will with high probability fail to lead to such valuable insights. With both of them, however, there is uncertainty, and we can imagine some probability distribution over the “insight-value” each will generate. It seems to me that unless we have reason to believe that the tail of the “basic science” distribution is fatter – i.e., unless the probability of making truly mind-blowing, important progress is higher for basic than for applied research – then we should always go for the applied, insofar as the pragmatic value of the insights is what we want. The expected value would be higher, and the probability of an insight of any given value would be higher, with the applied research. In other words, we need a “fat-tail” argument – an argument that the distributions differ for observations lying far away from the mean (explaining the possibility of such differences in distributions in another context was part of what made Summers resign as President of Harvard, so I would think he sees this).
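To make the fat-tail point concrete, here is a minimal, purely illustrative simulation (the distributions, parameters and threshold are made-up assumptions of mine, not estimates of anything real): unless the “basic research” distribution has a fatter right tail, the applied project comes out ahead both on expected insight-value and on the probability of clearing any given value bar.

# Purely illustrative: hypothetical "insight-value" distributions for
# applied vs. basic research. All parameters are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

applied    = rng.lognormal(mean=1.0, sigma=0.5, size=n)  # usually pays off modestly
basic_thin = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # usually pays off little, same-shaped tail
basic_fat  = rng.lognormal(mean=0.0, sigma=1.8, size=n)  # usually pays off little, fat right tail

threshold = 10.0  # an arbitrary bar for a "mind-blowing" insight

for name, draws in [("applied", applied),
                    ("basic, thin tail", basic_thin),
                    ("basic, fat tail", basic_fat)]:
    print(f"{name:18s} mean = {draws.mean():5.2f}   "
          f"P(value > {threshold:g}) = {np.mean(draws > threshold):.4f}")

# Against the thin-tailed version, applied wins on both the mean and the
# tail probability; only the fat-tailed version gives basic research a
# real shot at an extreme payoff.

Against the thin-tailed alternative, applied research dominates on both counts; only the fat-tailed version gives basic research a better shot at an extreme, transistor-sized payoff – which is exactly the argument I think would have to be made explicitly.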

My point is just that I can see the possibility of this fat-tail argument for certain types of basic science, but that does not mean it is present in economics. In physics there could be some argument such as “the higher the granularity and precision with which we can understand and manipulate the world around us, the more opportunities are open to us for manipulating it to our benefit,” and this can be supported by examples from experience. In mathematics there could be an argument that “the more analytical tools we develop for a broader array of problems, the more mathematics will be able to power up other disciplines and improve their reach and value.” However, I am at a loss to see what more sophisticated representative-agent modelling in DSGE models will give us. To me, it seems more like Tolkienesque fantasy about alternate worlds. And if such fantasy about alternate, probably-not-even-conceivably-realistic worlds can be useful – then the question is: Which ones are most likely to be useful, and how do we tell? Why representative agents deciding with optimal control theory? Why the (seeming) bias towards non-regulation and free markets?

Also – if such modeling divorced from evidence “could potentially” turn out to be useful – surely it could also “potentially” turn out to be harmful? For instance, if it misled (at times influential) economists into thinking that the world is simpler than it is, and that it is imperative that our world implement the policies derived from their rational-choice fan-fiction. A possible example: Brooksley Born pushed hard for the regulation of a booming, wild-west-frontier derivatives market, and was stopped by President Clinton’s Working Group on Financial Markets. Alan Greenspan argued that regulation could lead to financial turmoil, and at one point she was called by Larry Summers and told that

"You're going to cause the worst financial crisis since the end of World War II."... [Summers then said he had] 13 bankers in his office who informed him of this.
