Tuesday, June 7, 2011

The “flaw” in modern economics – and how to fix it?

Why do economists produce such sophisticated, intelligent work and yet end up supporting claims about the real world that seem – at times – insane, absurd and clearly unsupported by evidence? (You might disagree that this is ever a problem, but as the quotes below will show, we are not alone in making this observation.)

A colleague and I have tried to understand why this happens in a recently published paper. An essay presenting the same ideas in a shorter, simpler, and more readable form is here, and for those who prefer to get “the gist of it” through a video, you can do so here. An even shorter version follows in this blogpost… ;-)

The puzzle that we try to explain is this frequent disconnect between high-quality, sophisticated work on some dimensions and almost incompetently argued claims about the real world on others. DeLong recently blogged about this as the “Walrasian” mindset (as opposed to the more pragmatic and empirically oriented Marshallian one) that he feels characterizes some macroeconomists:

The microfoundation-based theoretical framework is not to be tested, but simply applied. It is not an "engine for the discovery of concrete truth" but rather a body of truth itself. Once a Walrasian has pointed out some not-wholly-implausible microfoundation-based mechanisms, his work here is done.

The implied claim is that some economists are seduced by theoretical beauty and talk about the real world even though their gaze is fixed almost exclusively on the Platonic ideal of their equations and models. This is similar to Olivier Blanchard's recent statement that

Before the crisis, we had converged on “a beautiful construction” to explain how markets could protect themselves from harm […] But beauty is not synonymous with truth.

This, again, was similar to Krugman’s claim in the 2009 essay “How did economists get it so wrong?”:

As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.

I'd also note the recent reflections of blogger noahpinion on his graduate economics courses, where

the course [… in macroeconomics] didn't discuss how we knew if these theories were right or wrong. We did learn Bob Hall's test of the PIH. That was good. But when it came to all the other theories, empirics were only briefly mentioned, if at all, and never explained in detail. When we learned RBC, we were told that the measure of its success in explaining the data was - get this - that if you tweaked the parameters just right, you could get the theory to produce economic fluctuations of about the same size as the ones we see in real life. When I heard this, I thought "You have got to be kidding me!" Actually, what I thought was a bit more...um...colorful.

and (in part 2)

all of the mathematical formalism and kludgy numerical solutions of DSGE give you basically zero forecasting ability (and, in almost all cases, no better than an SVAR). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.
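
For readers unfamiliar with the RBC “test” noahpinion describes, here is a stripped-down sketch of calibration-by-moment-matching (our own toy illustration, not code from any actual RBC paper): tune a single free parameter until the simulated series fluctuates about as much as the “data”, and declare success.

```python
# Toy version of "tweak the parameters until the fluctuations are the right
# size": calibrate the persistence of an AR(1) process so that simulated
# volatility matches observed volatility. Purely illustrative numbers.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(scale=1.8, size=400)   # stand-in for detrended output data
target_sd = data.std()

def simulated_sd(rho, T=10_000):
    """Standard deviation of a simulated AR(1): y_t = rho * y_{t-1} + eps_t."""
    y, series = 0.0, []
    for eps in rng.normal(size=T):
        y = rho * y + eps
        series.append(y)
    return np.std(series)

# Grid-search the persistence parameter until the volatilities line up.
rhos = np.linspace(0.0, 0.99, 100)
best = min(rhos, key=lambda r: abs(simulated_sd(r) - target_sd))
print(best, simulated_sd(best), target_sd)  # "success": similar-sized fluctuations
```

Matching one moment like this says nothing about whether the mechanism inside the model is the one generating the data – which is precisely the complaint.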

Though these observations seem related, they still don't explain how this happens and why – and that makes it hard to find a good way to fix things.

Our explanation can be put in terms of the research process as an “evolutionary” process: Hunches and ideas are turned into models and arguments and papers, and these are “attacked” by colleagues who read drafts, attend seminars, perform anonymous peer-reviews or respond to published articles. Those claims that survive this process are seen as “solid” and “backed by research.” If the “challenges” facing some types of claims are systematically weaker than those facing other types of claims, the consequence would be exactly what we see: Some types of “accepted” claims would be of high standard (e.g., formal, theoretical models and certain types of statistical fitting) while other types of “accepted claims” would be of systematically lower quality (e.g., claims about how the real world actually works or what policies people would actually be better off under).
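
To make the logic concrete, here is a toy simulation of this selection process (our own illustration with made-up numbers – it is not from the paper): two types of claims draw their quality from the same distribution and face the same number of challenges, but the challenges to one type are systematically weaker. The “accepted” claims of that type end up markedly worse on average.

```python
# Toy model of the "evolutionary" publication process described above.
# Claims survive into the accepted literature only if their quality beats
# every challenge they meet; challenge strength varies by claim type.
import random

random.seed(1)

def mean_surviving_quality(challenge_strength, n_claims=100_000, n_challenges=5):
    """Average quality of the claims that survive all their challenges."""
    survivors = []
    for _ in range(n_claims):
        quality = random.random()  # latent quality, uniform on [0, 1]
        if all(random.random() * challenge_strength < quality
               for _ in range(n_challenges)):
            survivors.append(quality)
    return sum(survivors) / len(survivors)

# Strongly challenged claims (think: formal theory, statistical fit)...
print(mean_surviving_quality(challenge_strength=1.0))  # roughly 0.86
# ...versus weakly challenged claims (think: real-world/policy relevance).
print(mean_surviving_quality(challenge_strength=0.3))  # roughly 0.62
```

Both types look equally “backed by research” from the outside; only the strength of the challenges they survived differs.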

In our paper, we pursue this line of thought by identifying four types of claims that are commonly made – but that require very different types of evidence (just as the Pythagorean theorem and a claim about the permeability of shale rock would be supported in very different ways). We then apply this to the literature on rational addiction and argue that this literature has extended theory and shown that, to some extent, it is “as if” the market data were generated by these models. However, we also argue that there is (as good as) no evidence that these models capture the actual mechanism underlying an addiction, or that they are credible, valid tools for predicting consumer welfare under addictions. All the same, these claims have been made too – and we argue that they have been allowed to piggy-back on the former claims, as though the valid support for those extended to these as well. We then discuss a survey mailed to all published rational addiction researchers, which provides indicative support for – or is at least consistent with – the claim that the “culture” of economics knows the relevant criteria for evaluating claims of pure theory and statistical fit better than it knows the relevant criteria for evaluating claims of causal or welfare “insight”. To see this, just compare the Bradford Hill criteria for establishing causality in medicine and epidemiology with the evidence presented in modern macro, in rational addiction theory, or in a game-theoretic model of the climate treaty negotiation process.
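
To illustrate how little the “as if” fit constrains the causal story, here is a minimal synthetic-data sketch (our own, not the estimation from any actual rational addiction paper). The canonical empirical test regresses current consumption on lagged and lead consumption and reads a significant lead coefficient as evidence of forward-looking rational addicts; below, the same equation is fit to data from a purely myopic process, and the lead coefficient still comes out nonzero.

```python
# Fit the standard rational addiction equation
#   C_t = b0 + b1*C_{t-1} + b2*C_{t+1} + b3*price_t + e_t
# to consumption generated by a backward-looking habit process.
# All data here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 500
price = rng.normal(size=T)
c = np.zeros(T)
for t in range(1, T):
    # Myopic habit process: nothing here looks forward at all.
    c[t] = 0.7 * c[t - 1] - 0.3 * price[t] + rng.normal() * 0.5

y = c[1:-1]                           # C_t
X = np.column_stack([np.ones(T - 2),  # intercept
                     c[:-2],          # C_{t-1}
                     c[2:],           # C_{t+1}
                     price[1:-1]])    # price_t
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # the C_{t+1} coefficient is clearly nonzero: the data look
          # "as if" forward-looking without any forward-looking mechanism
```

A good fit of this equation supports the statistical “as if” claim – and, by itself, nothing more.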

If this explanation holds up after further challenges, research, and refinement, it would also point to a way of changing things – simply by demanding that researchers state their claims more explicitly and precisely, and that we start discussing different claims separately, using the evidence relevant to each specific one. Unsupported claims about the real world should not be something you're allowed to tag on at the end of a work as a treat for competently having done something quite unrelated.

Anyway, this is also an experiment in spreading research – in addition to this blogpost, you can pick from three different levels of interest: the full paper, the essay, or the video.

Comments welcome :-)
