Wednesday, November 23, 2011

Economic models in theory and practice

Michael Woodford has a nice essay at INET where he responds to John Kay’s plea for a changed economics. In it, Woodford presents a number of arguments in favor of economic models that I think are valid and useful, but I don’t think he successfully defends the way economists actually use models in practice. What he ends up defending is a different, and far more defensible, way of using them.
Let’s consider his arguments in favor of mathematical models.
Argument 1: Precision
Models allow the internal consistency of a proposed argument to be checked with greater precision;
True, if the argument allows for translation into mathematical form. There’s an old Keynes quote on this:
Much economic theorizing to-day suffers, I think, because it attempts to apply highly precise and mathematical methods to material which is itself much too vague to support such treatment.
Sometimes, surely, the “vagueness of the material” is a shortcoming that makes an argument sound more sensible than it is. In that case, forcing it into mathematical form forces us to clarify what we actually mean and makes it harder to “weasel.” In other cases, however, formal methods force us to “sharpen” assumptions in a way that changes the argument itself. There’s a difference between saying that people have some thoughts on how gasoline prices will develop in the future, and saying that people hold subjective beliefs about future gas prices that take the form of a probability distribution over all possible price paths.
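To make the contrast concrete, the “sharpened” version commits you to something like the following (my notation, a minimal illustrative sketch rather than anything in Woodford’s essay). Let p = (p_0, p_1, p_2, ...) denote one possible path of future gas prices. The agent is assumed to hold a subjective probability measure \mu over the set of all such paths, and to pick a consumption plan that solves

    \[ \max_{\{c_t\}} \; \mathbb{E}_{\mu}\Big[ \sum_{t=0}^{\infty} \beta^{t}\, u(c_t, p_t) \Big]. \]

“Some thoughts about future gas prices” has become “beliefs coherent enough to integrate over every conceivable price trajectory” – a genuinely different, and much stronger, assumption.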
Argument 2: Differentiation/clarification
Back to Woodford:
they allow more finely-grained differentiation among alternative hypotheses
This is true – in so far as all the alternative hypotheses can be translated into this common language. For instance, the philosopher Jon Elster has written on Gary Becker’s rational addiction work that
Although I disagree sharply with much of it, it has raised the level of discussion enormously. Before Becker, most explanations of addiction did not involve choice at all, much less rational choice. By arguing that addiction is a form of rational behavior, Becker offers other scholars the choice between agreeing with him or trying to identify exactly where he goes wrong. Whatever option we take (I'm going to take the second), our understanding of addiction will be sharpened and focused.
This sounds fine, until you try to read the literature and discussions and realize that economists rarely find it interesting (or possible?) to discuss the specification of a model taken as a serious hypothesis about a causal mechanism in the world. As Woodford says elsewhere in his essay, when you want to conduct economic analysis with a mathematical model,
An assessment of the realism of the assumptions made in the model is essential – not, of course, an assessment of whether the model literally describes all aspects of the world, which is never the case, but an assessment of the realism of what the model assumes about those aspects of the world that the model pretends to represent. It is also important to assess the robustness of the model’s conclusions to variations in the precise assumptions that are made, at least over some range of possible assumptions that can all be regarded as potentially of empirical relevance. These kinds of critical scrutiny are crucial to the sensible use of models for practical purposes.
However, in many parts of economics, this kind of discussion is seen as irrelevant or silly. If you persist in trying to discuss “the realism of what the model assumes about those aspects of the world that the model pretends to represent” you’ll just be a person who doesn’t “get” economics.
Consider the rational addiction model of Becker and Murphy and the others that followed in its wake. As I’ve written with a colleague here,
The core of the causal insight claims from rational addiction research is that people behave in a certain way (i.e. exhibit addictive behavior) because they face and solve a specific type of choice problem. Yet rational addiction researchers show no interest in empirically examining the actual choice problem – the preferences, beliefs, and choice processes – of the people whose behavior they claim to be explaining. Becker has even suggested that the rational choice process occurs at some subconscious level that the acting subject is unaware of, making human introspection irrelevant and leaving us no known way to gather relevant data.
In addition, trying to examine whether the described causal mechanisms are even plausible or consistent with evidence is seen as irrelevant or weird. It’s an exercise only philosophers like the above-quoted Jon Elster and oddballs like myself seem interested in (I first wrote on this in an article called “Taking absurd theories seriously”). “Real” economists seem content to wave their hands and say “as-if” or “these are just standard assumptions.”
Argument 3: Enables complexity
[Models] allow longer and more subtle chains of reasoning to be deployed without both author and reader becoming hopelessly tangled in them.
This claim by Woodford is similar to Krugman’s claim in “Two cheers for formalism” (a piece originally published in the Economic Journal):
Most of the topics on which economists hold views that are both different from "common sense" and unambiguously closer to the truth than popular beliefs involve some form of adding-up constraint, indirect chain of causation, feedback effect, etc. Why can economists keep such things straight when even highly intelligent non-economists cannot? Because they have used mathematical models to help focus and form their intuition.
This sounds sensible: One could argue that individuals face relatively simple problems (“how much milk do I feel like drinking now?”), but that we need formal tools to understand what happens when these individuals interact in markets or firms or whatever. However, this division of labor – simple problems for the agents, formal tools for the economists studying their interactions – does not hold up. It fails most obviously if you’re into rational expectations, in which case you impose the requirement that the agents in the model understand the model they are in and optimize in light of the real constraints they face. In that case, they need the same tools you do.
The argument is also off in other contexts. As I’ve argued elsewhere, it is actually an argument against all sophisticated, mathematical theories of individual choice. If mathematical modeling is a tool necessary for economists to reason their way through, say, rational addiction theory, how on earth do they expect “even highly intelligent non-economists” to discover that becoming a junkie is their best shot at happiness? You might want to avoid the question by saying that people do this “subconsciously”, but even that is a testable claim (hint: it’s empirically false). Also – if we really did solve such problems easily in our subconscious – wouldn’t these models seem intuitive and in line with our gut feelings? Put differently, it seems odd to develop a tool to overcome human cognitive frailty and then claim that this tool, used at “full power”, merely spells out what our frail cognition computes effortlessly all along.
Argument 4: Enables critical evaluation
Woodford’s essay again:
Often, reasoning from formal models makes it easier to see how strong are the assumptions required for an argument to be valid, and how different one’s conclusions may be depending on modest changes in specific assumptions. And whether or not any given practitioner of economic modeling is inclined to honestly assess the fragility of his conclusions, the use of a model to justify those conclusions makes it easy for others to see what assumptions have been relied upon, and hence to challenge them.
Here I’ll just refer back to the discussion of argument 2. Once again, I agree with Woodford in principle – but would argue that this is descriptively inaccurate as an account of how academic discussions in economics are actually conducted. When I examined the justification of specific assumptions in rational addiction theories (in “Taking absurd theories seriously”), I found it to be a lackadaisical affair: the weirdest and least believable parts were left uninterpreted in the model, while other weird assumptions were justified by whatever anecdote would support them, or by loose evidence that supported a different but related assumption. The models, to sum up, were
poorly interpreted, empirically unfalsifiable, and based on wildly inaccurate assumptions selectively justified by ad-hoc stories.
Conclusion
Time to wrap up.
I realize that I’ve sounded critical of Woodford, but I hope the “in principle”/“in practice” distinction is clear. My problem with his essay is that it is framed as a defense of current economic practice. Evaluated on those terms, the argument fails: What he actually defends is a form of “best practice” modeling that, as far as I can tell, is neither widespread nor widely recognized as a norm in economics today.

Monday, November 21, 2011

Hamermesh: Macro is rubbish, but the academic market is working and selecting for usefulness. It selected for Gary Becker, didn’t it?

Labor economist Daniel Hamermesh is interviewed at the Browser and asked for five books showing that economics is fun. At one point, the following exchange occurs:
With the economics profession, in the aftermath of the financial crisis, being somewhat in disrepute…
Stop! Stop, stop, stop. The economics profession is not in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them. Ask your brother-in-law. I’m sure he thinks, as do 90% of us, that most of what the macro guys do in academia is just worthless rubbish. Worthless, useless, uninteresting rubbish, catering to a very few people in their own little cliques.
I’m not sure most people in the outside world would make a distinction between macro and microeconomists.
I know. It’s up to us to educate them. I got this line from a friend in architecture the other day. He said exactly the same thing. I went through the same litany, trying to disabuse him of this notion. It’s like pushing a stone up a giant hill. It’s not going to get me very far, I agree. But nonetheless it is the case that most of us, and most of what we do, remains tremendously useful, tremendously relevant, and also fun!
He also names names. While Sargent, for instance, is a good guy, 
Not all the macro guys who won the Nobel are good. The guy who won it in 2004 was one of the main culprits in the nonsense, Ed Prescott.
At the same time, Hamermesh is an optimist, in that he believes the academic market selects, over time, for usefulness:
I do believe in markets. We had some useless macro guys here who just left, thank God, and we’re now looking for replacements. I do think the failure of these people is conditioning how we search for a replacement. I’m quite sure the journals in academe are going to reflect this too. People are interested in being useful in this profession. It doesn’t mean the people who were the bad guys from the last 20 years in macro are going to be doing anything different. They’re incapable of doing anything different! But markets do work and the dead and useless get shoved aside by the young and useful. I’m a tremendous optimist. I do believe markets work and that people run to fill niches. There’s an obvious niche here, and you’re already starting to see it being filled.
I think this is interesting: Macroeconomics over the last few decades has basically been run by guys that Hamermesh charges with doing mostly “worthless rubbish. Worthless, useless, uninteresting rubbish, catering to a very few people in their own little cliques.” These are the guys who’ve dominated top journals and top economics departments, and who have won Nobel Prizes for their macro work. Yet he still sees the academic market as a well-functioning mechanism selecting for usefulness.
There’s a second thing I find interesting about this: One of the “top economists” that Hamermesh mentions to show that micro (as opposed to macro) is useful is the same economist that I trot out to show how absurd nonsense gets accepted in economics.
Together with Hans Melberg, I recently argued that there is a “market failure” in (at least a large part of) the academic market for economists: If you have a model that is theoretically consistent and in line with “standard theory” (rational choice, equilibrium, etc.), and if the model matches some stylized facts and can reproduce regularities in market data – then you’re more or less given free rein to make causal claims and to assert that the “theory” supports strong and important conclusions regarding the welfare effects of actual real-world policies.
In this work, Melberg and I looked at the kind of claims made in the literature on rational addiction theory. We argue that this is a literature featuring claims so obviously unsupported (we call them “absurd”) that their acceptance into good journals is a clear indication of a “broken market.”
The funny thing is: The whole literature on rational addiction theory – which we see as a clear example of how the “academic market” in economics allows policy-useless nonsense claims to rise to the top – is based on the work of Gary Becker. This same economist is one of the two economists that Hamermesh mentions as examples of good economics and, presumably, of how well-functioning the market is.
There have been some great economists since then, in the last 30 to 40 years. […] There’s Gary Becker, who in my view is the top economist of the last 50 years. His notions of family bargaining and how families behave are terribly important, and affect how, in the end, we all think.
To me, the rise of Gary Becker and his theories does not illustrate the usefulness (in the sense of credible, well-supported insights into the real world and the effects of actual policy choices on real people) of his work, but rather that it “opened new markets” for economists: He showed them ways to build theories of the kind they were familiar with within a host of new areas (education, family, crime, addiction), in ways that seamlessly fit the criteria of “rational choice” and standard microeconomic practice. He provided innovative, creative, exciting strategies for economic imperialism. His work allows you to interpret all sorts of things using the universal acid of economic theory. Some of it may be truly useful and correct, some of it is very clearly not, yet all of it has been very successful within the discipline. To me, that makes it unlikely that “usefulness” was the selection criterion involved.

UPDATE: Came across a nice blogpost by Daniel Lemire, who also doubts that science successfully self-regulates, though he argues from a different angle (he asks: How well does peer review filter out bad research? To what extent do citation levels reflect quality?). I could also add this post, which discusses a recent result that rebuttals affect neither how often a paper is cited nor how well it is regarded.

Friday, November 18, 2011

The invisible hand is everywhere… you just need to notice every little detail!

In the book “Darwin’s Dangerous Idea” the philosopher Daniel C. Dennett called Darwin’s theory of evolution a form of “universal acid”:
it eats through just about every traditional concept, and leaves in its wake a revolutionized world-view, with most of the old landmarks still recognizable, but transformed in fundamental ways.
The same thing is true of economic choice theories: The logic of equilibria based on rational actors making marginal adjustments started as a description of the market, ate its way into “above-market” institutions such as regulatory agencies (regulatory capture) and government (public choice), as well as “non-market” institutions such as families and – in a nice little satirical Ouroboros move – the discipline of economics:
The way I would describe Academic Choice theory is that it is “the sociology of economists, without romance.” Is this right? What an insightful comment. As you say, Academic Choice theory is a descriptive project, with no normative orientation. We apply a critical approach in order to counterbalance pervasive earlier notions of economists as scientific heroes struggling against popular ignorance in order to serve the common good.
What would you identify as the central insights of Academic Choice theory? The theory begins by identifying three principal ways in which economists try to maximize their utility. First, they receive salaries from universities, which can be increased if their course enrollment increases. Course enrollment is primarily driven by students with future careers in business and the financial sector, so an economist has an incentive to propound theories that CEOs and financial institutions find attractive. Even if adoption of these theories leads to substantial public costs, these costs will not be shouldered by the economist personally. Second, by developing such theories an economist can open the door to future wealth as a lobbyist or consultant. Third, the support of economists is critical to creating and maintaining special privileges for the financial services industry and for top corporate officers. By threatening to withdraw this support, economists can engage in rent-seeking. I call this last practice academic entrepreneurship.
The post is worth reading in full. Remember – no matter what objection someone raises, you can always turn the firehose of economic acid on them and reduce them to yet another selfishly motivated rational agent. And when the economic worldview has eaten its way through everything and laid bare the underlying logic and structure of the world in all its stark, brutal detail? Then, perhaps, we’ll all meet up in the “Invisible Hand Society” of Robert Anton Wilson’s novel “Schrödinger’s Cat Trilogy”:
Dr. Rauss Elysium had summed up the entire science of economics in four propositions, to wit:
1. Find out who profits from it.
This was merely a restatement of the old Latin proverb – a favorite of Lenin's – cui bono?
2. Groups never meet together except to conspire against other groups.
This was a generalization of Adam Smith's more limited proposition "Men of the same profession never meet together except to defraud the general public." Dr. Rauss Elysium had realized that it applies not just to merchants, but to groups of all sorts, including the governmental sector.
3. Every system evolves and expands until it encroaches upon other systems.
This was just a simplification of most of the discoveries of ecology and General Systems Theory.
4. It all returns to equilibrium, eventually.
This was based on a broad Evolutionary Perspective and was the basic faith of the Invisible Hand mystique. Dr. Rauss Elysium had merely recognized that the Invisible Hand, first noted by Adam Smith, operates everywhere. The Invisible Hand, Dr. Rauss Elysium claimed, does not merely function in a free market, as Smith had thought, but continues to control everything no matter how many conspiracies, in or out of government, attempt to frustrate it. Indeed, by including Propositions 2 and 3 inside the perspective of this Proposition 4, it was obvious – at least to him – that conspiracy, government interference, monopoly, and all other attempts to frustrate the Invisible Hand were themselves part of the intricate, complex working of the Invisible Hand itself.
He was an economic Taoist.
The Invisible Hand-ers were bitterly hated by the orthodox old Libertarians. The old Libertarians claimed that the Invisible Hand-ers had carried Adam Smith to the point of self-contradiction.
The Invisible Hand people, of course, denied that.
"We're not telling you not to oppose the government," Dr. Rauss Elysium always told them. "That's your genetic and evolutionary function; just as it's the government's function to oppose you."
"But," the Libertarians would protest, "if you don't join us, the government will evolve and expand indefinitely."
"Not so," Dr. Rauss Elysium would say, with supreme Faith. "It will only evolve and expand until it creates sufficient opposition. Your coalition is that sufficient opposition at this time and place. If it were not sufficient, there would be more of you."
Some Invisible Hand-ers, of course, eventually quit and returned to orthodox Libertarianism.
They said that, no matter how hard they looked, they couldn't see the Invisible Hand.
"You're not looking hard enough," Dr. Rauss Elysium told them. "You've got to notice every little detail."
Sometimes, he would point out, ironically, that many had abandoned Libertarianism to become socialists or other kinds of Statists because they couldn't see the Invisible Hand even in the Free Market of the nineteenth century.
All they could see, he said, were the conspiracies of the big capitalists to prevent free competition and to maintain their monopolies. They, the fools, had believed government intervention would stop this.
Government intervention was, to Dr. Rauss Elysium, just like the conspiracies of the corporations, merely another aspect of the Invisible Hand.
"It all coheres wonderfully," he never tired of repeating. "Just notice all the details."

Thursday, November 17, 2011

Rational models are NOT more constrained than irrational ones

One more comment on the Raquel Fernández conversation at the Straddler that I mentioned in a previous post. I thought she had several good points that she formulated well, but there was one comment that I’ve often seen economists make and that I think is wrong or at least misleading:
There is a beauty to the models in and of themselves. You assume, for example, that people are rational. I don’t think any really good economist thinks that people are perfectly rational, but, on the other hand, if you want to model people as not rational, all of a sudden it’s not clear what choice you should make. There are a million and one ways to be non-rational; there’s only one way to be rational within the confines of a model. Rationality means one thing: you’re maximizing your welfare subject to constraints. Now, if you say people don’t always maximize, and they’re beset by this and that, then all of a sudden you can have a million models. And that’s a little bit unsatisfactory too.
Yes, “there’s only one way to be rational within the confines of a model,” but so what? Within the confines of a specific model of irrationality there would be only one way to be irrational too. And yes, “there are a million and one ways to be non-rational,” but there are also a million and one ways to specify a utility function – and this gives us a million and one ways to act that are all rational.
There are actually three points (at least) here:
  1. Strictly speaking, “utility maximization” is empirically empty. We start with a preference relation that summarizes observed choice between pairs of consumption bundles, and which is “rational” in the sense of being complete, reflexive and transitive. We can then represent this with an ordinal utility function constructed to capture the choices described by this preference relation. Any preference relation – that is, any systematic set of choices fulfilling these conditions – can be represented by such a utility function (a formal sketch follows right after this list). If you always did what hurt you the most, your choices could still be captured by such a utility function – and saying that you “maximize utility” means nothing more than saying that you “choose the one option within the choice set that would be selected no matter what other alternative in the choice set you set it against in a pairwise choice”. This makes no claims concerning why this option is selected – it may be because it benefits you, is best for the world (but not for you selfishly), is the most brightly colored, was most recently advertised or whatever.
  2. Economists then commonly make the “great leap of welfare economics” by assuming that all choices actually made aim to maximize the welfare of the choosing agent. “Utility” now measures “welfare” in some way. To be “rational” means to be “smart and selfish” – and arguments about whether or not A or B or C “is rational” quickly become a tiresome exercise in discussing psychological egoism. “Yes, he gave away his money to the beggar – but this gave him a warm glow, which was the most welfare-maximizing item he could purchase for that sum of money.”
  3. People are obviously not 100% selfish in terms of money and goods for themselves, so such utility functions need to be defined over non-observable as well as observable goods. This means that the “one” model of fully rational choice is actually a million models, due to the many degrees of freedom within the model. You do what maximizes your “utility,” but that can be anything. Take Gary Becker’s work: there, your utility function can be defined over “capital stocks” that refer to addictive capital, imagination capital, human capital, etc. Looking at the different variants of rational addiction theory that have been developed within Becker’s framework (sketched below), economists are happy to assume different numbers of such stocks and different cross-derivatives between stocks and other goods. Out, as a result, comes “rational consumption” that is rising, falling, cyclical, chaotic, or involves cold-turkey quitting.
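Since points 1 and 3 both lean on formal machinery, here is a bit of notation to pin them down (the first is standard textbook material; the second is a simplified, from-memory skeleton of the Becker–Murphy setup, so treat the details as illustrative rather than faithful to any one paper).

Point 1: if the preference relation ≿ on a choice set X is complete, reflexive and transitive (plus technical conditions when X is infinite), then there exists a utility function u : X → ℝ such that

    \[ x \succsim y \iff u(x) \ge u(y). \]

“Maximizing utility” then just means “choosing a ≿-maximal element of the available set” – the construction is silent on why one option is ranked above another.

Point 3: a rational addiction model in the Becker–Murphy mold looks roughly like

    \[ \max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^{t}\, u(c_t, y_t, S_t), \qquad S_{t+1} = (1-\delta)\,S_t + c_t, \]

where c_t is the addictive good, y_t is other consumption, and S_t is the stock of “addictive capital” built up by past consumption. The degrees of freedom sit in u and in the stock dynamics: the signs and sizes of cross-derivatives such as ∂²u/∂c_t∂S_t, the number of stocks, the depreciation rate δ. Varying these is what lets “rational” consumption paths come out rising, falling, cyclical, chaotic or cold turkey.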
I really don’t understand why (some) economists think “utility maximization” is such a “hard constraint” on theorizing in light of this. If you think it is – name one consumption pattern or human behavior that, if observed repeatedly, would be inconsistent with “rationality” or “utility maximization”. If it’s a hard constraint this should be simple – there should be long lists of possible, observable behaviors that could not occur if people were actually rational and maximized utility in some substantive sense, and that would not occur if the hypothesis of “rational selfish maximization” were correct.
In actuality, I think you’ll find that there is no behavior weird enough to make rational choice economists doubt that some rational utility-maximizing explanation is out there, provided we look long and hard enough. As Stigler and Becker wrote in their De Gustibus Non Est Disputandum article:
On our view, one searches, often long and frustratingly, for the subtle forms that prices and income take in explaining differences among men and periods. […] we are proposing the hypothesis that widespread and/or persistent human behavior can be explained by a generalized calculus of utility-maximizing behavior, without introducing the qualification “tastes remaining the same”.
Put differently: If you see human action that doesn't look rational - doubt not! Rationality works in mysterious ways... Believe, think, pray and tinker with your model - and if you are wise enough all will be revealed and the Invisible Hand will publish your paper in a top-ranked journal...

Wednesday, November 16, 2011

The “canonical model” and the importance of default models

A Google+ post from Al Roth alerted me to an interesting conversation at The Straddler with Raquel Fernández. She formulates several nice points well, such as her statement that a:

problem [in economics] is that methodology frequently trumps the question. Once you have a way to model things, much of the research becomes very self-referential; that is, it becomes more about how the model behaves and less about the question. I think the question really matters, but a lot of economists believe the methodology matters more than the question. And this leads to very elaborate models of very many things without much of an outside reality check.

Another interesting impression I get from her talk, which is not explicit and may be a misreading on my part, flows from this point and concerns the importance of default models: The “default” or “canonical” model of economics describes a perfect-competition well-functioning market. We know that this is an incorrect description of the world, but it frequently shapes our “gut reaction,” and because we understand it fully we feel more comfortable arguing about this model than about the world. As a result, economists who give policy advice are treated more leniently by fellow economists if their advice is consistent with the standard model.

[…] the people who go and give advice usually end up with a very bad rap in economics. I am amazed at how much hatred—and I will say hatred—Paul Krugman evokes from some fellow economists. But one of the reasons for this is that he says things for which there is not “scientific” support and which go against what these people believe is "good" economics. Now, people on the other side also say things for which they do not have "scientific" support incidentally, and they don’t get the same amount of hatred.[…]

Take the argument we’ve been having recently. Should we be trying to increase aggregate demand or should we be reducing the deficit? […] Well, a model is not going to give you the answer because it depends on whether you write the model in such a way that getting aggregate demand up is a good idea, or whether you write it in such a way that people are really worried about future deficits that are coming around the road and they won’t invest because they know that taxes are going to be high in the future.

These things are rigged into the model from the beginning when it’s such an unsettled question, and we don’t really have an exact science-based way to answer it, which is why we argue about history. […]

Economists don’t have to be free-marketers. But that ends up being the canonical model, and then everything else ends up being a departure from the canonical model, which you’ve then got to explain why you’re departing from. It’s not because the canonical model is right, it’s because you ask most economists and they’ll say, “At least we understand how that economy works very, very well. So you want to tell me that we’re going to move away from this one and move to something else, that’s fine, but you have to explain why you’re putting in all of these imperfections.” So it’s not that you can’t write those things down, it’s just that there is less of a standard way of doing it.
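To make her “rigged from the beginning” point concrete (my illustration, not hers – two stylized textbook setups, not anything she endorses): in a Keynesian cross, consumption depends on current income,

    \[ C = a + bY, \qquad Y = C + I + G, \]

so a deficit-financed increase in G raises equilibrium output by \Delta G/(1-b), and stimulus is a good idea by construction. Write down instead a forward-looking consumer who internalizes the government budget constraint, consuming the annuity value of lifetime resources net of taxes,

    \[ C_t = \frac{r}{1+r}\Big( W_t + \sum_{s \ge t} \frac{Y_s - T_s}{(1+r)^{s-t}} \Big), \]

and the same deficit implies higher future taxes T_s, so private spending falls to offset the stimulus, which is now ineffective by construction. Which answer comes out of the model is largely the answer that was put into it.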

Friday, July 1, 2011

Have a nice summer!

I had some posts I needed to get out on the Norwegian-language blog of a colleague regarding housing prices in Norway. As a result, I never got around to finishing up my thoughts on Gigerenzer’s criticism of behavioral economics (I really thought the paper was a good, enjoyable and interesting read, but my comments on this blog have so far been on the things I didn’t like so much... use the search bar underneath the Twitter box on the right to find the posts), nor some things I want to think through regarding popularized economics, nor some ideas I would like to explore regarding the strong demand for assurance and its implications for politics, debate and academia.

Hope some of you readers will still come by once in a while next season.


Have a nice summer!

Saturday, June 18, 2011

Should parenting and drugs affect economic theory?

Would economic theory be different if economists were parents when initially taught it? Freakonomics blogger Justin Wolfers says yes, because that’s what the experience of becoming a father has told him. Overcoming Bias blogger Robin Hanson says no, and says becoming a father is like having a mystical experience on drugs: It doesn’t inform us about the real structure of the world.

I’m wondering if the difference between these two may reduce to one thing: how “religiously” they believed in the “ultimate truth” of the economic model of rational decision making. If we take Wolfers at his word, he always saw it as
the basic idea informing economics—that people are purposeful, analytic decision makers. And this idea just seemed entirely natural to me. I had always believed in the analytic self; I was rational, calculating, and tried to make smart decisions. Of course real people don’t use math, but I figured that we’re still weighing costs and benefits just as our models say. Or at least that was my understanding of the world.
In other words, he sounds like the kind of guy who believed all behavior could be explained as optimal by economic theory, even when this requires complex choice models in which people optimally take into account subtle feedback effects, strategic “he knows that I know that he knows that I know X” issues, and complicated delayed consequences of present actions. After having a kid, this no longer seems to describe his own experience of himself:
My feelings toward my daughter Matilda aren’t easily expressed in analytic terms. I struggle to express it, just as I struggle to understand it.



There’s something new and strange about all this. Today, I feel the powerful force of biology. It’s visceral; it’s real; it’s hormonal, and it’s not in our economic models. I’m helpless in the face of feelings that overwhelm me. Yes, I know that a twenty-something reader will cleverly point out that I just need to count kids as a good which yields utility, or perhaps we need to add a state variable to the utility function as in rational addiction models. But that’s not the point. I’m surprised by how little of this I’ve consciously chosen. While the economic framework accurately describes how I choose an apple over an orange, it has had surprisingly little to say about what has been the most important choice in my life.
Hanson, in contrast, seems to see economic models as attempts to capture some of the regularities in human behavior:
First, econ makes sense of a complex social world by leaving important things out, on purpose – that is the point of models, to be simple enough to understand. More important, econ models almost never say anything about consciousness or emotional mood – they don’t at all assume people choose via a cold calculating mindset, or even that they choose consciously. As long as choices (approximately) fit certain consistency axioms, then some utility function captures them. So how could discovering emotional and unconscious choices possibly challenge such models.
Given Hanson’s view of economic theory, there is no need to redefine everything after having a kid. People will still tend to buy less as the price rises, avoid risk, and so on. It surprises me somewhat, though, that Hanson doesn’t see that there are a number of economists with a more fundamentalist belief in the neoclassical model. I’ve met several, and I bet I’ve met fewer economists in general than Hanson has. I’ll admit this is pure speculation, but I’ve wondered if some economists feel threatened by behavior that deviates from the “rational choice” model they hold. They don’t say “Well, this is a simplified model, sure there’ll be deviations, but we’re capturing some regularities and that’s what we’re aiming for. Explaining something is better than not explaining anything, and we’ll never be able to explain everything.” Instead, they try to twist their brains into coming up with ad-hoc assumptions that would reveal these deviations to be full, sophisticated optimization. At times, this means that increasingly stupid and shortsighted behavior is explained as increasingly subtle and complex optimization. Maybe it’s a fear of letting non-rational explanations get a foot in the door, maybe it’s because the “welfare effects” often tacked on at the end of choice models would no longer be “valid” (not that they are valid today, but if you truly believe all choices always maximize the ultimate good of importance to the acting agents, then I guess they might seem valid to you).

Hanson concludes that
Having an emotional parenting experience is as irrelevant to the value of neoclassical econ as having a mystical drug experience is to the validity of basic physics. Your subconscious might claim otherwise, but really, you don’t have to believe it.
I’m not sure. If a person sees economic theory as Hanson describes it, then I agree with him. But if a person thinks his way of seeing the world is the only one that is valid and possible (in the sense of consistent with past experiences), then having a child or taking a high dose of psilocybin in a controlled setting may both be ways of learning otherwise?

Tuesday, June 14, 2011

Tim Harford’s “Adapt” – a book review

Tim Harford’s new book “Adapt” is a wonderful read but difficult to pigeonhole. There’s interesting stuff about the Iraq war, the financial crisis, development aid, randomized experiments, skunk works, the design of safety systems, whistleblowers, overconfidence and groupthink, not to mention a truly wonderful explanation of how a carbon tax would work and why environmentalists should embrace it. Even when he covers topics that have been ably covered by others elsewhere, he does so in a light and enjoyable way and manages to dig up new, cool anecdotes. It’s partly a popularization of science, partly a business book; at times it almost moves into self-help territory, and at times it presents new and interesting perspectives on big topics (such as financial regulation). Still – though it may sound sprawling, I didn’t really find it so when reading it. At one level it reads like a series of interesting pieces of journalism on different topics, but on another, there’s an underlying thread of ideas that gradually emerges.

The way I read it, the main point of the book is that the problems we face are too complex for us to understand and solve from behind a desk. Evidence ranging from the failed predictions of experts to the extinction records of firms and the failure of high-level military strategies supports this. There are a number of reasons why this is so, ranging from the difficulty of capturing and aggregating information at a sufficiently fine-grained level to our psychological tendency to trust our (frequently false) beliefs and suppress evidence that they’re wrong. Still – we do solve problems – but this happens through an evolutionary process: We make lots of bets – each one small enough that failure is acceptable – and the winning bets identify “good enough for now” solutions that we replicate and grow. The best examples of this (as a method for human problem solving) are market economies and science. Lots of entrepreneurs hope to strike it big, and some of them combine the factors of production in ways that create more value than others – thus making a profit (to put the point in an Austrian way). Lots of scientists state hypotheses, and some of them better predict the outcomes of experimental and quasi-experimental data than others – thus having their hypotheses strengthened (on a related note – I recently made the argument together with a colleague that this process is broken in economics – see more on that here).

Harford also discusses a host of implications that follow from this: the need to “decouple” systems so that failure in a single component (such as a bank in the financial system) doesn’t bring down the entire system, the need to finance both “highly certain” research ideas and “long shot” ideas, avoiding groupthink by including people likely to disagree (thus creating room for disagreement in the group) and demanding disagreement, and using prizes to elicit experiments. He also discusses how such evolutionary processes can be exploited better in policy – which is where he gets to his beautiful explanation of how a carbon tax works by tilting the playing field (there are two chapters here that should be reworked into a pamphlet and handed out in schools and parliaments).

That’s my brief take on the underlying “storyline” – but it doesn’t do justice to the book, which reads like a string of intellectual firecrackers. The wide range of topics, however, also means that each is necessarily touched on lightly – it’s an appetizer for a lot of ideas more than a fully satisfying meal. For instance, if success in the market (and elsewhere) consists of being the “lucky” winner who made a bet that – ahead of time – had no stronger claim to being right than others, how does this factor into our views on entitlements and redistributive taxation? If prizes (such as the prize for a space-going flight) actually elicit large-scale, expensive experiments that we only need to pay for when they succeed – does this mean that they exploit some irrational overconfidence in the competitors? If people were sensible and unbiased in their estimates of success, would they spend more than their expected reward? And if not – wouldn’t the prize money have to be sufficient to finance all the experiments, in which case it doesn’t save us any money? To what extent does the desire for control play into the desire for top-down planning and control? (Imagine you were the prime minister – would you feel comfortable if loads of schools were allowed to try out whatever they felt like, risking the chance that some of them would beat kids or indoctrinate them in some way that blew up in the media?) In an online interview by Cory Doctorow, Harford states that

I also looked at the banking crisis and big industrial accidents such as Deepwater Horizon, and found that there were almost always people who could have blown the whistle — and sometimes did — but the message didn’t get through. So those communication lines need to be opened up and kept open.

Yes – but no… After all, if there’s a host of signals coming up, most of them wrong, it might well be rational to have some filtering mechanism in place, even one that weeds out many of the correct signals as well, in order to avoid being swamped and misguided by the wrong ones.

While we’re on the topic of whistleblowers – I also wish he’d said a word or two about some of the biggest transparency cases of recent years. One is the whistleblower-friendly candidate Obama, who changed his tune once he got into office. This could have served as a way of discussing how hard it is to actually have people looking over your shoulder and criticizing you, even when you believe in (or at least see the arguments for) allowing them to do so. I would also have been interested in Tim Harford’s views on Wikileaks, which is in some ways the biggest attempt to increase transparency in modern times – as well as his views on the conflicts it generated (a book championing the cause of whistleblowers should also at least mention the awful treatment of alleged whistleblower Bradley Manning). Given the many stories from the Iraq war and the US military about the dangers of a strictly enforced official party line/strategy/story, the potential value of Wikileaks shining a light on what is actually going on seems pretty clear. Or at least worthy of discussion.

Given the number of topics covered in the book, there are obviously quibbles one may have with certain facts or the way some topics are treated, but that’s to be expected. More importantly, there were parts of the argument that I felt were missing – especially concerning how difficult it is to learn from experience. As documented in, for instance, Robyn Dawes’s excellent “House of Cards” (in the context of psychology and the misguided beliefs of treatment professionals), there are clear cases where statistical decision rules consistently outperform human judgments, without this being enough to convince the experts who could gain from them. Or consider this post on the backfire effect from the You Are Not So Smart blog, which discusses experiments suggesting that people can react to evidence that they are wrong by becoming even more convinced that they are right. The way politicians respond to arguments about the surprisingly weak effect of drug decriminalization on usage levels is another example. In terms of Harford’s argument – adaptation and evolution not only require us to test things and find out what works – they also require us to accept what works and implement it more broadly. Given the number of things covered, he probably touches on this too – but if he wants his ideas to be taken up in policy circles, I think (that is, my gut feeling is) that this would be perhaps the hardest part.

Finally, the book could also have been tempered by applying its thesis to itself: Has “planned evolution” been attempted, and did it actually work? As the book argues, the devil is often in the details, and seemingly good ideas based on solid case stories may turn out to work quite differently in practice than we expected.

Monday, June 13, 2011

Economics, math and science - Krugman 1996 vs. Krugman 2008

I recently came across a Slate essay by Krugman from 1996 where he basically asserts that economics is a hard-core sciencey discipline because it uses mathematical models, and that those who dislike modern economics do so because they would prefer literary-criticism-style blah-blah. He starts his piece by discussing criticism of him (which I haven’t read) from Bob Kuttner. Krugman states that the disagreement has nothing to do with politics:
We are both, after all, liberals. (...) What we are really fighting about is a matter of epistemology, of how one perceives and understands the world.
(...)
A strong desire to make economics less like a science and more like literary criticism is a surprisingly common attribute of anti-academic writers on the subject.
(...)
More than 40 years ago, the scientist-turned-novelist C.P. Snow wrote his famous essay about the war between the "two cultures," between the essentially literary sensibility that we expect of a card-carrying intellectual and the scientific/mathematical outlook that is arguably the true glory of our civilization. That war goes on; and economics is on the front line. Or to be more precise, it is territory that the literati definitively lost to the nerds only about 30 years ago – and they want it back. That is what explains the lit-crit style so oddly favored by the leftist critics of mainstream economics. Kuttner and Galbraith know that the quantitative, algebraic reasoning that lies behind modern economics is very difficult to challenge on its own ground. To oppose it they must invoke alternative standards of intellectual authority and legitimacy.
In effect, they are saying, "You have Paul Samuelson on your team? Well, we've got Jacques Derrida on ours."
(...)
The literati truly cannot be satisfied unless they get economics back from the nerds. But they can't have it, because we nerds have the better claim.
I find this interesting for two reasons. For one thing, economics is about the real world, yet Krugman doesn’t mention empirical evidence with a single word. Based on this piece, the discussion seems to be a theological debate between Pythagorean mystics who believe in the revelatory power of math, and medieval scholastics who want to focus on conceptual distinctions and dialectical reasoning – with each side seeing itself as the more scientific.
The second thing is that this belief in divine revelation through algebra is exactly what Krugman later attacked when he had had enough of the absurdities of highly regarded, peer-reviewed work in top journals spouting poorly justified empirical claims. After the financial crisis, he wrote,
the fault lines in the economics profession have yawned wider than ever. Lucas says the Obama administration’s stimulus plans are “schlock economics,” and his Chicago colleague John Cochrane says they’re based on discredited “fairy tales.” In response, Brad DeLong of the University of California, Berkeley, writes of the “intellectual collapse” of the Chicago School, and I myself have written that comments from Chicago economists are the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten. What happened to the economics profession? And where does it go from here?
As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.
(...)
the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.
(...)
what’s almost certain is that economists will have to learn to live with messiness. That is, they will have to acknowledge the importance of irrational and often unpredictable behavior, face up to the often idiosyncratic imperfections of markets and accept that an elegant economic “theory of everything” is a long way off. In practical terms, this will translate into more cautious policy advice — and a reduced willingness to dismantle economic safeguards in the faith that markets will solve all problems.
My point with this is not that it’s wrong to use mathematics (Krugman made sure to clarify this as well). My point is that it’s wrong to think you can reason your way to empirical truth without getting involved with the messy reality around us. Claims about reality need evidence from reality. The claims about reality that you start out with could derive from a formal model or a verbal argument or even a diagram – and since analyzing empirical evidence requires quantification of phenomena and statistics, this is not an argument against numbers, mathematical methods or hard-to-understand algebra. It’s an argument against theology in science – against the belief that you can dispense with empirical evidence provided you’ve thought “logically” enough from a priori “truths” using some method or other, whether based on mathematics, literary-criticism-style discussion, or symbology.


Sunday, June 12, 2011

Bitcoin - the newest e-money, internet threat and speculative bubble - all in one?

The idea of untraceable, privately controlled electronic currency is not new. Remember, Kevin Kelly made a big deal about it in "Out of Control" (chapter 12), his breathlessly enthusiastic book on the accelerating digital age published back in 1995:
The nature of e-money -- invisible, lightning quick, cheap, globally penetrating -- is likely to produce indelible underground economies, a worry way beyond mere laundering of drug money. In the net-world, where a global economy is rooted in distributed knowledge and decentralized control, e-money is not an option but a necessity. Para-currencies will flourish as the network culture flourishes. An electronic matrix is destined to be an outback of hardy underwire economies. The Net is so amicable to electronic cash that once established interstitially in the Net's links, e-money is probably ineradicable.
Kelly didn’t discuss Bitcoin, as even a futurist would be hard pressed to discuss by name something that would not be developed until 13 years later. But it’s the same thing: untraceable, outside government control, loved by libertarians, and kind of geeky. A good description is here, a recent “oh-my-god-they-sell-drugs-with-this” article from Wired is here, and a very bullish “this-is-where-I’m-gonna-place-all-my-savings” post on the Bitcoin appreciation trend is here.
Off the cuff, my guess would be that a simple, safe online currency that was as easy to use as cash would be quite useful. If you’re buying some one-off good or service online, buying a piece of software directly from the vendor, or want to leave something in a blogger’s tip jar – rather than using a number of different services (Visa, PayPal, Google Checkout, tip jars, etc.), a simple e-cash would be nice. Based on my very cursory look, Bitcoin is not quite there. Most importantly, it seems like a chore to get money into and out of bitcoins (partly because PayPal, MasterCard and Visa don’t want to help). If they fix this problem, there would still seem to be a user-interface issue: it needs to be integrated into browsers or some ubiquitous tool (Facebook? Google accounts?) so that paying truly becomes as easy as pulling a bill out of your pocket. As it stands, my guess would be that it might keep appreciating for a while as gold-standard devotees and Ayn Rand fans discover it, there may be a slight influx of black-market funds, and the resulting appreciation may attract people who see it as an investment vehicle. Unless “currency exchange” becomes easier (so you can get money into and out of the real world) and usability improves, I don’t quite see why this would become big. And if you can’t get your money out without a lot of bother (it might even get worse if governments see the money-laundering issue as a problem) – then the investment aspect is going to suffer as well.

Tuesday, June 7, 2011

The “flaw” in modern economics – and how to fix it?

Why do economists produce such sophisticated, intelligent work and yet end up supporting claims about the real world that seem – at times – insane, absurd and clearly unsupported by evidence? (We realize you might disagree that this is ever a problem, but – as the quotes below will show – we are not alone in making this observation.)

In a recently published paper, a colleague and I have tried to understand why this happens. An essay presenting the same ideas in a shorter, simpler, and more readable form is here, and for those who prefer to get “the gist of it” through a video, you can do so here. An even shorter version follows in this blogpost… ;-)

The puzzle that we try to explain is this frequent disconnect between high-quality, sophisticated work on some dimensions and almost incompetently argued claims about the real world on others. DeLong recently blogged about this as the “Walrasian” mindset (as opposed to the more pragmatic and empirically oriented “Marshallian” one) that he feels characterizes some macroeconomists:

The microfoundation-based theoretical framework is not to be tested, but simply applied. It is not an "engine for the discovery of concrete truth" but rather a body of truth itself. Once a Walrasian has pointed out some not-wholly-implausible microfoundation-based mechanisms, his work here is done.

The implied claim is that some economists are seduced by theoretical beauty and talk about the real world even though their gaze is fixed almost exclusively on the Platonic ideal of their equations and models. This is similar to Olivier Blanchard’s recent statement that

Before the crisis, we had “converged on a beautiful construction” to explain how markets could protect themselves from harm […] But beauty is not synonymous with truth.

This, again, was similar to Krugman’s claim in the 2009 essay “How did economists get it so wrong?”:

As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.

I’d also note the recent reflections of blogger Noahpinion on his graduate economics courses, where

the course [… in macroeconomics] didn't discuss how we knew if these theories were right or wrong. We did learn Bob Hall's test of the PIH. That was good. But when it came to all the other theories, empirics were only briefly mentioned, if at all, and never explained in detail. When we learned RBC, we were told that the measure of its success in explaining the data was - get this - that if you tweaked the parameters just right, you could get the theory to produce economic fluctuations of about the same size as the ones we see in real life. When I heard this, I thought "You have got to be kidding me!" Actually, what I thought was a bit more...um...colorful.

and (in part 2)

all of the mathematical formalism and kludgy numerical solutions of DSGE give you basically zero forecasting ability (and, in almost all cases, no better than an SVAR). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.

Though these observations seem related, they still don’t explain how this happens and why – and that makes it hard to find a good way to fix things.

Our explanation views the research process as an “evolutionary” process: Hunches and ideas are turned into models, arguments and papers, and these are “attacked” by colleagues who read drafts, attend seminars, perform anonymous peer reviews or respond to published articles. The claims that survive this process are seen as “solid” and “backed by research.” If the “challenges” facing some types of claims are systematically weaker than those facing other types, the consequence would be exactly what we see: Some types of “accepted” claims would be of a high standard (e.g., formal theoretical models and certain types of statistical fitting) while other types would be of systematically lower quality (e.g., claims about how the real world actually works or what policies would actually make people better off).
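A deliberately crude toy version of this mechanism (my own sketch for this blogpost, not the model in our paper; all numbers are arbitrary): suppose every claim has a latent quality, and survives review only if its quality beats the strongest challenge it happens to meet. If one type of claim systematically meets weaker challenges, the surviving claims of that type end up systematically worse – even though every claim “survived peer review.”

    import random

    def surviving_quality(challenge_strength, n=100000):
        """Mean quality of surviving claims when challenges are
        drawn uniformly from [0, challenge_strength]."""
        survivors = []
        for _ in range(n):
            quality = random.random()                        # latent quality in [0, 1]
            challenge = random.uniform(0, challenge_strength)
            if quality > challenge:                          # claim survives scrutiny
                survivors.append(quality)
        return sum(survivors) / len(survivors)

    random.seed(1)
    print(surviving_quality(1.0))  # strongly challenged claims (theory, statistical fit): ~0.67
    print(surviving_quality(0.1))  # weakly challenged claims (causal/welfare conclusions): ~0.52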

In our paper, we pursue this line of thought by identifying four types of claims that are commonly made – but that require very different types of evidence (just as the Pythagorean theorem and a claim about the permeability of shale rock would be supported in very different ways). We then apply this to the literature on rational addiction and argue that this literature has extended theory and shown that, to some extent, it is "as if" the market data were generated by these models. However, we also argue that there is as good as no evidence that these models capture the actual mechanism underlying an addiction, or that they are credible, valid tools for predicting consumer welfare under addiction. All the same, these further claims have been made too – and we argue that such claims are allowed to piggy-back on the former types of claims once those have been validly supported. We then discuss a survey mailed to all published rational addiction researchers which provides indicative support for – or is at least consistent with – the claim that the "culture" of economics knows the relevant criteria for evaluating claims of pure theory and statistical fit better than it knows the relevant criteria for evaluating claims of causal or welfare "insight". To see this, just compare the Bradford-Hill criteria for establishing causality in medicine/epidemiology with the evidence presented for modern macro, rational addiction theory, or a game-theoretic model of the climate treaty negotiation process.

If this explanation holds up after further challenges, research and refinement, it would also point to a way of changing things – simply by demanding that researchers state claims more explicitly and precisely, and that we start discussing different claims separately, using the evidence relevant to each specific one. Unsupported claims about the real world should not be something you're allowed to tag on at the end of a work as a treat for having competently done something quite unrelated.

Anyway, this is also an experiment in spreading research – and in addition to this blog post you can pick from three different levels of interest: the full paper, the essay or the video ("A blind spot in economics? Unjustified claims about reality").

Comments welcome :-)


Thursday, June 2, 2011

Bob Lucas – believe the vision, belie the evidence

Noahpinion has a nice "Marshallian" take on the recent talk by Robert "Rational-Expectations" Lucas, the godfather of modern macro. He presents readily available empirical evidence that strikingly contradicts each of the three main assertions Lucas made about the US macroeconomic woes.

In this recent lecture at the University of Washington, Lucas makes the following assertions:

1. The persistent gap in income levels among rich economies is due to the costs of European welfare states.

2. The length of the Great Depression was due in part to the emergence of strong unions.

3. The reason for our current ongoing weakness in employment and business investment is the recent expansion of the U.S. welfare/regulatory state.

All three of these assertions are baldly contradicted by history.

Head over to Noahpinion to read the smack-down (well worth reading). What I'd like to do here is just to add a relevant and telling anecdote from Lucas's professional memoir that I came across in one of the comments on DeLong:

"'Crossing over' was a term introduced to us to describe a discrepancy between Mendelian theory and certain observations. No doubt there is some underlying biology behind it, but for us it was presented as just a fudge-factor, a label for our ignorance. I was entranced with Mendel’s clean logic, and did not want to see it cluttered up with seemingly arbitrary fudge-factors. “Crossing over is b—s—,” I told Mike.

In fact, though, there was a big discrepancy between the Mendelian prediction without crossing over and the proportions we observed in our classroom data, too big to pass over without comment.

My report included a long section on experimental error.... Mike...replaced my experimental error section with a discussion of crossing over. His report came back with an A. Mine got a C-, with the instructor’s comment: “This is a good report, but you forgot about crossing-over.”

I don’t think there is anyone who knows me or my work as a mature scientist who would not recognize me in this story. The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories—setting them aside. That can be hard to do—facts are facts—and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory. This failing can be costly and embarrassing to me, but I don’t think it has any effect on the advance of knowledge. Others will see the blind spot, as Mike did with crossing-over, keep what is good and correct what is not."

From Robert Lucas, Professional Memoir, pp. 4-5

This may also be an appropriate time to call attention to the classic old Solow quote about Lucas that you can find here.

Wednesday, June 1, 2011

Friedman's schizophrenic legacy in economic methodology

Brad DeLong had an unexpected take on Friedman's methodological legacy in economics, highlighting his desire to stay close to data when theorizing rather than his defense of "as-if" theorizing. In DeLong's words, Friedman was a (pragmatic) Marshallian rather than a (purist) Walrasian:

are the theoretical mechanisms we are studying things that we can see? Are their predictions consistent with the gross features of reality? Supply curves slope up: if we say that demand has changed and pushed us along a supply curve, is it in fact the case that both quantities and prices have risen (or fallen)? Demand curves slope down: if we say that supply has changed and pushed us along a demand curve, is it in fact the case that quantities have risen and prices have fallen (or fallen and risen)?

If the first-order predictions of our theories are not visible in the first-order movements of the data--quantities, prices, asset values, and expectations--then, Friedman (and Marshall) would say, our theory is broken and we need to fix it.
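
For what it's worth, that first-order sign check is simple enough to write down. Here is a minimal sketch – my own illustration, not DeLong's or Friedman's – of the test the quote describes:

```python
# Minimal sketch (my own illustration): classify a period's price/quantity
# comovement as consistent with a demand shift (tracing out an
# upward-sloping supply curve) or a supply shift (tracing out a
# downward-sloping demand curve).

def classify_shift(dP: float, dQ: float) -> str:
    """First-order 'Marshallian' sign test on price and quantity changes."""
    if dP == 0 or dQ == 0:
        return "no clear first-order movement"
    if (dP > 0) == (dQ > 0):
        # price and quantity move together: demand shifted along supply
        return "consistent with a demand shift"
    # price and quantity move in opposite directions: supply shifted along demand
    return "consistent with a supply shift"

print(classify_shift(dP=2.0, dQ=5.0))   # -> consistent with a demand shift
print(classify_shift(dP=-1.0, dQ=3.0))  # -> consistent with a supply shift
```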

I've often been puzzled by the mismatch between Friedman's pragmatic, close-to-the-data, uncover-the-actual-mechanisms approach and the message economists took away from his essay on methodology. In a footnote in Hausman's book on "the inexact and separate science of economics," he mentions that Lee Hansen

recalls economists in the 1950s reacting to Friedman's essay with a sense of liberation. They could now get on with the job of exploring and applying their models without bothering with objections to the realism of their assumptions.
More recently, Nathan Berg and Gerd Gigerenzer wrote a paper where they set up the "as if" methodology associated with Friedman as the great big flaw of behavioral as well as neoclassical economics:

For a research program that counts improved empirical realism among its primary goals, it is startling that behavioral economics appears, in many cases, indistinguishable from neoclassical economics in its reliance on as-if arguments to justify "psychological" models that make no pretense of even attempting to describe the psychological processes that underlie human decision making.
This image of Friedman as the staunchest defender of absurdly speculative rational choice fictions always seemed at odds with other stories about the man's research. As I understand it, he pored through meeting minutes from the Fed together with Anna Schwartz to understand why the Fed did what it did during the Great Depression, and he was sceptical of data-fitting and overly complex theoretical models. Also, when the Economic Journal had a 100 year anniversary issue (January 1991, vol 101 no 404) and asked a number of famous economists for their predictions about the "next 100 years" of our discipline, Friedman went back to the early issues to actually see what (if anything) had changed. As far as I remember, the other contributions I read were mainly economists saying that in the future the discipline would finally move towards what they themselves had been doing for a long time. Friedman, by contrast, concluded that the core subjects of the late 1800s would still be present, that some new topics (e.g., property rights, crime, public choice) would probably have been added, and that the methods would be an updated but recognizable mix of pure theory, descriptive statistics and econometrics. He ended by quoting a conclusion Ashley had drawn after a similar exercise in 1907:
When one looks back on a century of economic teaching and writing, the chief lesson should, I feel, be one of caution and modesty, and especially when we approach the burning issues of our own day. We economists...have been so often in the wrong!


Wednesday, May 25, 2011

What graduate school economics did and did not teach some random dude

I've got no idea who this guy is – I found links to these posts on Tyler Cowen's blog – but I found his reflections on his graduate economics education (see also part 2) insightful and interesting.

Some highlights (that is to say – things that remind me of my own opinions ;-)

coming as I did from a physics background, I found several things that annoyed me about the course (besides the fact that I got a B). One was that, in spite of all the mathematical precision of these theories, very few of them offered any way to calculate any economic quantity. In physics, theories are tools for turning quantitative observations into quantitative predictions. In macroeconomics, there was plenty of math, but it seemed to be used primarily as a descriptive tool for explicating ideas about how the world might work. At the end of the course, I realized that if someone asked me to tell them what unemployment would be next month, I would have no idea how to answer them.

As Richard Feynman once said about a theory he didn't like: "I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation - a fix-up to say, 'Well, it might be true.'"

That was the second problem I had with the course: it didn't discuss how we knew if these theories were right or wrong. We did learn Bob Hall's test of the PIH. That was good. But when it came to all the other theories, empirics were only briefly mentioned, if at all, and never explained in detail. When we learned RBC, we were told that the measure of its success in explaining the data was - get this - that if you tweaked the parameters just right, you could get the theory to produce economic fluctuations of about the same size as the ones we see in real life. When I heard this, I thought "You have got to be kidding me!" Actually, what I thought was a bit more...um...colorful.

(This absurdly un-scientific approach, which goes by the euphemistic name of "moment matching," gave me my bitter and enduring hatred of Real Business Cycle theory, about which Niklas Blanchard and others have teased me. I keep waiting for the ghost of Francis Bacon or Isaac Newton to appear and smite Ed Prescott for putting theory ahead of measurement. It hasn't happened.)
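
For readers who haven't met "moment matching": the procedure being mocked really is roughly this. Below is a toy sketch of my own – made-up numbers, and a simple AR(1) process standing in for the model – of tweaking a parameter until simulated volatility matches a target moment:

```python
import numpy as np

# Toy sketch of "moment matching" (my own illustration; all numbers made up):
# pick the shock volatility in an AR(1) stand-in for a business-cycle model
# so that simulated output volatility matches the "observed" volatility.

rng = np.random.default_rng(0)
rho = 0.9                  # assumed persistence of the output process
target_sd = 0.02           # "observed" std. dev. of output fluctuations (made up)
n = 20_000
z = rng.normal(size=n)     # fixed standard-normal draws, reused for every sigma

def simulated_sd(sigma_eps: float) -> float:
    """Std. dev. of an AR(1) series y_t = rho * y_{t-1} + sigma_eps * z_t."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + sigma_eps * z[t]
    return float(y.std())

# "Tweak the parameter just right": grid-search sigma_eps until the
# simulated volatility matches the target moment.
grid = np.linspace(0.001, 0.02, 100)
best = min(grid, key=lambda s: abs(simulated_sd(s) - target_sd))
print(f"calibrated shock sd: {best:.4f}")
# For an AR(1) the exact answer is target_sd * sqrt(1 - rho**2), about 0.0087,
# so the search should land near that value.
```

Note that nothing in this exercise tests the model: whatever the target moment is, some parameter value will hit it.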

[…]

DeLong and Summers are right to point the finger at the economics field itself. Senior professors at economics departments around the country are the ones who give the nod to job candidates steeped in neoclassical models and DSGE math. The editors of Econometrica, the American Economic Review, the Quarterly Journal of Economics, and the other top journals are the ones who publish paper after paper on these subjects, who accept "moment matching" as a standard of empirical verification, who approve of pages upon pages of math that tells "stories" instead of making quantitative predictions, etc. And the Nobel Prize committee is responsible for giving a (pseudo-)Nobel Prize to Ed Prescott for the RBC model, another to Robert Lucas for the Rational Expectations Hypothesis, and another to Friedrich Hayek for being a cranky econ blogger before it was popular.

And from the follow-up blog post, which discusses his field courses (which, AFAIK, he chose voluntarily):

The field course addressed some, but not all, of the complaints I had had about my first-year course. There was more focus on calculating observable quantities, and on making predictions about phenomena other than the ones that inspired a model's creation. That was very good.

But it was telling that even when the models made wrong predictions, this was not presented as a reason to reject the models (as it would be in, say, biology). This was how I realized that macroeconomics is a science in its extreme infancy. Basically, we don't have any macro models that really work, in the sense that models "work" in biology or meteorology. Often, therefore, the measure of a good theory is whether it seems to point us in the direction of models that might work someday.

[…]

all of the mathematical formalism and kludgy numerical solutions of DSGE give you basically zero forecasting ability (and, in almost all cases, no better than an SVAR). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.

Finally, my field course taught me what a bad deal the whole neoclassical paradigm was. When people like Jordi Gali found that RBC models didn't square with the evidence, it did not give any discernible pause to the multitudes of researchers who assume that technology shocks cause recessions. The aforementioned paper by Basu, Fernald and Kimball uses RBC's own framework to show its internal contradictions - it jumps through all the hoops set up by Lucas and Prescott - but I don't exactly expect it to derail the neoclassical program any more than did Gali.

Tuesday, May 24, 2011

The source of our policy views – an honest opinion from Steven Levitt

Freakonomics author Levitt recently posted on why he strongly opposes the US ban on internet poker, while weakly preferring drug prohibition (despite the good arguments against it) and supporting legalized abortion.

I’ve never really understood why I personally come down on one side or the other with respect to a particular gray-area activity.  […]

It wasn’t until the U.S. government’s crackdown on internet poker last week that I came to realize that the primary determinant of where I stand with respect to government interference in activities comes down to the answer to a simple question: How would I feel if my daughter were engaged in that activity?

If the answer is that I wouldn’t want my daughter to do it, then I don’t mind the government passing a law against it. I wouldn’t want my daughter to be a cocaine addict or a prostitute, so in spite of the fact that it would probably be more economically efficient to legalize drugs and prostitution subject to heavy regulation/taxation, I don’t mind those activities being illegal.

Some express disappointment in Levitt for this comment:

What's missing in Levitt? The whole idea of tolerance. It's easy to tolerate people doing what you would do and approve of. It's harder to tolerate what you don't approve of. It's even harder to tolerate activities and behaviors that you find disgusting. Levitt has just confessed that he's intolerant or, at least, that he won't object to a government that's intolerant. That's disappointing. I had expected better of him.

Personally, I find this a misreading of his point. I don't think he's saying that he believes this is how it should be – just that this seems to be the way it is. If anything, the fact that he has tried to reflect on the source of his opinions and their possible basis in emotions makes me trust the guy more.

Seems to me that we often have a strong feeling or "intuition" that something is good or bad, and that the smarter we are, the better we are at convincing ourselves that this is due to logical arguments. There's a host of good stuff on the psychological mechanisms driving our attitudes towards sources of risk in Dan Gardner's book "The science of fear," and more on how easily we trick ourselves in Kurzban's "Why everyone (else) is a hypocrite." Who hasn't been in a discussion with intelligent, informed people who dig themselves deeper and deeper into a hole while trying to defend some ridiculous opinion? (And who hasn't at times been that very same person?)

Note: I'm not making the argument that we can't learn and modify our views when confronted with evidence. But I am making the claim that this is frequently difficult to do, and that someone able to reflect on their feelings and biases (as Levitt does here) seems more open to changing their views than somebody who ignorantly imagines him- or herself to be a rational, evidence-based and principled logic machine.

Monday, May 23, 2011

“As-if behavioral economics” – puzzle: How can an as-if theory be normative?

Although I enjoyed it, I've spent the last few days on this blog noting some issues where I disagree with the paper "As-if behavioral economics". Today I want to reflect on something they touch upon without fully resolving.

Some economists argue that their assumptions can't be questioned because their models are "as-if" - they are merely tools that allow you to successfully predict market data, and the realism of the assumptions is irrelevant. If that is so - why are there so many norms and criteria apart from prediction that a "good" model should fulfill? And why - if they are mere "as-if" prediction-generating machines - are the neoclassical models held up as a normative ideal we should strive for in our own decision making?

Berg and Gigerenzer touch on this puzzle in a couple of places. For one thing, two of the points they emphasize are that

  • behavioral economics suffers from subscribing to the as-if method, which ignores the realism of the assumptions (similarity of model to the real-world mechanism/process), and that
  • behavioral economics has grown to see behavioral “heuristics” as “biases” that violate the normatively correct neoclassical rules

Later, they also note that

the normative interpretation of deviations as mistakes does not follow from an empirical investigation linking deviations to negative outcomes. The empirical investigation is limited to testing whether behavior conforms to a neoclassical normative ideal.

Consider - if the model is nothing but a black box that spits out impressive predictions:

  • Why is it important that agents inside the model are optimizing and rational?
  • Why is it important that the agents are well informed?
  • Why is it important that preferences are “standard” (thus generating well behaved utility functions and nice indifference curves)?
  • Why does it matter whether or not your prediction is based on an “equilibrium” inside the model?
  • How can the utility and welfare effects of a model imply anything about real people's welfare?

This is particularly odd since, as far as I can tell, rational optimizers can behave in all sorts of ways depending on their preferences and the choice problem they face. When assumptions don't need to be supported by empirical evidence, any observable behavior pattern can be modelled as rational behavior given some hypothetical choice problem. If you don't believe me, ask yourself whether you can describe any specific behavior pattern that could not be the result of rational choice. Note that this has to be a pattern – that is, it has to be stated in terms of observables, without reference to "underlying" but non-observable preferences. You can refer to prices, consumption goods, patterns across time and between goods, etc., and using such categories I don't think it is possible to find any "non-rationalizable consumption pattern" that most economists would accept as such.
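
The nearest thing to a formal counterexample that economists recognize is a violation of the Generalized Axiom of Revealed Preference (GARP): by Afriat's theorem, a finite set of price/quantity observations can be rationalized by some well-behaved utility function exactly when GARP holds. A minimal checker (my own sketch) shows how little the axiom bites in small datasets – and even outright violations are routinely re-explained by letting preferences shift, which rather supports the point above:

```python
import numpy as np

# Minimal GARP checker (my own sketch). prices[i] and bundles[i] hold the
# price vector and the chosen bundle in observation i; by Afriat's theorem
# the data can be rationalized by some well-behaved utility function if and
# only if no violation is found.

def violates_garp(prices: np.ndarray, bundles: np.ndarray) -> bool:
    n = len(prices)
    cost = prices @ bundles.T          # cost[i, j] = p_i . x_j
    spent = np.diag(cost)              # what was actually paid: p_i . x_i
    # direct revealed preference: x_i R0 x_j if x_j was affordable at i
    R = cost <= spent[:, None]
    for k in range(n):                 # transitive closure (Warshall)
        R |= R[:, k:k+1] & R[k:k+1, :]
    # violation: x_i revealed preferred to x_j, yet x_i was strictly
    # cheaper than x_j at j's own prices
    strict = cost < spent[:, None]     # strict[j, i]: p_j . x_i < p_j . x_j
    return bool((R & strict.T).any())

# Two observations with "opposite" choices still pass:
prices  = np.array([[1.0, 2.0], [2.0, 1.0]])
bundles = np.array([[2.0, 1.0], [1.0, 2.0]])
print(violates_garp(prices, bundles))  # False: rationalizable
```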

So what?

Well - if anything can be rationalized by such a theory, and assumptions can be as unrealistic as you want - then any stable pattern can be “explained” by such a “theory.” In actuality, though, you would just be describing the pattern using a different format (the “rational choice model” format). Which raises the question of why it is so important to use that format.

After all - if all you want to do is to predict, then it shouldn't matter whether you assumed people to behave "as if" they were maximizers or not. Any model would be just as good if it predicted equally well.

Also - if the rational choice model is just a format - a way of describing behavior by identifying some “story” that would generate it - then why should it have normative power?

This is extra puzzling if you consider the old-school style Chicago economics that sees all behavior as rational. If this is so, then there is no normative power beyond "do whatever you do, cause that's what's optimal." Taken at face value, this view of the world would also lead to apathy: There's no point in criticizing politicians or engaging with the world, because everyone knows what they're doing and is doing what's best for themselves. Politicians – that's public choice. Regulators – they've been captured by special interests. Economists? Well – I guess their doings could be made endogenous as well.

I don't have an answer to this puzzle - but I wonder if it may have something to do with politics. If you claim both that everyone is rational and that this rationality represents the normative ideal for action, then a world of unfettered markets seems like a good idea: It would be a world of informed, self-interested people generating huge benefits for each other through their selfish doings. Behavioral economics then becomes the "interventionist" response: Yes - a neoclassical paradise would be great, but unfortunately we're just evolved apes with lots of biases and flaws. With a little carefully designed policy, though, we can regulate and nudge people in the direction of the truly rational agent.

Does anyone know of a survey that would make it possible to correlate policy views and politics with economists' attitudes towards behavioral and old-school rational choice theory?

Friday, May 20, 2011

Strauss-Kahn and rational assault

Tyler Cowen generated a bit of discussion recently with his blog comment on Dominique Strauss-Kahn:
Dominique Strauss-Kahn has been arrested, taken off a plane to Paris, and accused of a shocking crime.  When I hear of this kind of story, I always wonder how the “true economist” should react.  After all, DSK had a very strong incentive not to commit the crime, including his desire to run for further office in France, not to mention his high IMF salary and strong network of international connections.  So much to lose.
Should the “real economist” conclude that DSK is less likely to be guilty than others will think? 
Let's try to answer the question:
A bad economist would think: Strauss-Kahn clearly has more to lose and thus less of an incentive to commit sexual assault – which makes it unlikely that he did. So he is probably innocent.
A better economist would go one step further: Strauss-Kahn realizes that we would think this way, which makes the crime relatively risk-free for him. This makes it more likely that he committed it. So he is probably guilty.
An even better economist would go further still: Since we realize that Strauss-Kahn would realize this, and would want to exploit this mechanism, the crime was not so risk-free after all. So he is probably innocent.
The "real economist," finally, would realize that this infinite loop would lead Strauss-Kahn to play his part in implementing a randomized, mixed-strategy equilibrium by throwing a die to decide whether or not to run naked down hallways assaulting hotel staff. The economist would then write up the model, derive suitably generalized solutions for various assumptions about payoffs and attitudes towards risk, and publish it in a high-ranking journal, using the Strauss-Kahn story as a motivating example in the introduction.

Wednesday, May 18, 2011

“As if behavioral economics” - flaw 3: Adding a parameter is not all behavioral economists have done

I'm working through some issues raised by the paper "As-if behavioral economics". I have one more annoyance to raise before moving on to some of the paper's strong points.

The annoyance I want to note today is one that disappoints me. Berg and Gigerenzer write:

Behavioral models frequently add new parameters to a neoclassical model, which necessarily increases R-squared. Then this increased R-squared is used as empirical support for the behavioral models without subjecting them to out-of-sample prediction tests.

This is silly. Yes, adding a parameter does increase R-squared (the share of the variation in the data that your statistical model captures), but this way of phrasing it makes it sound as though any variable added to a statistical model would increase R-squared by the same amount. That's not the case: A randomly picked, irrelevant variable would (if we ignore time trends and that sort of thing) on average have zero explanatory power. The standard test is to check the significance level of the variable, which answers the following question: If the variable actually has no explanatory power, how likely is it that it would "by chance" seem to explain whatever it appears to explain in the current dataset? The conventional significance level to test at is 5%, and at that level an "irrelevant" variable will appear relevant in your data only 5% of the time. I'm pretty sure Berg and Gigerenzer know this.
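
To put numbers on this, here is a quick simulation of my own (made-up data): the randomly picked regressor always (weakly) raises R-squared, but its t-test flags it as "significant" only around 5% of the time.

```python
import numpy as np

# Quick simulation (my own sketch, made-up data): adding a randomly picked,
# irrelevant regressor z always (weakly) raises R-squared, but its t-test
# only flags it as "significant" at roughly the 5% rate.

rng = np.random.default_rng(42)
n, trials = 200, 2000
false_positives, r2_increases = 0, 0

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

for _ in range(trials):
    x = rng.normal(size=n)                    # a genuinely relevant variable
    y = 1.0 + 2.0 * x + rng.normal(size=n)
    z = rng.normal(size=n)                    # irrelevant add-on variable
    X0 = np.column_stack([np.ones(n), x])
    X1 = np.column_stack([np.ones(n), x, z])
    r2_increases += r_squared(X1, y) >= r_squared(X0, y)
    # t-test on z's coefficient in the extended model
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    s2 = resid @ resid / (n - 3)
    se_z = np.sqrt(s2 * np.linalg.inv(X1.T @ X1)[2, 2])
    false_positives += abs(beta[2] / se_z) > 1.96   # ~5% critical value

print(f"R-squared increased in {r2_increases / trials:.0%} of trials")        # 100%
print(f"z looked 'significant' in {false_positives / trials:.1%} of trials")  # ~5%
```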

A related flaw shows up in their discussion of Fehr and Schmidt's model of inequality aversion (which assumes that some people dislike inequality, especially inequality in their own disfavor). Berg and Gigerenzer write:

In addition, the content of the mathematical model is barely more than a circular explanation: When participants in the ultimatum game share equally or reject positive offers, this implies non-zero weights on the “social preferences” terms in the utility function, and the behavior is then attributed to “social preferences.”

This, too, is weak. What Fehr and Schmidt's model assumes is that there is a specific structure to the inequity aversion: first, that your dislike of how much better (or worse) off someone else is than you is a linear function of the difference; and second, that it's worse to be behind someone than to be ahead of them, even if you'd most of all prefer to be equal. The model may be "wrong," but it is more than circular, and there is a variety of competing models that others have promoted as better ways of capturing typical patterns in experimental data on various economic games (off the top of my head, Charness and Rabin (2002), Bolton and Ockenfels (2000) and Engelmann and Strobel (2004)).
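
For concreteness: the two-player Fehr and Schmidt (1999) utility function has exactly this linear, asymmetric structure. A minimal sketch of it follows; the parameter values here are my own, purely illustrative:

```python
# The two-player Fehr-Schmidt utility function (Fehr & Schmidt, 1999):
#   u_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)
# alpha weighs disadvantageous inequality, beta advantageous inequality,
# and the model assumes alpha >= beta (being behind hurts more).
# The parameter values below are illustrative, not Fehr and Schmidt's.

def fs_utility(own: float, other: float, alpha: float = 2.0, beta: float = 0.6) -> float:
    envy = max(other - own, 0.0)    # disadvantageous inequality
    guilt = max(own - other, 0.0)   # advantageous inequality
    return own - alpha * envy - beta * guilt

# A responder facing an 8/2 split of a 10-unit pie in the ultimatum game:
print(fs_utility(own=2.0, other=8.0))  # 2 - 2*6 = -10 < 0: rejecting (0, 0) is better
print(fs_utility(own=5.0, other=5.0))  # 5: an equal split carries no inequality penalty
```

With these (made-up) parameters a responder prefers rejecting an 8/2 split to accepting it – exactly the ultimatum-game behavior the model was built to capture – so the structure does real work; it is not just a relabelling of the data.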

Having said that, it might well be that Fehr and Schmidt's is a crude model that fails to capture and process the relevant data in the best way. However, it does well enough to be useful and interesting. If you found a model that did better – one that also predicted well for new experiments and in different settings, using less information that could more credibly be related to actual psychological processes – then I'm pretty sure you would be published quickly in a good journal. That's not to say that "you shouldn't criticize unless you can do better," but it is to say that the current model captures something interesting in a simple way - even if it is clearly imperfect. Clarifying its weaknesses is fair game - but Berg and Gigerenzer should do better than brushing it off as though its fit with the data were no better than that of any random model thrown up.