Tuesday, May 10, 2011

Is behavioral economics a flawed band-aid on the neoclassical enterprise?

I finally got around to reading the paper “As-if behavioral economics: Neoclassical economics in disguise?” by Nathan Berg and Gerd Gigerenzer this past Easter holiday. I found it enjoyable, often insightful, and somewhat confused. It contained a lot of stuff, so I'll split this into several parts.
Today I'll merely go through the overall “story” they seem to be operating from. This isn't the “storyline” of the paper, but more the story such as I can piece it back together from the pieces and clues they scatter throughout the paper.
Their story is that economics was a sensible science informed by psychology until an Italian economist called Pareto turned it into the current, neoclassical “monster” we have today. In their words, there was
a fundamental shift in economics which took place from the beginning of the twentieth century: the ‘Paretian turn’. This shift, initiated by Vilfredo Pareto and completed in the 1930s and 1940s by John Hicks, Roy Allen and Paul Samuelson, eliminated psychological concepts from economics by basing economic theory on principles of rational choice.
This new framework assumed that people's stable preferences can be described by a mathematical utility function such that any good (provided in sufficient quantities) can fully compensate for a reduction in any other good.
If, for example, x represents a positive quantity of ice cream and y represents time spent with one's grandmother, then as soon as we write down the utility function U(x, y) and endow it with the standard assumptions that imply commensurability, the unavoidable implication is that there exists a quantity of ice cream that can compensate for the loss of nearly all time with one's grandmother.
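Their commensurability point is easy to make concrete with a small numerical sketch. Here I assume a Cobb-Douglas utility function U(x, y) = x^a · y^b; the specific functional form and all the numbers are my own illustration, not the paper's, but any utility function with the standard properties behaves the same way:

```python
# Hypothetical illustration: Cobb-Douglas utility U(x, y) = x**a * y**b,
# where x = units of ice cream and y = hours spent with one's grandmother.
# The functional form and parameter values are my assumptions.

def utility(x, y, a=0.5, b=0.5):
    return x**a * y**b

def compensating_x(x0, y0, y_new, a=0.5, b=0.5):
    """Quantity of ice cream that restores U(x0, y0) after y falls to y_new."""
    u0 = utility(x0, y0, a, b)
    # Solve x**a * y_new**b = u0  =>  x = (u0 / y_new**b) ** (1/a)
    return (u0 / y_new**b) ** (1 / a)

# Start with 2 units of ice cream and 100 hours of grandmother time,
# then cut grandmother time down to a single hour:
x_needed = compensating_x(x0=2.0, y0=100.0, y_new=1.0)
print(round(x_needed, 1))  # 200.0 -- this much ice cream restores the old utility level
```

However distasteful the trade-off feels, the math guarantees a finite answer: as long as marginal utility of x stays positive, some quantity of ice cream always exists that exactly offsets the lost time.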
In addition, this framework built up an axiomatic, logical theory of normative rationality centered around internal consistency. That is to say, they argued that people should have transitive preferences, conform to expected utility axioms and have Bayesian beliefs.
This was actually just an unsupported (and, in Berg and Gigerenzer's view, false) assumption: they never even attempted to establish that following such rules would lead to better outcomes in the real world.
Expected utility violators and time-inconsistent decision makers earn more money in experiments (Berg, Johnson, Eckel, 2009).
Because this theory completely misspecified how people make choices and form beliefs, it became necessary to ignore the realism of the assumptions. For this reason, economists turned to the “as-if” methodology that they saw Friedman as having preached: models are to be evaluated solely on how well they predict - the realism of their assumptions is irrelevant. Berg and Gigerenzer describe this as
the Friedman as-if doctrine in neoclassical economics focusing solely on outcomes.
This did not fully solve the underlying problem: Since people do not choose in this way, predictive ability was poor. Behavioral economists initially wanted to tackle the root of the problem by reintroducing realism (psychology) into the description of consumer behavior. After a while, though, they were instead reduced to adding bells and whistles of various kinds to patch up the existing formal framework so that it would better predict in an as-if sense.
Instead of asking how real people—both successful and unsuccessful—choose among gambles, the repair program focused on transformations of payoffs (which produced expected utility theory) and, later, transformations of probabilities (which produced prospect theory) to fit, rather than predict, data. The goal of the repair program appeared, in some ways, to be more statistical than intellectual: adding parameters and transformations to ensure that a weighting- and-adding objective function, used incorrectly as a model of mind, could fit observed choice data.
By introducing further complications into the choice models, their work actually made things worse - it made the resulting “theory” of human choice even less plausible.
Leading models in the rise of behavioral economics rely on Friedman‘s as-if doctrine by putting forward more unrealistic processes—that is, describing behavior as the process of solving a constrained optimization problem that is more complex—than the simpler neoclassical model they were meant to improve upon.
On the normative side, the deviations captured by these behavioral “epicycles” came to be seen as biases and flaws in need of nudging and paternalistic regulation.
To these writers (and many if not most others in behavioral economics), the neoclassical normative model is unquestioned, and empirical investigation consists primarily of documenting deviations from that normative model, which are automatically interpreted as pathological. In other words, the normative interpretation of deviations as mistakes does not follow from an empirical investigation linking deviations to negative outcomes. The empirical investigation is limited to testing whether behavior conforms to a neoclassical normative ideal.
Finally, perhaps in an effort to avoid revealing how poor both the neoclassical and behavioral models actually are, the bar for predictive success was lowered even further by turning model evaluation into an exercise in fitting models to existing data rather than in making successful out-of-sample predictions.
Behavioral models frequently add new parameters to a neoclassical model, which necessarily increases R-squared. Then this increased R-squared is used as empirical support for the behavioral models without subjecting them to out-of-sample prediction tests.
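The mechanics behind that complaint are easy to reproduce: in nested linear models, adding a regressor can never lower in-sample R-squared, even when the extra parameter is pure noise. A minimal sketch (the data-generating process is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: y depends on x1 only; x2 is a pure-noise regressor
# standing in for a behavioral "extra parameter".
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)

def r_squared(X, y):
    """In-sample R-squared of an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones(n)
small = np.column_stack([ones, x1])        # baseline model
large = np.column_stack([ones, x1, x2])    # same model plus a noise parameter

# The larger nested model mechanically fits at least as well in-sample:
print(r_squared(small, y) <= r_squared(large, y))  # True
```

So a higher in-sample R-squared for the augmented model is automatic and carries no evidential weight by itself; only out-of-sample prediction tests, which the authors say are rarely run, could separate genuine explanatory gain from overfitting.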
That's the story as I read it, and the authors go on to describe what they think should be done instead. But that will have to wait for another time.