I take this as evidence against the view that people systematically miscalculate expected utility in repeated, real market settings. If they did, you would expect to see commercial lures like this much more often. Maybe in mortgage markets, or credit card markets, people are overoptimistic about the bad (too many floating-rate mortgages, or too many people accepting the risk of high default fees), but I don't think in pizza markets they are overoptimistic about the good. A restaurant which makes this kind of offer, of course, has to charge systematically higher prices, the greater the customer's chance of winning the lottery.

I wholeheartedly agree. If people systematically miscalculated expected utility in repeated, real market settings, you would see a lot more stuff like, for instance, rip-off "extra warranties" when you purchase electronic equipment, or maybe even "lotteries," slot machines, roulette games and the like with negative expected payoff that people participated in regularly. Good thing that's only the stuff of science fiction.
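To make the pricing point concrete, here is a minimal sketch with made-up numbers (the price and the odds are purely hypothetical, not taken from any actual offer): a restaurant running a "your order might be free" lottery must raise the sticker price so that expected revenue per order is unchanged, which leaves a risk-neutral customer no better off.

```python
def breakeven_price(base_price: float, win_prob: float) -> float:
    """Sticker price that keeps expected revenue equal to base_price
    when a winning customer pays nothing."""
    return base_price / (1.0 - win_prob)

old_price = 10.00   # hypothetical pre-lottery price
p_win = 1 / 50      # hypothetical chance that an order is free

new_price = breakeven_price(old_price, p_win)
expected_cost = (1 - p_win) * new_price  # what a customer pays on average

print(round(new_price, 2))      # → 10.2  (the menu price must rise)
print(round(expected_cost, 2))  # → 10.0  (expected cost equals the old price)
```

Only a customer who overweights the small winning probability would prefer the lottery-priced pizza, which is the commenter's point: such lures should be everywhere if miscalculation were systematic.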
Friday, December 11, 2009
Friday, November 27, 2009
It’s a bit childish, but it’s also fun to see ridicule or teasing wrapped up in an academic argument. The short and blunt version: “Ha ha! Modern macro is doing the same thing as the planned-economy guys they (rightly) think are stupid. That means the modern macro guys are stupid as well!”
The argument is an interesting and (I think) insightful analogy that bunches the free-market macroeconomists in with old-style planned-economy advocates that they are politically opposed to :-)
In the old debates about planned vs. market-based economies, the “libertarian” economist Friedrich Hayek (who believed on theoretical grounds that any intervention in the market would set in motion a slippery slope to totalitarianism) argued the importance of information: prices, emerging in free markets, were the only way of gathering the often tacit knowledge of market participants into a form that communicated the relative values of various uses of a resource. A lack of market-based prices meant ignorance regarding the value of resources, and misallocation of capital, manpower, etc.
Keeping this in mind, it is fun to see the argument that modern representative-agent macroeconomics makes the same mistake as the advocates of planned economies. Both ignore the importance of dispersed knowledge, over-emphasize the ease with which information can be gathered into a true and total picture of the world, and base their work on one or more agents having this total understanding of the world (the central planner, or the representative agent).
Thursday, November 5, 2009
"All economists are wrong, but some are useful."
Should I have written `some' with a capital S...?
Monday, October 19, 2009
Serious economist: Giving a gift is not efficient if the recipient values the gift at less than its full cost. In fact, even if the recipient values it above cost, the gift may not be efficient, since - assuming you know your own preferences better than other people do - you could have spent the money on something that gave you an even higher surplus. So, obviously, we should only give each other money. By not doing so, Americans waste around $12 billion per year, according to very unbiased estimations.
I do not know whether to laugh or cry. I once read a satirical paper about this in an economics journal. To my surprise, I then found that the American Economic Journal has published several articles on the topic - with lots of discussion. And please tell me that I was wrong, please tell me that I just misunderstood the articles, please, please ... but it seems to me that these papers were not satirical.
And now a book has been published by one of the participants: Scroogenomics. The $12 billion is from the book. I have not read it and I pray to God that it is satirical, clever and funny ... but from the description it sounds like he is a serious economist.
Why do I hope this? I would not mind getting more money and less baggy underwear, but the nature and meaning of a gift exchange is often very different when it involves money. Ariely has some observations on this in his book Predictably Irrational. For instance, bringing wine to a dinner party is OK. Giving money instead is not. Maybe economists would advise us to change the norms, but maybe economists should also recognize that this is not all there is to it. Waldfogel probably knows all this, but goes on anyway. So instead of just saying that it is absurd, I did a real survey and asked people about their willingness to sell gifts, and at what price. The result? Many gifts were valued much higher than their cost. Why? Probably because of the sentimental value. So instead of destroying value, Christmas gifts increased value in my calculations.
Wow, I cannot believe I was tricked into doing this - taking the discussion seriously and playing on their court. Even if the estimation had shown a loss, I do not believe we should take it seriously. There are simply too many aspects of gift giving that will be left out of the quantification to make the conclusion credible. Verdict: Absurd!

Afternote: Oh no, he is serious! See the interview.
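For what it's worth, the arithmetic behind both Waldfogel's number and my counter-survey is trivial. A sketch with invented illustrative figures (none of these valuations come from any actual survey): the textbook "deadweight loss" sums cost minus material valuation, while adding a sentimental component can flip the sign.

```python
# Each tuple: (cost to giver, recipient's material valuation,
#              sentimental premium) - all numbers are hypothetical.
gifts = [
    (50.0, 35.0, 30.0),
    (20.0, 10.0, 25.0),
    (80.0, 70.0, 5.0),
]

# Waldfogel-style deadweight loss: cost minus material valuation, summed.
material_loss = sum(cost - value for cost, value, _ in gifts)

# Surplus once sentimental value is counted, as in the informal survey above.
total_surplus = sum(value + sentiment - cost for cost, value, sentiment in gifts)

print(material_loss)   # → 35.0  positive: the "deadweight loss of Christmas"
print(total_surplus)   # → 25.0  positive: sentiment outweighs the loss
```

The point of the exercise is not the numbers but their fragility: the verdict depends entirely on which components of value one decides to quantify.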
Tuesday, October 13, 2009
Q. In your view, what can save economics?
A. I am very pessimistic about whether we can actually pull out of this. I think we have created a locomotive. This is the sociology of the economics profession. We have created a monster that is very difficult to stop.
Q. Could real-world empirical facts or a severe economic cataclysm change it?
A. That would certainly change it, but I do not see that around the corner. Perhaps I am too pessimistic, and it is very depressing to stay there. There does not seem to me to be any way out.
(Mark Blaug, The problems with formalism – Interview with Mark Blaug, Challenge, May-June 1998)
Friday, September 25, 2009
Economists are so preoccupied with efficiency that they twist debates away from more important aspects. Often, big social questions do not need to be answered optimally at the margin; they need the right answer.
We economists argue based on what economics is: the subject concerned with the allocation of scarce resources - the subject that teaches efficiency.
But how much should (the rest of) society care about efficiency? How important is it compared to other goals?
ex 1: the equity debate. Economists usually retreat from any debate about unjust distributions of opportunities, wealth, factors and everything else by claiming that, due to the "welfare theorems" of some mathematician from long ago by the name of Walras, they do not need to care: the cake should be as big as possible, and the sharing can be done afterwards. In that sense, we claim, we are doing equity a favour - more is surely better, isn't it?
We solve the social optimization problem and then say a perfect market can achieve it, given the right initial distribution of "x". But nobody says out loud that we cannot change the initial allocations in practice. Lump-sum transfers do not grow on trees on this planet… And, of course, the initial distributions are not "right". So we do not get at the real issue - the welfare loss due to wrong initial allocations. The welfare losses we can get at are the efficiency losses from redistributive measures - that, we are good at. But that is often not the issue. And by the way, we have no social welfare function in the first place, making the initial problem unsolvable (Arrow's impossibility theorem).
That is not to say that the entire subject of economics ignores the relationship between inequities and efficiency - far from it. Development economists point out causal effects running both ways. But when economists are not directly concerned with distributive effects, they rest their not-so-troubled minds on the pillow provided by Walras and go on arguing for efficiency as if there were no other game in town.
ex 2: the climate/technology debate. What is efficient today may not be so good for future developments. If one wants to retain or increase living standards and avoid climate change, one needs new production and consumption technologies.
For sure, we economists do care about growth and have all kinds of growth models: Solow, Romer, Acemoglu, what have you. A caricature of the standard models is that you "throw money at the R&D sector and out will come growth". And of course there is research about climate change that accounts for technological development.
But what comes out of economics into the public are mainly static efficiency arguments - we should start cleaning up where it is cheapest. Buy into CDM projects in less developed countries. Put up a market and get the prices right and stuff.
The argument for doing the most expensive things today, at home, does exist: that might mean more rapid technological advances, rather than (or in addition to) installing scrubbers in some ancient coal plants far away. So does the argument that this symbolizes political will and leadership, engages the public and other countries, etc. But those arguments are fuzzy and not as mathematically clear-cut as the efficiency argument. Hence they tend to be mentioned as an aside rather than as the real issue.
I'm not saying that this or that argument is right or wrong. I'm just saying the latter tends to be put aside because it isn't about what we are most preoccupied with - efficiency. And that means trouble if that's not what it's all about.
That doesn't get easier if economic results are sold as if they contained more than just efficiency - as if they contained everything. The implicit or explicit belief that all arguments can be wrapped into some fabric of supply-and-demand crosses leads us to neglect that there is more to a social issue than can be put into economic models. Again, economics is about the allocation of scarce resources. A lot of the world can be looked at that way. But not everything.
Monday, September 21, 2009
The previous post raised two questions that need to be kept separate:
- “How could you convince a fellow die-hard-formalistic-member-of-the-economics-tribe that the profession has gone wrong because it fell in love with mathematical formalism?”
- Is it true? I.e., is it possible to empirically validate the hypothesis that economists believe stupid empirical claims on the basis of empirically irrelevant mathematical proof/beauty/elegance/norms/whatever?
These are two VERY different questions. The first one bites itself in the tail: If someone relies on irrelevant and false criteria when evaluating claims, then you would have to come up with an argument that succeeds given these false and irrelevant criteria in order to convince him. This sounds logically humorous, like convincing a Christian that God doesn’t exist by making him believe God told him so. Or, in our case, making a nice, formal model that fits the intuition of economists, coheres with all established formal protocol, elegantly extends the standard framework of microeconomic choice, and that results in some absurd and silly proof that rational agents can be rationally misguided by acting and judging in the way the economists see themselves as acting and judging.
Put differently, this is a question of how to manipulate a system (in this case the one determining the beliefs of economists), how to change someone’s beliefs. It is a question of psychology.
The second question is – I believe – more important. There is a clear distinction (at least conceptually) between things that influence choice/thought/behavior and things that a person him-/herself believes to influence his/her choice/thought/behavior. What we would like to know is what kind of evidence/arguments/proofs economists emphasize when judging empirical claims from other economists. I am writing as I’m thinking here – always a somewhat risky proposition – but some first thoughts:
- Ask a sample of economists for the 5 strongest arguments that support some academically based empirical conclusion they support (“What are the 5 strongest reasons you know that support your view that technological shocks drive business cycle behavior?”)
- Find some way of categorizing articles and their mix/share of theory/empirics and sum over the references used to support policy claims by economists
- Ask what kinds of implications economists draw from some branch of theory – and then ask what kinds of evidence they believe to be relevant for judging these (an approach Hans Olav Melberg and I have attempted in an article we are currently finishing up)
Thursday, September 17, 2009
Wednesday, September 16, 2009
For a brief while there will continue to be a flurry of debate over the intellectual shortcomings of modern economics, centered on the perceived failure of modern macroeconomics in the wake of the financial crisis. A great list of recent contributions to the macroeconomics debate can be found here, on Mark Thoma’s blog “Economist’s View”.
Have fun while it lasts – but don’t think things will change. My prediction is that there may be a surface change in which models the profession believes in and so on, but that the underlying issues of “as-if,” consistency with “standard assumptions,” over-mathematization of weird psychological assumptions, overreliance on “stylized facts” and quasi-scientific “rigor,” and all the rest will remain more or less as they are. I believe the big problem is that the criteria we use to evaluate claims about the real world are severely lacking – and this is not really improved by exchanging the current models for models that a broader set of people feel comfortable with. If anything, this may impede progress: critical assessment of our subject will drop as people become more comfortable with what we say.
Never underestimate the power of the dark side.
Thursday, September 3, 2009
However, buried in the article are also some quotes about economists and their relationship with evidence that deserve to be repeated. First:
But there was something else going on: a general belief that bubbles just don’t happen. What’s striking, when you reread Greenspan’s assurances, is that they weren’t based on evidence — they were based on the a priori assertion that there simply can’t be a bubble in housing.
However, he also writes, more interestingly, that:
To be fair, finance theorists didn’t accept the efficient-market hypothesis merely because it was elegant, convenient and lucrative. They also produced a great deal of statistical evidence, which at first seemed strongly supportive. But this evidence was of an oddly limited form. Finance economists rarely asked the seemingly obvious (though not easily answered) question of whether asset prices made sense given real-world fundamentals like earnings. Instead, they asked only whether asset prices made sense given other asset prices. Larry Summers, now the top economic adviser in the Obama administration, once mocked finance professors with a parable about “ketchup economists” who “have shown that two-quart bottles of ketchup invariably sell for exactly twice as much as one-quart bottles of ketchup,” and conclude from this that the ketchup market is perfectly efficient.
Ketchup economists! The real problem was not that they did not use empirical evidence, but that only a certain type of evidence was admitted. This is different, and it raises the question of why only some types of evidence were accepted. Ideology? Ease of quantification?
Tuesday, September 1, 2009
Two popular ways of making waves in economics are:
- Revealing a seemingly smart thing to be dumb. Economists love unintended consequences. We love it if you can build up an argument that regulatory agencies will actually benefit the monopolistic industries they are set to serve (e.g., Stigler’s regulatory capture), that politicians are no more interested in the “public good” than business leaders (e.g., Buchanan’s Public Choice), that fiscal policy has no effects (e.g., Lucas and Barro), that minimum wages hurt the poor, etc.
- Revealing a seemingly dumb thing to be smart. Economists also love it if you can prove that individual optimization and markets are smarter and better than non-economists believe. We love it if you can build up an argument that criminals rationally weigh costs and benefits, risks and penalties, that junkies getting hooked are implementing forward-looking rational “taste-planning” (both Becker, who has a long, long list of such arguments), that QWERTY keyboards are actually better than any alternatives, that Betamax deserved to lose out to VHS, and so on.
Of course – any of these may be true. Sometimes truth happens to lie where we want it. My point here is more that I have a gut-feeling (that might well be wrong) that tells me these two types of narratives are welcomed more eagerly by economists than mechanisms or hypotheses that don’t fit the mould.
Sometimes I get the feeling that evolutionary biology has some of the same problems. This is related to the “feud” between Dawkins and the late Stephen Jay Gould. Both fought for evolution and wrote excellent popular non-fiction, but Gould felt that Dawkins put too much of a “full optimization equilibrium” spin on the reality around us.
An example of the latter is the paper discussed briefly here. This paper argues that depression – which seems like a dumb thing for an organism to fall into – is actually really really smart.
When one considers all the evidence, depression seems less like a disorder where the brain is operating in a haphazard way, or malfunctioning. Instead, depression seems more like the vertebrate eye—an intricate, highly organized piece of machinery that performs a specific function.
Tuesday, August 25, 2009
I do have one qualm, though, which isn’t really about Bernanke, but rather about the broader symbolism of the reappointment — namely, it unfortunately seems to be a reaffirmation of Serious Person Syndrome, aka it’s better to have been conventionally wrong than unconventionally right.
This sounds right and important. And it need not apply only to reappointments, but to theories and whether they are accepted or not. Absurd statements in conventional packaging - accepted. True statements in unconventional packaging - rejected. But this is not just a complaint, more a call to investigate and change: investigate whether there really is such a bias and, if so, try to change the conventions so that they function better as a filter to distinguish between good and bad theories.
Is it possible to do this? And how? First, to believe in the Serious Person Syndrome I would like to have something more than anecdotes to base it on. For instance, an experiment in which individuals play a game in which, every round, some people win or lose based partly on ability and partly on luck. Moreover, there would be some "conventional" strategy and some unconventional strategies, maybe based on focal points or just on norms or conventions established before the game by the instructors. One could then examine whether the players assigned less blame to a player who was playing an unsuccessful but conventional strategy than to one playing an unsuccessful unconventional strategy.
Vague words, you say? What is the empirical proof? Well, I recall a paper where the author believed he had found a sub-optimal conventional strategy in American football. Its persistence was explained by coaches trying to avoid blame for losing a game: there was less blame involved in choosing the sub-optimal, but conventional, strategy.
Maybe the reader can give more examples and create better experiments?
Thursday, August 20, 2009
One successful strategy in microeconomics is to search for something that seems stupid, ignorant, misguided, etc., and dream up some implausible, ridiculous story that explains it as actually being sophisticated, subtle optimization. The explanation is linked very loosely to the real world by tying selected assumptions and effects to anecdotes or stylized facts, and if someone says it is all nonsense, you can retreat to the “as-if” defence. Since you can “explain” it through your convoluted Alice-in-Wonderland model, you can then claim that the phenomenon in question is “no longer a challenge to standard rational choice theory.”
Personally, I became aware of this through my PhD work on rational addiction theory. A 1988 note from University of California, Berkeley Professor Jeffrey A. Frankel that was recently linked to in Krugman’s blog points to the same thing in macroeconomics.
Stockman comments on how everything can be made consistent with dynamic stochastic general equilibrium models based on individual optimization (the “equilibrium view”). When economists thought purchasing power parity held, this was interpreted as evidence in favor of the equilibrium view. When they thought it didn’t… well, this was evidence in favor of the equilibrium view. So when a dataset doesn’t allow you to reject the null hypothesis that the real exchange rate follows a random walk, this:
[…] is […] interpreted as evidence in favor of the equilibrium theory, even though the latter has no more testable implications for the real exchange rate than does the proposition that 9 is a prime number.
He notes evidence that would seem to favor sticky-price theory, and shows how “equilibrium” theorists work out convoluted, bizarre models to “explain” this within their framework.
Such explanations are clever, and make for good journal articles that are popular among academic economists. But that doesn't make them true.
Speaking of "agents," spy novels are a good analogy for stories that are clever and make entertaining reading, but have little to do with the truth. Datum: a few minutes ago, I got up from my chair next to Alan Stockman on the stage, and walked over to take my place here at the podium. Hypothesis 1: I am a spy for a foreign power, Alan is a CIA counterspy who was about to assassinate me, and so I got up to move out of range. This hypothesis is "consistent with the facts" in the sense that, if true, it would explain them; but it is convoluted and not very plausible. Hypothesis 2: John Le Carre was in British intelligence before he began his second career as a novelist. This hypothesis is interesting to speculate about. I have no idea whether it is true or not. It is also "consistent with the facts" in the weak sense that it does not contradict the datum. But it seems no more relevant than the statement that 9 is a prime number, the proposition that agents dynamically optimize, or the hundreds of other hypotheses that I "fail to reject" every morning in the shower. Hypothesis 3: I came up to the podium for the simple reason that AEI invited me here to comment on Alan's paper. While not as clever as the other propositions, this hypothesis is simple, plausible, and consistent with the facts in the strong sense that it would explain them while most other hypotheses would not. (I will leave it to you to decide which hypothesis is the correct one.)
He also has some fun comments on the flip to a state where the goal of macroeconomics became to find nothing (no explanation, no relevant causal variables etc.):
The word "nothing" will play a key role in my comments.
["We know nothing, therefore we should do nothing."] The word does not often appear explicitly in the writings of equilibrium theorists. The popular phrase in the econometric writings is "random walk." [The usual conclusion is stated as "I have found that such-and-such a variable follows a random walk." Or, at best, "I cannot reject the hypothesis that this variable follows a random walk." You seldom hear someone say, "After studying this variable for 6 months, I have absolutely nothing to say that would help to predict its movements." But the statements mean the same thing.] In Stockman's paper, the phrase is "in the current state of knowledge:" "In the current state of knowledge...exchange rates and the current account should play little role...[in the conduct of monetary policy]"
Tuesday, August 11, 2009
One cannot find good, under-forty economists who identify themselves or their work as ‘Keynesian’. Indeed, people even take offense if referred to as ‘Keynesians’. At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another.
I loved finding this quote by Lucas. It means I can finally enjoy the Solow quote below without any trace of bad conscience:
Suppose someone […] announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the Battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon Bonaparte.
(Bob Solow, talking about Robert Lucas’s rational expectations work in “Conversations with Economists” by Klamer and Colander)
Saturday, August 1, 2009
The standard defence of absurd theories is usually that they predict. Predictive success is the only relevant criterion, assumptions be damned. But what should we say when we fail to predict – and fail spectacularly at that, as in the recent financial crisis? The answer: flip our defence around. We may not predict, but at least we have the understanding and insight to explain what happened after the fact. As Chris Dillow discusses here.
He starts by pointing out that it is possible to find theoretical concepts, small groups or individual researchers that have worked on things in the past that now seem relevant to understanding the financial crisis. He then relates this to Jon Elster’s troubling view of mechanisms as a form of completely non-predictive understanding.
To put it more crudely than this guy does himself: we have loads and loads of story elements in economic theory that can be combined to join any set of assumptions to any outcome. All we need is to know what outcome actually occurred, and we’ll be able to serve up a convincing and sophisticated “explanation” in no time.
[…]we have […] lots of mechanisms, capable of explaining why things happen and the links between them. What we don’t have are laws which generate predictions. In his book, Nuts and Bolts for the Social Sciences, Jon Elster stressed this distinction. The social sciences, he said:
“Can isolate tendencies, propensities and mechanisms and show that they have implications for behaviour that are often surprising and counter-intuitive. What they are less able to do is to state necessary and sufficient conditions under which the various mechanisms are switched on.”
This is precisely the problem economists had in 2007. We knew that there were mechanisms capable of generating disaster. What we didn’t know is whether these were switched on. The upshot is that, although we didn’t predict the crisis, we can more or less explain it after the fact. As Elster wrote:
“Sometimes we can explain without being able to predict, and sometimes predict without being able to explain. True, in many cases one and the same theory will enable us to do both, but I believe that in the social sciences this is the exception rather than rule.”
The interesting question is: will it remain the exception? My hunch is that it will; economists will never be able to produce laws which yield systemically successful forecasts.
What’s more, I am utterly untroubled by this. The desire for such laws is as barmy as the medieval search for the philosopher’s stone. If you need to foresee the future, you are doing something badly wrong.
I especially dig the ending. Friedman, in the more extreme statements of his “as-if” article on positive economics, made absurd assumptions a virtue, and now this guy is making a lack of predictive success a virtue.
Turning our assumptions on ourselves as economists, we are (of course) always doing the optimal thing. A fact that gives me immense peace of mind: Whatever I’m doing, I can rest assured that I could have done no better.
Thursday, July 23, 2009
This is a very sensible criticism that is subject to a fairly predictable—and likewise sensible—response: if we throw out the models that incorporate unrealistic assumptions, what do we have left? Correlations and rhetoric, which can only get you so far.
This is a stupid “something is better than nothing” defence that I’ve come across several times before. It’s based on the assumption that any map is better than no map. Yeah, right.
“Hey, Honey, I don’t have a map of this crocodile-infested swamp – but don’t worry, I have a map of New York so we’ll be out of here in no time…”
Thursday, April 30, 2009
…in that they think the other guy is an idiot…
The mood now is uglier. On the left, Krugman says: "This is really fairly shameful, that we should be wasting precious months as a profession retracing debates that were settled 70 years ago." On the right, John H. Cochrane of the University of Chicago dismisses those who advocate Keynesian stimulus, saying: "Professional economists, the guys I hang out with, are not reverting to ancient Keynesianism any more than physicists are going back to Aristotle when they can't understand how fast the universe is expanding."
From BusinessWeek – an otherwise not very impressive essay that all the same has a couple of minor bits of interest.
An essay I’ve meant to read for some time, from the Financial Times blog, is called “The unfortunate uselessness of most ’state of the art’ academic monetary economics”. It was well written and entertaining. I’m tempted to quote amusing passages such as
Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH [i.e. Efficient Market Hypothesis] in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education.
However, his serious points deserve mention, although I’m not qualified to state the extent to which he is on target regarding the state of the macro literature. Here are some snippets, but the whole thing is recommended:
On complete markets:
The most influential New Classical and New Keynesian theorists all worked in what economists call a ‘complete markets paradigm’. In a world where there are markets for contingent claims trading that span all possible states of nature (all possible contingencies and outcomes), and in which intertemporal budget constraints are always satisfied by assumption, default, bankruptcy and insolvency are impossible.[…]
Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked.
[…] Goods and services that are potentially tradable are indexed by time, place and state of nature or state of the world. Time is a continuous variable, meaning that for complete markets along the time dimension alone, there would have to be rather more markets for future delivery (infinitely many in any time interval, no matter how small) than you can shake a stick at. Location likewise is a continuous variable in a 3-dimensional space. Again rather too many markets. Add uncertainty (states of nature or states of the world), never mind private or asymmetric information, and ‘too many potential markets’, if I may ruin the wonderful quote from Amadeus attributed to Emperor Joseph II, comes to mind. If any market takes a finite amount of resources (however small) to function, complete markets would exhaust the resources of the universe.
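The counting argument above survives even a crude, finite caricature. Suppose (purely hypothetically) we discretise the continuous dimensions into a modest grid; the number of distinct contingent-claims markets is just the product of the grid sizes:

```python
# Purely illustrative discretisation of the "complete markets" requirement:
# one market per (good, place, date, state-of-nature) combination. All
# grid sizes below are invented for illustration.
goods, places, dates, states = 100, 1_000, 1_000, 1_000

contingent_markets = goods * places * dates * states
print(f"{contingent_markets:,}")  # → 100,000,000,000 distinct markets
```

Since even the tiniest positive per-market running cost scales linearly in this count, the coarsest grid already makes the "exhaust the resources of the universe" line less of a joke than it sounds.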
On efficient markets
In financial markets, and in asset markets, real and financial, in general, today’s asset price depends on the view market participants take of the likely future behaviour of asset prices. If today’s asset price depends on today’s anticipation of tomorrow’s price, and tomorrow’s price likewise depends on tomorrow’s expectation of the price the day after tomorrow, etc. ad nauseam, it is clear that today’s asset price depends in part on today’s anticipation of asset prices arbitrarily far into the future. Since there is no obvious finite terminal date for the universe (few macroeconomists study cosmology in their spare time), most economic models with rational asset pricing imply that today’s price depends in part on today’s anticipation of the asset price in the infinitely remote future.
What can we say about the terminal behaviour of asset price expectations? The tools and techniques of dynamic mathematical optimisation imply that, when a mathematical programmer computes an optimal programme for some constrained dynamic optimisation problem he is trying to solve, it is a requirement of optimality that the influence of the infinitely distant future on the programmer’s criterion function today be zero.
And then a small miracle happens. An optimality criterion from a mathematical dynamic optimisation approach is transplanted, lock, stock and barrel to the behaviour of long-term price expectations in a decentralised market economy. In the mathematical programming exercise it is clear where the terminal boundary condition in question comes from. The terminal boundary condition that the influence of the infinitely distant future on asset prices today vanishes, is a ‘transversality condition’ that is part of the necessary and sufficient conditions for an optimum. But in a decentralised market economy there is no mathematical programmer imposing the terminal boundary conditions to make sure everything will be all right.
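The role of the transversality condition is easy to see in a toy calculation. This is my own sketch, not from Buiter's post; the constant dividend and discount rate are assumed for illustration. The fundamental price solves the pricing recursion, but so does the fundamental price plus any "bubble" growing at the rate of interest, and nothing inside the decentralised market imposes the terminal condition that rules the bubble out:

```python
# Toy illustration (assumed parameters): a constant-dividend asset priced
# by the recursion p_t = (d + p_{t+1}) / (1 + r).  The "fundamental" price
# d/r solves it, but so does d/r plus any bubble growing at rate (1 + r) --
# only a terminal (transversality) condition rules the bubble out.
r, d = 0.05, 1.0          # discount rate, dividend per period (assumed)
fundamental = d / r       # = 20.0

def next_price(p):
    """Invert the recursion: p_{t+1} = (1 + r) * p - d."""
    return (1 + r) * p - d

# The fundamental price is a fixed point of the recursion...
p = fundamental
for _ in range(50):
    p = next_price(p)
assert abs(p - fundamental) < 1e-9

# ...but the tiniest deviation (a "bubble") explodes instead of dying out.
p = fundamental + 0.01
path = []
for _ in range(200):
    p = next_price(p)
    path.append(p)
print(path[-1] - fundamental)  # the 0.01 bubble has grown by (1.05)**200
```

Both paths satisfy the recursion equally well at every date; only the boundary condition at infinity separates them, and in a market economy there is no programmer to impose it.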
On linearization in modern macro
If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave. So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved. This was achieved by completely stripping the model of its non-linearities and by achieving the transubstantiation of complex convolutions of random variables and non-linear mappings into well-behaved additive stochastic disturbances.
Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc. will appreciate what is lost by this castration of the macroeconomic models. Threshold effects, critical mass, tipping points, non-linear accelerators - they are all out of the window. Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive.
Technically, the non-linear stochastic dynamic models were linearised (often log-linearised) at a deterministic (non-stochastic) steady state. The analysis was further restricted by only considering forms of randomness that would become trivially small in the neighbourhood of the deterministic steady state. Linear models with additive random shocks we can handle - almost!
Even this was not quite enough to get going, however. As pointed out earlier, models with forward-looking (rational) expectations of asset prices will be driven not just by conventional, engineering-type dynamic processes where the past drives the present and the future, but also in part by past and present anticipations of the future. When you linearise a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner. There is no ‘bounded instability’ in such models. The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearised model, the explosive solution trajectories. What they were left with was something that, following an exogenous random disturbance, would return to the deterministic steady state pretty smartly. No L-shaped recessions. No processes of cumulative causation and bounded but persistent decline or expansion. Just nice V-shaped recessions.
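The "stable or explosive, nothing in between" property is easy to see in a toy example (my own sketch, not from the post): a linear law of motion can only decay back to the steady state or blow up, while even a very simple nonlinear map can tip over to a new, bounded steady state – exactly the kind of persistent displacement the linearised models rule out:

```python
import math

# Sketch (assumed toy dynamics): linearised motion x_{t+1} = a * x_t can
# only do two things -- decay to the steady state (|a| < 1) or explode.
def linear_path(a, x0, steps=100):
    x = x0
    for _ in range(steps):
        x = a * x
    return x

assert abs(linear_path(0.9, 1.0)) < 1e-4   # V-shaped return to steady state
assert linear_path(1.1, 1.0) > 1e3         # relentless explosion

# A nonlinear map, x_{t+1} = tanh(k * x_t) with k > 1, has a third option.
# Its linearisation at 0 (x_{t+1} ~= k * x_t) predicts explosion, but the
# true dynamics settle at a new, bounded steady state: persistent
# displacement with no return and no blow-up.
def nonlinear_path(k, x0, steps=200):
    x = x0
    for _ in range(steps):
        x = math.tanh(k * x)
    return x

x_final = nonlinear_path(2.0, 0.01)
assert 0.5 < x_final < 1.0                 # bounded, but it never comes back
```

Linearising the tanh map at the steady state throws the tipping point away by construction, which is the castration complained about above.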
Tuesday, April 21, 2009
From a recent interview with Daniel Kahneman:
"In the last half year, the models simply didn't work. So the question arises: Why do people use models? I liken what is happening now to a system that forecasts the weather, and does so very well. People know when to take an umbrella when they leave the house, or when it will snow. Except what? The system can't predict hurricanes. Do we use the system anyway, or throw it out? It turns out they'll use it."
“The interesting psychological problem is why economists believe in their theory, but this is the problem with the theory, any theory. It leads to a certain blindness. It's difficult to see anything that deviates from it."
We only look for information that supports the theory and ignore the rest.
"Correct. That appears to be what happened with Greenspan: He had a theory under which the market operates, and that the market would correct itself."
Thursday, March 26, 2009
Keynes once famously wrote that “Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”
A recent article in Prospect Magazine argues that much modern policy and many practices amongst investors were influenced by the two “false theories” of rational expectations and efficient markets, which
[…] are not only misleading but highly ideological, have become so dominant in academia (especially business schools), government and markets themselves. While neither theory was totally dominant in mainstream economics departments, both were found in every major textbook, and both were important parts of the “neo-Keynesian” orthodoxy, which was the end-result of the shake-out that followed Milton Friedman’s attempt to overthrow Keynes. The result is that these two theories have more power than even their adherents realise: yes, they underpin the thinking of the wilder fringes of the Chicago school, but also, more subtly, they underpin the analysis of sensible economists like Paul Samuelson.
As David Hendry, until recently head of the Oxford economics department, has noted: “Economists critical of the rational expectations based approach have had great difficulty even publishing such views, or maintaining research funding. For example, recent attempts to get ESRC funding for a project to test the flaws in rational expectations based models was rejected. I believe some of British policy failures have been due to the Bank accepting the implications [of REH models] and hence taking about a year too long to react to the credit crisis.”
Although there was never any empirical evidence for REH, the theory took academic economics by storm for two reasons. First, the assumptions of clearly-defined laws and identical expectations were easily translated into simple mathematical models—and this mathematical tractability soon came to be viewed as a more important academic objective than correspondence to reality or predictive power. Models based on rational expectations, insofar as they could be checked against reality, usually failed statistical tests. But this was not a deterrent to the economics profession. In other words, if the theory doesn’t fit the facts, ignore the facts. How could the world have allowed such crassly unscientific attitudes to dominate a serious academic discipline, especially one as important to society as economics?
The article also claims that this was desirable for ideological reasons:
That government activism was doomed to failure was exactly what politicians, central bankers and business leaders of the Thatcher and Reagan periods wanted to hear. Thus it quickly became established as the official doctrine of the political and economic establishments in America—and from this powerful position it was able to conquer the entire academic world.
And for efficient markets:
To make matters worse, rational expectations gradually merged with the related theory of “efficient” financial markets. This was gaining ground in the 1970s for similar reasons—an attractive combination of mathematical and ideological tractability. This was the efficient market hypothesis (EMH), developed by another group of Chicago-influenced academics, all of whom received Nobel prizes just as their theories came apart at the seams. EMH, like rational expectations, assumed that there was a well-defined model of economic behaviour and that rational investors would all follow it; but it added another step. In the strong version of the theory, financial markets, because they were populated by a multitude of rational and competitive players, would always set prices that reflected all available information in the most accurate possible way. Because the market price would always reflect more perfect knowledge than was available to any one individual, no investor could “beat the market”—still less could a regulator ever hope to improve on market signals by substituting his own judgement. But if prices perfectly reflected all information, why did these prices constantly fluctuate and what did such movements mean? EMH cut this Gordian knot with a simple assumption: market movements are meaningless random fluctuations, equivalent to tossing a coin or a drunken sailor’s “random walk.”
This anarchic-sounding view was actually very reassuring. If market movements were really like coin-tosses, they might be totally irregular in the short term, but very predictable over longer periods, like the takings of a casino. Specifically, the coin-tossing and random walk analogies could be shown to imply what statisticians call a “normal” or Gaussian probability distribution. And the mathematics of Gaussian distributions (plus what is called the “law of large numbers”) reveals that catastrophic disturbances are vanishingly unlikely to occur. For example, if the daily fluctuations on Wall Street follow a normal distribution, it is possible to “prove” that the odds of a one-day movement greater than 25 per cent are about one in three trillion. The fact that at least four statistically “impossible” financial events occurred in just 20 years—in stock markets in 1987, bonds in 1994, currencies in 1998 and credit markets in 2008—would, by normal standards, have meant the end of EMH. But as in the case of rational expectations, the facts were rejected while the theory continued to reign supreme, albeit with some recalibration.
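The arithmetic behind this kind of "one in three trillion" claim is easy to reproduce. This is my own sketch (the daily-volatility figure is assumed purely for illustration); under a normal distribution the chance of a move beyond z standard deviations collapses hyper-fast:

```python
import math

# Sketch: two-sided Gaussian tail probability, P(|X| > z sigma).
def tail_prob(z):
    """Probability of a standard normal landing more than z sigmas out."""
    return math.erfc(z / math.sqrt(2))

for z in (2, 5, 7, 10):
    print(z, tail_prob(z))

# If daily volatility were, say, 1.5% (an assumed figure), a 25% one-day
# move would be roughly a 17-sigma event -- "impossible" under Gaussian
# odds, yet such moves have happened repeatedly in recent decades.
```

A 5-sigma day comes out at well under one in a million, 7 sigmas already at around one in a trillion; fat-tailed real-world returns simply do not behave like this.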
Friday, March 6, 2009
That’s the claim in a brief, new and quite well-written paper called “The financial crisis and the systemic failure of academic economics”. The paper is written by a bunch of authors from the US, Germany, France and Denmark, and claims that the profession was blind to the build-up of the financial crisis and underestimated its dimensions when it started to unfold. It argues that this failure can be traced “to the profession’s insistence on constructing models that, by design, disregard the key elements driving outcomes in real-world markets.”
The authors take as a starting point that economists have a social function, an important part of which is to explain social phenomena such as “unemployment, boom and bust cycles, and financial crisis”. Because of this, economists have an ethical responsibility to communicate when and where their models are applicable, and even to speak out if there is widespread abuse of them by others. This, the authors claim, is a responsibility economists have failed to meet.
They argue that the financial models used for pricing derivatives and other instruments had to be based on short data series and shaky theoretical assumptions; were used so widely, and by market players so dominant, that the individualistic “this is how you as a small and isolated market agent should value these assets” perspective became invalid; and provided users with an illusion of control through their seemingly precise and quantitative relationships.
As for modern macro models, the authors argue that these are deeply flawed for a number of reasons:
- Expectations are rational, which is taken to mean model consistent. Agents in the model are taken to understand how the model they exist in works. This means that expectation mechanisms are only validated internally (by comparing the assumptions to the other assumptions in the model) rather than externally (by comparing the assumptions with data or what is known about expectations and judgments under uncertainty from psychology or other disciplines). “A behavioral interpretation of rational expectations would imply that individuals and the economist have a complete understanding of the economic mechanisms governing the world.”
- Representative agents. Each function (worker, consumer, investor etc.) in the economy is represented by one individual. This disregards the aggregation problem (how individual actions aimed at X may produce aggregate outcomes different from X – as when everyone seated in an auditorium stands up to see better, each assuming the others will remain seated), and it reduces macro phenomena to micro phenomena. There are no differences in expectations or preferences between individuals, and there are no interaction mechanisms that allow emergent properties, where unintended and unforeseen consequences follow from the different individuals and the way they interact.
- Empirically, the models are calibrated – and parameter estimates of discount functions etc. are taken from micro studies and placed in the utility function of the representative agent – again skipping the entire aggregation and interaction problem.
The authors argue that we should develop flexible, data-driven models for the macroeconomic phenomena we are interested in – and use these as benchmarks against which to test proposed theoretical models.
Finally, they also argue that theoretical results without established empirical applicability are used to support questionable policy claims. Walrasian general equilibrium theory and the finding of Arrow-Debreu that all uncertainty can be eliminated given sufficient contingent claims in the market are seen as underlying
the belief shared by many economists that the introduction of new classes of derivatives can only be welfare increasing […]. It is worth emphasizing that this view is not an empirically grounded belief but an opinion derived from a benchmark model that is much too abstract to be confronted with data.
Again, as in so much criticism against economics, the “as-if” argument is not tackled directly. This will make it easy to avoid the criticism without dealing with it by saying that the theories in question are not meant to explain anything they can be shown not to explain (even if that should be claimed or implied in the theoretical works in question), that they were abused by stupid and greedy people, that the cutting edge of research (as opposed to what everyone learns and takes away from the subject) has already done some even more sophisticated and complicated shit that is really good and solves all our problems, that interaction is fully handled now because somebody put two consumers rather than one into a recent theory, etc. etc.
And so it goes.
Monday, March 2, 2009
When your theory fails to explain what you set out to explain there are two strategies often followed: One seems to be driven by an interest in the real world phenomena, which makes people open to altering even basic assumptions if these are important causal factors in the theory but lack empirical support. The other seems to be driven by an interest in theoretical “purity”, which makes people open to creating more and more absurd theories if that is what they need to retain and defend the core assumptions that define the "discipline” in their own eyes.
This is my take on the issue Krugman discusses in today’s column in the NY Times. He writes about how modern macroeconomics has fallen into an ivory tower decadence that makes our profession fail in its role as informed and thoughtful policy advisers. Writing on how business cycles are more persistent than Lucas and the Real Business Cycle people would expect, he claims that the profession divided:
One group went down the “new Keynesian” route, arguing that something such as small costs of changing prices must explain the rigidity we actually seem to see. This group isn’t averse to putting a lot of rationality into its models, but it’s willing to accept aspects of the world that seem clear in the data, even if it can’t (yet?) be fully explained in terms of deep foundations.
The other group decided that since they couldn’t come up with a rigorous microfoundation for price stickiness, there must not be any price stickiness: recessions are the result of adverse technological shocks, not demand shocks.
And the latter group, the equilibrium macro side, was so convinced of the logical correctness of its position that schools dominated by that view stopped teaching demand-side economics.
The really nice thing about Krugman’s column is that he brings up the important point that this matters because we are dealing with real world problems.
And the sad thing is that all of this matters. Our ability as a nation to respond to the current economic crisis is being seriously hampered by the gratuitous ignorance of many of our economists.
Sunday, March 1, 2009
Thanks to Hans Olav Melberg for this tip:
Nassim “Black Swan” Taleb and Pablo Triana (a derivatives consultant) wrote an opinion piece in the Financial Times last year where they asserted that the risk management techniques taught by financial economists were to blame for hiding the true risks in the economy and allowing the economy to swell up to the bursting point. The opinion piece makes two interesting claims:
1. Many members of the economics profession saw the flaws in these methods but allowed the financial economists to keep their prestige (e.g. the “Nobel” prize in economics) and spread their financial models to students. Those economists are bystanders who failed in their moral duty to stop this before it created our current woes.
2. We will not convince financial economists of their errors through logical persuasion and data – we need to use shame/social pressure
So how can we displace a fraud? Not by preaching nor by rational argument (believe us, we tried). Not by evidence. Risk methods that failed dramatically in the real world continue to be taught to students in business schools, where professors never lose tenure for the misapplications of those methods. As we are writing these lines, close to 100,000 MBAs are still learning portfolio theory – it is uniformly on the programme for next semester. An airline company would ground the aircraft and investigate after the crash – universities would put more aircraft in the skies, crash after crash. The fraud can be displaced only by shaming people, by boycotting the orthodox financial economics establishment and the institutions that allowed this to happen.
Wednesday, February 18, 2009
When I see criticisms of economic theories they often seem to go on and on, use lots of “ism” words (positivism, pluralism, autism and worse), and be very vague, abstract and wordy. Which is why I have the following challenge: Take your criticism and formulate the main point in one sentence, free from overly abstract philosophical/methodological jargon.
Here’s my attempt:
- Against some types of welfare theory: If you have no empirical evidence concerning what people actually care about and what choice alternatives they actually face, then you do not have sufficient information to establish that they are welfare maximizing.
- Against some types of “explanatory theories”: If you claim to explain why something is the case, then all sorts of empirical information is relevant in assessing your driving assumptions and proposed mechanism – not just market data.
I don’t feel these two claims should be controversial. They seem rather obvious to me – yet these two “principles” (if accepted) are sufficient (I believe) to kill off many of the weirder (but accepted) theories in economics. My Ph.D. work on Rational Addiction theory was (in retrospect) basically an attempt to make these two claims and show why they were sufficient to discard many of the claims made in top 5 journals concerning the validity and implications of rational addiction theories:
- Welfare: As long as you merely observe market choices and develop a “choice model” to accurately reproduce/fit patterns in the market data (whether “stylized facts” or econometrically through structural models) – this does not prove/establish/support the claim that the consumers in question are maximizing their long term welfare. All you’ve proved is that they are acting “as if” they are maximizing some utility function (which you have not shown to be related to their actual welfare) given some choice set (that you have not shown to be the one they are facing). No matter how insanely and self-destructively you act – it will be possible to develop some model that rationalizes this as optimal behavior for some set of preferences and constraints.
- Explanation: Merely observing economic data and developing a “model” to accurately reproduce/fit patterns in this, is insufficient to establish that your proposed causal mechanism is actually at work in the real world. If your theory, e.g., asserts that people have information that does not exist, are solving problems that we know they are not facing, or influencing the world through relationships that we have no reason to believe in – then these are good reasons to discard your proposed mechanism as an explanation.
Is there some non-obvious flaw in my claims that rescues the weirder economic theories? I’m open to that possibility - please let me know.
Also interested in other attempts to formulate one-sentence summaries of criticisms against economics.
Monday, February 16, 2009
Have you taken into consideration how your opponent has taken into consideration that you have taken into consideration how they… damn… where was I?
Interesting blogpost on “cognitive hierarchy theory”, which basically seems to involve the empirical examination of how many such steps people use in their actual reasoning when making decisions.
Big surprise: They don’t take it to infinity.
"The cognitive hierarchy theory finds that people only do a few steps of this kind of iterated thinking," [Caltech’s Colin Camerer, Professor of Behavioral Economics] explains. "Usually, it's just one step: I act as if others are unpredictable. But sometimes it's two steps: I act as if others think *I* am unpredictable. You can think of the number of steps a person takes as their strategic IQ. A higher strategic IQ means you are outthinking a lot of other people."
Most of us have a pretty low strategic IQ, but that's to be expected, Camerer notes. To reach a truly high strategic IQ requires either special experience with a particular type of game (such as poker), training, or, in rare cases, special gifts.
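The standard laboratory illustration of this is the "guess 2/3 of the average" game. Here is a minimal level-k sketch of my own (the conventional level-0 guess of 50 and the 2/3 factor are the usual textbook assumptions, not from the blogpost):

```python
# Sketch of level-k ("cognitive hierarchy") reasoning in the classic
# "guess 2/3 of the average" game.  A level-0 player guesses the midpoint,
# 50; a level-k player best-responds to a population of level-(k-1)
# players.  Full rationality iterates to infinity and lands on 0 -- but
# real players mostly stop after one or two steps.
def level_k_guess(k, level0=50.0, factor=2/3):
    g = level0
    for _ in range(k):
        g *= factor  # best response to everyone guessing g: 2/3 of g
    return g

print(level_k_guess(0))   # level-0: no strategic thinking
print(level_k_guess(1))   # one step of iterated thinking
print(level_k_guess(2))   # two steps -- roughly where lab subjects land
print(level_k_guess(50))  # essentially the Nash prediction, rarely observed
```

The gap between the level-1/level-2 guesses and the Nash prediction of zero is exactly the "strategic IQ" gap Camerer describes.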
Perhaps more interesting to us economists are the implications. The prof says: "We think it means you can fool some of the people some of the time."
That would seem to have “interesting implications” for welfare analysis…. ;-)
On a related note – when I encounter comedy based on this theme I always wonder whether the scriptwriter has studied economics. From the mediocre Ben Stiller comedy “Mystery Men”:
And the sitcom Friends had this as a central gag to the episode where the friends find out that Monica and Chandler have an affair. You can even get t-shirts with the line “They don't know that we know they know we know.”
Thursday, February 5, 2009
Wednesday, February 4, 2009
Take simplification: In the first place, you cannot attack a model for being unrealistic in any particular aspect, given its need to be simple. I buy that.
But what does one mean by keeping it simple? “Stripping it of everything that is irrelevant for the question at hand,” one might answer. Aha. But when undressing reality, how do I know when to stop? And how do I make sure that I can leave something out; that it doesn’t interact in any important way with what I mean to describe? (To stay in the dressing room: taking off the belt might mean the trousers end up on the floor.)
But it is not only that. If you point out that the model not only simplifies but also rests on wrong assumptions, you are told that they are not wrong but heroic: needed to get at the very essence of what you are interested in. But, hello, are there any rules for those Herculean assumptions? I haven't seen them. Nobody seems to care about justifying assumptions. Rather, anything goes, it seems, as long as rationality is in place, assuring the mathematics is on board. I see the point of using maths, but I want to make sure that what I end up with is not to economic reality what the board game “Monopoly” is to real estate.
Take the pet I am supposed to keep watch over these days, the Cournot model: it describes imperfect competition among a few sellers that have some given capacity and decide on the quantities to put on the market. There is an itch telling me that reality and Cournot do not always get along well. But how do I scratch it?
Do all other firm decisions not interact or interfere with the choice of quantity? Who am I to judge? Does it matter? And is it a mere simplification? Nope. One example: The model requires all the sellers to be perfectly informed, and I don't think I need to make a case here arguing that production costs are business secrets. So it's not only simplifying, it is making assumptions that one knows are wrong. How to argue, then, that it still captures the essence of imperfect competition?
I don’t know, but it is argued. But if it is okay and necessary to have wrong/heroic assumptions, where does that leave the one wanting to criticize the model?
Imperfect competition means higher prices and less supply compared to the perfect competition case. The Cournot model says this happens through each seller restricting quantity while taking the other suppliers' quantities into account, giving – at least theoretically (there are tons of information requirements on the researcher too) – an idea of how much the magnitudes change. Is the Cournot model just a straw man for "higher prices"? Does it not make any claim to be a description of how those prices are achieved?
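For concreteness, here is the textbook two-firm Cournot calculation as a sketch of my own (linear demand and the parameter values are assumed for illustration). It shows the mechanism the model claims: each firm best-responds to the other's quantity, and the resulting price lands between perfect competition and monopoly:

```python
# Sketch (assumed parameters): two-firm Cournot with linear demand
# P = a - b * (q1 + q2) and constant marginal cost c.  Each firm
# best-responds to its rival's quantity; the fixed point of the
# best-response map is the Cournot equilibrium.
a, b, c = 100.0, 1.0, 20.0   # demand intercept, slope, marginal cost

def best_response(q_other):
    """Profit-maximising quantity given the rival's quantity."""
    return (a - c - b * q_other) / (2 * b)

# Iterate best responses until they settle at the fixed point.
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

cournot_price = a - b * (q1 + q2)
competitive_price = c              # perfect competition: P = MC
monopoly_price = (a + c) / 2       # single seller's optimum

print(q1, cournot_price)
assert competitive_price < cournot_price < monopoly_price
```

The analytic fixed point is q_i = (a - c) / (3b), so the iteration is doing exactly what the theory asserts: quantity restriction, given the rival's quantity, drives the price above marginal cost.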
My point is that 1) even with an MPhil in Economics (got that on paper) models are still mysterious monsters to me and 2) they don't tell me what type of monster they are, making it hard for me to decide whether to take the sword or the garlic to attack them.
Tuesday, February 3, 2009
Andrew Gelman is a professor of statistics and political science at Columbia University. In a recent post on the blog “Statistical Modeling, Causal Inference, and Social Science” he comments in passing on rational addiction theory – one of the examples used in a discussion on econometrics that he gives his perspective on:
the study of smoking seems pretty wacky to me. First there is a discussion of "rational addiction." Huh?? Then Ziliak and McCloskey say "cigarette smoking may be addictive." Umm, maybe. I guess the jury is still out on that one . . . .
OK, regarding "rational addiction," I'm sure some economists will bite my head off for mocking the concept, so let me just say that presumably different people are addicted in different ways. Some people are definitely addicted in the real sense that they want to quit but they can't, perhaps others are addicted rationally (whatever that means). I could imagine fitting some sort of mixture model or varying-parameter model. I could imagine some sort of rational addiction model as a null hypothesis or straw man. I can't imagine it as a serious model of smoking behavior.
In response to a comment from an economist who explains briefly that Rational Addiction comes from a seminal paper and has proved influential in health economics, he responds:
My reply to this: Yeah, I figured as much. It's probably a great theory. But, ya know what? If Becker and Murphy want to get credit for being bold, transgressive, counterintuitive, etc etc., the flip side is that they have to expect outsiders like me to think their theory is pretty silly. As I noted in my previous entry, there's certainly rationality within the context of addiction (e.g., wanting to get a good price on cigarettes), but "rational addiction" seems to miss the point. Hey, I'm sure I'm missing the key issue here, but, again, it's my privilege as a "civilian" to take what seems a more commonsensical position here and leave the counterintuitive pyrotechnics to the professional economists.
Wednesday, January 28, 2009
Criticizing economic theories is dangerous. One false step and you are lumped in with one or another of the anti-economics camps. When that happens – when the reader can “label” you – then the rest of your arguments get interpreted through the perception that you are “one of those anti-math/leftist/ignorant/whatever” guys who don’t understand economics and its methodology, and who can be brushed aside and ignored. For instance, you are…
- Anti-mathematics and anti-formalism – As Krugman wrote in “Two Cheers for Formalism” (Economic Journal, 1998):
Attacks on the excessive formalism of economics – on its reliance on abstract models, on its use of too much mathematics – have been a constant for the past 150 years. […] much of the criticism of formalism in economics is an attack on a straw man […] when outsiders criticise formalism in economics, their real complaint is often not about method but about content – in particular, they dislike “formalistic” arguments not because they are formalistic, but because they refute their pet doctrines.
- Biased and ideological. Put simply: You don’t like the way reality works, and reject economics for the same reason that Intelligent Design people reject Darwin: It’s not how you want the world to be. You hate free trade, or want to sharply increase minimum wage rates, or want to ban drugs without the higher prices that finance crime. Since (some) economic results contradict your world-view, you reject economics.
- Afraid of simplification. You don’t understand that all theories need to simplify, and you believe that any deviation from the fully detailed world around you is sufficient to throw out a theory. Whereas we economists understand that it is by abstracting away from irrelevant details that we can uncover the main determining factors and the characteristics of the underlying mechanisms at work.
- Afraid of human nature. You don’t want people to be selfish, you want to live in a Pollyanna world where everyone cares about each other and people are warm, empathetic beings – whereas we economists are able to face the truth that we are (also) influenced by cold, hard cash, and the desire for selfish gratification of our own desires.