Tuesday, August 25, 2009

Serious Person Syndrome

Paul Krugman:
I do have one qualm, though, which isn’t really about Bernanke, but rather about the broader symbolism of the reappointment — namely, it unfortunately seems to be a reaffirmation of Serious Person Syndrome, aka it’s better to have been conventionally wrong than unconventionally right.


This sounds right and important. And it need not apply only to reappointments, but also to theories and whether they are accepted or not. Absurd statements in conventional packaging: accepted. True statements in unconventional packaging: rejected. But this is not just a complaint, more a call to investigate and change: investigate whether there really is such a bias, and if so, try to change the conventions so that they function better as a filter between good and bad theories.

Is it possible to do this? And how? First, before believing in the Serious Person Syndrome I would like something more than anecdotes to base it on. For instance, an experiment in which individuals play a game where, every round, some people win or lose based partly on ability and partly on luck. Moreover, there would be some "conventional" strategy and some unconventional strategies, perhaps based on focal points, or on norms and conventions established by the instructors before the game. One could then examine whether the players assigned less blame to a player using an unsuccessful but conventional strategy than to one using an unsuccessful unconventional strategy.
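Before running such an experiment on human subjects, the payoff structure could be piloted in simulation. A minimal sketch (all numbers below are arbitrary choices of mine, not taken from any actual study):

```python
import random

def outcome(skill: float, noise: float = 1.0) -> float:
    """One round of the game: result = ability + luck; the strategy label
    ("conventional" vs "unconventional") does not affect the payoff."""
    return skill + random.gauss(0.0, noise)

random.seed(42)
N = 50_000
conventional = [outcome(0.0) for _ in range(N)]    # players on the conventional strategy
unconventional = [outcome(0.0) for _ in range(N)]  # players on an unconventional one

loss_rate_conv = sum(o < 0 for o in conventional) / N
loss_rate_unconv = sum(o < 0 for o in unconventional) / N

# By construction both strategies are equally (un)successful. A rational
# observer should therefore blame both kinds of losers equally; any gap in
# the blame subjects actually assign would be evidence of the hypothesized bias.
print(abs(loss_rate_conv - loss_rate_unconv) < 0.02)
```

The experimental measurement would then be whether subjects' blame ratings differ between the two groups despite the identical outcome distributions.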

Vague words, you say? Where is the empirical evidence? Well, I recall a paper in which the author believed he had found a sub-optimal conventional strategy in American football. Its persistence was explained by coaches trying to avoid blame for losing a game: there was less blame involved in choosing the sub-optimal, but conventional, strategy.

Maybe the reader can give more examples and create better experiments?

Thursday, August 20, 2009

Macroeconomics as convoluted fiction

One successful strategy in microeconomics is to search for something that seems stupid, ignorant, misguided, etc., and dream up some implausible, ridiculous story that explains it as actually being sophisticated, subtle optimization. The explanation is linked very loosely to the real world by tying selected assumptions and effects to anecdotes or stylized facts, and if someone says it is all nonsense you can retreat to the “as-if” defence. Since you can “explain” it through your convoluted Alice-in-Wonderland model, you can then claim that the phenomenon in question is “no longer a challenge to standard rational choice theory.”

Personally, I became aware of this through my PhD work on rational addiction theory. A 1988 note from University of California, Berkeley Professor Jeffrey A. Frankel that was recently linked to in Krugman’s blog points to the same thing in macroeconomics.

Frankel comments on how everything can be made consistent with dynamic stochastic general equilibrium models based on individual optimization (the “equilibrium view”). When economists thought purchasing power parity held, this was interpreted as evidence in favor of the equilibrium view. When they thought it didn’t… well, this too was evidence in favor of the equilibrium view. So when a dataset doesn’t allow you to reject the null hypothesis that the real exchange rate follows a random walk, this:

[…] is […] interpreted as evidence in favor of the equilibrium theory, even though the latter has no more testable implications for the real exchange rate than does the proposition that 9 is a prime number.
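The "fail to reject a random walk" step is easy to reproduce: simulate a pure random walk and regress the series on its own lag. The estimated coefficient hugs 1, which, as Frankel stresses, says precisely nothing about what drives the variable. A sketch (step size and sample length are arbitrary):

```python
import random

random.seed(1)
# Simulate a pure random walk: each step is an independent Gaussian shock.
x = [0.0]
for _ in range(5_000):
    x.append(x[-1] + random.gauss(0.0, 1.0))

# OLS slope of x_t on x_{t-1}: for a random walk it sits very close to 1,
# i.e. the past level tells you nothing about the direction of the next move.
y, lag = x[1:], x[:-1]
my, ml = sum(y) / len(y), sum(lag) / len(lag)
slope = sum((a - ml) * (b - my) for a, b in zip(lag, y)) \
        / sum((a - ml) ** 2 for a in lag)
print(round(slope, 2))
```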

He notes evidence that would seem to favor sticky-price theory, and shows how “equilibrium” theorists work out convoluted, bizarre models to “explain” this within their framework.

Such explanations are clever, and make for good journal articles that are popular among academic economists. But that doesn't make them true.

Speaking of "agents," spy novels are a good analogy for stories that are clever and make entertaining reading, but have little to do with the truth. Datum: a few minutes ago, I got up from my chair next to Alan Stockman on the stage, and walked over to take my place here at the podium. Hypothesis 1: I am a spy for a foreign power, Alan is a CIA counterspy who was about to assassinate me, and so I got up to move out of range. This hypothesis is "consistent with the facts" in the sense that, if true, it would explain them; but it is convoluted and not very plausible. Hypothesis 2: John Le Carre was in British intelligence before he began his second career as a novelist. This hypothesis is interesting to speculate about. I have no idea whether it is true or not. It is also "consistent with the facts" in the weak sense that it does not contradict the datum. But it seems no more relevant than the statement that 9 is a prime number, the proposition that agents dynamically optimize, or the hundreds of other hypotheses that I "fail to reject" every morning in the shower. Hypothesis 3: I came up to the podium for the simple reason that AEI invited me here to comment on Alan's paper. While not as clever as the other propositions, this hypothesis is simple, plausible, and consistent with the facts in the strong sense that it would explain them while most other hypotheses would not. (I will leave it to you to decide which hypothesis is the correct one.)

He also has some fun comments on the flip to a state where the goal of macroeconomics became to find nothing (no explanation, no relevant causal variables etc.):

The word "nothing" will play a key role in my comments. ["We know nothing, therefore we should do nothing."] The word does not often appear explicitly in the writings of equilibrium theorists. The popular phrase in the econometric writings is "random walk." [The usual conclusion is stated as "I have found that such-and-such a variable follows a random walk." Or, at best, "I cannot reject the hypothesis that this variable follows a random walk." You seldom hear someone say, "After studying this variable for 6 months, I have absolutely nothing to say that would help to predict its movements." But the statements mean the same thing.] In Stockman's paper, the phrase is "in the current state of knowledge": "In the current state of knowledge...exchange rates and the current account should play little role...[in the conduct of monetary policy]"

Tuesday, August 11, 2009

It’s OK to laugh at Robert Lucas, everyone :-)

One cannot find good, under-forty economists who identify themselves or their work as ‘Keynesian’. Indeed, people even take offense if referred to as ‘Keynesians’. At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another.

(Robert Lucas)

I loved finding this quote by Lucas. It means I can finally enjoy the Solow quote below without any trace of bad conscience:

Suppose someone […] announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the Battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon Bonaparte.

(Bob Solow, talking about Robert Lucas’s rational expectations work in “Conversations with Economists” by Klamer and Colander)

Saturday, August 1, 2009

Economists understand everything – after the fact

The standard defence of absurd theories is usually that they predict. Predictive success is the only relevant criterion, assumptions be damned. But what should we say when we fail to predict, and fail spectacularly at that, as in the recent financial crisis? The answer: flip the defence around. We may not predict, but at least we have the understanding and insight to explain what happened after the fact. As Chris Dillow discusses here.

He starts by pointing out that it is possible to find theoretical concepts, small groups or individual researchers that have worked on things in the past that now seem relevant to understanding the financial crisis. He then relates this to Jon Elster’s troubling view of mechanisms as a form of completely non-predictive understanding.

To put it more crudely than this guy himself does: We have loads and loads of story elements in economic theory that can be combined to join any set of assumptions to any outcome. All we need is to know what outcome actually occurred, and we’ll be able to serve up a convincing and sophisticated “explanation” in no time.

[…]we have […] lots of mechanisms, capable of explaining why things happen and the links between them. What we don’t have are laws which generate predictions. In his book, Nuts and Bolts for the Social Sciences, Jon Elster stressed this distinction. The social sciences, he said:

“Can isolate tendencies, propensities and mechanisms and show that they have implications for behaviour that are often surprising and counter-intuitive. What they are less able to do is to state necessary and sufficient conditions under which the various mechanisms are switched on.”

This is precisely the problem economists had in 2007. We knew that there were mechanisms capable of generating disaster. What we didn’t know is whether these were switched on. The upshot is that, although we didn’t predict the crisis, we can more or less explain it after the fact. As Elster wrote:

“Sometimes we can explain without being able to predict, and sometimes predict without being able to explain. True, in many cases one and the same theory will enable us to do both, but I believe that in the social sciences this is the exception rather than rule.”

The interesting question is: will it remain the exception? My hunch is that it will; economists will never be able to produce laws which yield systematically successful forecasts.

 
What’s more, I am utterly untroubled by this. The desire for such laws is as barmy as the medieval search for the philosopher’s stone. If you need to foresee the future, you are doing something badly wrong.

I especially dig the ending. Friedman, in the more extreme statements of his “as-if” article on positive economics, made absurd assumptions a virtue; now this guy is making a lack of predictive success a virtue.

Turning our assumptions on ourselves as economists, we are (of course) always doing the optimal thing. A fact that gives me immense peace of mind: Whatever I’m doing, I can rest assured that I could have done no better.

Thursday, July 23, 2009

Stupid defence of economics

The Economist writes about a criticism of modern New Keynesian macroeconomics from Mark Thoma that

This is a very sensible criticism that is subject to a fairly predictable—and likewise sensible—response: if we throw out the models that incorporate unrealistic assumptions, what do we have left? Correlations and rhetoric, which can only get you so far.

This is a stupid “something is better than nothing” defence that I’ve come across several times before. It’s based on the assumption that any map is better than no map. Yeah, right.

“Hey, Honey, I don’t have a map of this crocodile-infested swamp – but don’t worry, I have a map of New York so we’ll be out of here in no time…”

Thursday, April 30, 2009

Great minds think alike…

…in that they think the other guy is an idiot…

The mood now is uglier. On the left, Krugman says: "This is really fairly shameful, that we should be wasting precious months as a profession retracing debates that were settled 70 years ago." On the right, John H. Cochrane of the University of Chicago dismisses those who advocate Keynesian stimulus, saying: "Professional economists, the guys I hang out with, are not reverting to ancient Keynesianism any more than physicists are going back to Aristotle when they can't understand how fast the universe is expanding."

From BusinessWeek – an otherwise not very impressive essay that all the same has a couple of minor bits of interest.

Specifically – what one guy thinks is wrong with macroeconomics

An essay I’ve meant to read for some time, from the Financial Times blog, is called “The unfortunate uselessness of most ’state of the art’ academic monetary economics”. It was well written and entertaining. I’m tempted to quote amusing passages such as

Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH [i.e. Efficient Market Hypothesis] in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education.

However, his serious points deserve mention, although I’m not qualified to state the extent to which he is on target regarding the state of the macro literature. Here are some snippets, but the whole thing is recommended:

On complete markets:

The most influential New Classical and New Keynesian theorists all worked in what economists call a ‘complete markets paradigm’. In a world where there are markets for contingent claims trading that span all possible states of nature (all possible contingencies and outcomes), and in which intertemporal budget constraints are always satisfied by assumption, default, bankruptcy and insolvency are impossible.[…]

Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered.  They did not allow such questions to be asked.

[…] Goods and services that are potentially tradable are indexed by time, place and state of nature or state of the world.  Time is a continuous variable, meaning that for complete markets along the time dimension alone, there would have to be rather more markets for future delivery (infinitely many in any time interval, no matter how small) than you can shake a stick at.  Location likewise is a continuous variable in a 3-dimensional space.  Again rather too many markets.  Add uncertainty (states of nature or states of the world), never mind private or asymmetric information, and ‘too many potential markets’, if I may ruin the wonderful quote from Amadeus attributed to Emperor Joseph II, comes to mind.  If any market takes a finite amount of resources (however small) to function, complete markets would exhaust the resources of the universe.
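Buiter's counting argument can be made concrete with a toy discretization; all the numbers below are invented for illustration:

```python
# Toy illustration of the complete-markets counting argument: contingent
# claims are indexed by good x location x date-event, and date-events
# multiply, since with S possible states each period, a horizon of T
# periods has S**T distinct histories to write contracts on.
goods, locations, states, periods = 100, 50, 10, 20
date_events = states ** periods        # 10**20 distinct histories
markets = goods * locations * date_events
print(markets)  # 5 * 10**23 markets for this coarse grid
```

And time, location and states of the world are really continuous, so this already-absurd count understates the true requirement.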

On efficient markets

In financial markets, and in asset markets, real and financial, in general, today’s asset price depends on the view market participants take of the likely future behaviour of asset prices.  If today’s asset price depends on today’s anticipation of tomorrow’s price, and tomorrow’s price likewise depends on tomorrow’s expectation of the price the day after tomorrow, etc. ad nauseam, it is clear that today’s asset price depends in part on today’s anticipation of asset prices arbitrarily far into the future.  Since there is no obvious finite terminal date for the universe (few macroeconomists study cosmology in their spare time), most economic models with rational asset pricing imply that today’s price depends in part on today’s anticipation of the asset price in the infinitely remote future.

What can we say about the terminal behaviour of asset price expectations?  The tools and techniques of dynamic mathematical optimisation imply that, when  a mathematical programmer computes an optimal programme for some constrained dynamic optimisation problem he is trying to solve, it is a requirement of optimality that the influence of the infinitely distant future on the programmer’s criterion function today be zero.

And then a small miracle happens.  An optimality criterion from a mathematical dynamic optimisation approach is transplanted, lock, stock and barrel to the behaviour of long-term price expectations in a decentralised market economy.  In the mathematical programming exercise it is clear where the terminal boundary condition in question comes from.  The terminal boundary condition that the influence of the infinitely distant future on asset prices today vanishes, is a ‘transversality condition’ that is part of the necessary and sufficient conditions for an optimum.  But in a decentralised market economy there is no mathematical programmer imposing the terminal boundary conditions to make sure everything will be all right.
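Buiter's argument can be written out in standard textbook notation. A sketch, assuming a constant discount rate $r$ and dividend $d_t$ (the symbols are my choice, not his):

```latex
% No-arbitrage pricing: today's price is the discounted expectation of
% tomorrow's price plus payout.
p_t = \frac{1}{1+r}\,\mathbb{E}_t\!\left[p_{t+1} + d_{t+1}\right]

% Iterating this forward T periods gives the dependence on the remote future:
p_t = \sum_{j=1}^{T} \frac{\mathbb{E}_t[d_{t+j}]}{(1+r)^j}
      + \frac{\mathbb{E}_t[p_{t+T}]}{(1+r)^T}

% The transversality condition transplanted from the optimization literature:
\lim_{T\to\infty} \frac{\mathbb{E}_t[p_{t+T}]}{(1+r)^T} = 0
```

Imposing the limit selects the "fundamental value" solution and kills every bubble path (any $b_t$ satisfying $b_t = \mathbb{E}_t[b_{t+1}]/(1+r)$ solves the first equation just as well). Buiter's point is that in a decentralised market economy nothing enforces that final condition.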

On linearization in modern macro

If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave.  So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved.  This was achieved by completely stripping the model of its non-linearities and by achieving the transubstantiation of complex convolutions of random variables and non-linear mappings into well-behaved additive stochastic disturbances.

Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc.  will appreciate what is lost by this castration of the macroeconomic models.  Threshold effects, critical mass, tipping points, non-linear accelerators - they are all out of the window.  Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive.

Technically, the non-linear stochastic dynamic models were linearised (often log-linearised) at a deterministic (non-stochastic) steady state.  The analysis was further restricted by only considering forms of randomness that would become trivially small in the neighbourhood of the deterministic steady state.  Linear models with additive random shocks we can handle - almost!

Even this was not quite enough to get going, however.  As pointed out earlier, models with forward-looking (rational) expectations of asset prices will be driven not just by conventional, engineering-type dynamic processes where the past drives the present and the future, but also in part by past and present anticipations of the future.  When you linearize a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner.  There is no ‘bounded instability’ in such models.  The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearized model, the explosive solution trajectories.  What they were left with was something that, following an exogenous  random disturbance, would return to the deterministic steady state pretty smartly.  No L-shaped recessions.  No processes of cumulative causation and bounded but persistent decline or expansion.  Just nice V-shaped recessions.
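The "no bounded instability" point is a property of any linear difference equation: depending on the root, a one-off shock either decays smoothly back to the steady state or explodes without bound. A minimal sketch (parameters are arbitrary):

```python
def simulate(rho: float, shock_at: int = 10, T: int = 200) -> list[float]:
    """Linear model x_{t+1} = rho * x_t plus a one-off unit shock. The only
    possible fates are geometric decay back to the steady state (|rho| < 1)
    or unbounded explosion (|rho| > 1): no bounded-but-persistent wandering."""
    x = [0.0]
    for t in range(T):
        x.append(rho * x[-1] + (1.0 if t == shock_at else 0.0))
    return x

stable = simulate(0.9)     # the V-shaped recession: the shock dies out smartly
explosive = simulate(1.05) # the trajectory the DSGE crowd rules out by assumption

print(abs(stable[-1]) < 1e-4, explosive[-1] > 1000)
```

The L-shaped recessions and cumulative-causation dynamics Buiter misses would require non-linearities that the linearisation has thrown away.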

Tuesday, April 21, 2009

Does theory make you blind?

From a recent interview with Daniel Kahneman:

"In the last half year, the models simply didn't work. So the question arises: Why do people use models? I liken what is happening now to a system that forecasts the weather, and does so very well. People know when to take an umbrella when they leave the house, or when it will snow. Except what? The system can't predict hurricanes. Do we use the system anyway, or throw it out? It turns out they'll use it."

 

[…]

“The interesting psychological problem is why economists believe in their theory, but this is the problem with the theory, any theory. It leads to a certain blindness. It's difficult to see anything that deviates from it."


We only look for information that supports the theory and ignore the rest.

"Correct. That appears to be what happened with Greenspan: He had a theory under which the market operates, and that the market would correct itself."

Thursday, March 26, 2009

Economists to blame for crisis?

Keynes once famously wrote that “Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

A recent article in Prospect Magazine argues that much modern policy and many practices amongst investors were influenced by the two “false theories” of rational expectations and efficient markets, which

[…] are not only misleading but highly ideological, have become so dominant in academia (especially business schools), government and markets themselves. While neither theory was totally dominant in mainstream economics departments, both were found in every major textbook, and both were important parts of the “neo-Keynesian” orthodoxy, which was the end-result of the shake-out that followed Milton Friedman’s attempt to overthrow Keynes. The result is that these two theories have more power than even their adherents realise: yes, they underpin the thinking of the wilder fringes of the Chicago school, but also, more subtly, they underpin the analysis of sensible economists like Paul Samuelson.

[…]

As David Hendry, until recently head of the Oxford economics department, has noted: “Economists critical of the rational expectations based approach have had great difficulty even publishing such views, or maintaining research funding. For example, recent attempts to get ESRC funding for a project to test the flaws in rational expectations based models was rejected. I believe some of British policy failures have been due to the Bank accepting the implications [of REH models] and hence taking about a year too long to react to the credit crisis.”

[…]

Although there was never any empirical evidence for REH, the theory took academic economics by storm for two reasons. First, the assumptions of clearly-defined laws and identical expectations were easily translated into simple mathematical models—and this mathematical tractability soon came to be viewed as a more important academic objective than correspondence to reality or predictive power. Models based on rational expectations, insofar as they could be checked against reality, usually failed statistical tests. But this was not a deterrent to the economics profession. In other words, if the theory doesn’t fit the facts, ignore the facts. How could the world have allowed such crassly unscientific attitudes to dominate a serious academic discipline, especially one as important to society as economics?

The article also claims that this was desirable for ideological reasons:

That government activism was doomed to failure was exactly what politicians, central bankers and business leaders of the Thatcher and Reagan periods wanted to hear. Thus it quickly became established as the official doctrine of the political and economic establishments in America—and from this powerful position it was able to conquer the entire academic world.

And for efficient markets:

To make matters worse, rational expectations gradually merged with the related theory of “efficient” financial markets. This was gaining ground in the 1970s for similar reasons—an attractive combination of mathematical and ideological tractability. This was the efficient market hypothesis (EMH), developed by another group of Chicago-influenced academics, all of whom received Nobel prizes just as their theories came apart at the seams. EMH, like rational expectations, assumed that there was a well-defined model of economic behaviour and that rational investors would all follow it; but it added another step. In the strong version of the theory, financial markets, because they were populated by a multitude of rational and competitive players, would always set prices that reflected all available information in the most accurate possible way. Because the market price would always reflect more perfect knowledge than was available to any one individual, no investor could “beat the market”—still less could a regulator ever hope to improve on market signals by substituting his own judgement. But if prices perfectly reflected all information, why did these prices constantly fluctuate and what did such movements mean? EMH cut this Gordian knot with a simple assumption: market movements are meaningless random fluctuations, equivalent to tossing a coin or a drunken sailor’s “random walk.”
This anarchic-sounding view was actually very reassuring. If market movements were really like coin-tosses, they might be totally irregular in the short term, but very predictable over longer periods, like the takings of a casino. Specifically, the coin-tossing and random walk analogies could be shown to imply what statisticians call a “normal” or Gaussian probability distribution. And the mathematics of Gaussian distributions (plus what is called the “law of large numbers”) reveals that catastrophic disturbances are vanishingly unlikely to occur. For example, if the daily fluctuations on Wall Street follow a normal distribution, it is possible to “prove” that the odds of a one-day movement greater than 25 per cent are about one in three trillion. The fact that at least four statistically “impossible” financial events occurred in just 20 years—in stock markets in 1987, bonds in 1994, currencies in 1998 and credit markets in 2008—would by normal standards, have meant the end of EMH. But as in the case of rational expectations, the facts were rejected while the theory continued to reign supreme, albeit with some recalibration.
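The "one in three trillion" figure depends entirely on the daily volatility one plugs into the Gaussian; the article does not say what volatility it assumes, though a sigma of roughly 3.5% reproduces that order of magnitude. A sketch of the calculation (the sigmas are illustrative):

```python
from math import erfc, sqrt

def p_move_exceeds(threshold: float, sigma: float) -> float:
    """Two-sided Gaussian tail: P(|daily return| > threshold) when daily
    returns are N(0, sigma^2)."""
    return erfc(threshold / (sigma * sqrt(2.0)))

# Note how violently the "impossibility" of a 25% one-day move depends on
# the assumed daily volatility.
for sigma in (0.01, 0.02, 0.035):
    print(f"sigma={sigma:.3f}  P(|move| > 25%) = {p_move_exceeds(0.25, sigma):.3g}")
```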

Friday, March 6, 2009

Is there a systemic failure in academic economics?

That’s the claim in a brief, new and quite well-written paper called “The financial crisis and the systemic failure of academic economics”. The paper is written by a bunch of authors from the US, Germany, France and Denmark, and claims that the profession was blind to the build-up of the financial crisis and underestimated its dimensions when it started to unfold. It argues that this failure can be traced “to the profession’s insistence on constructing models that, by design, disregard the key elements driving outcomes in real-world markets.”

The authors take as a starting point that economists have a social function, an important part of which is to explain social phenomena such as “unemployment, boom and bust cycles, and financial crisis”. Because of this, economists have an ethical responsibility to communicate when and where their models are applicable, and even to speak out if there is widespread abuse of them by others. This, the authors claim, is a responsibility economists have failed to live up to.

They argue that the financial models used for pricing derivatives and other instruments had to be based on short data series and shaky theoretical assumptions; that they were used so widely, or by market players so dominant, that the individualistic “this is how you as a small and isolated market agent should value these assets” perspective was invalid; and that they provided a control illusion to users through their seemingly precise and quantitative relationships.
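The control-illusion point can be illustrated by fitting a Gaussian risk model to data that is actually fat-tailed: the model's precise-looking tail probabilities come out wrong by orders of magnitude. A toy sketch, with a Student-t (3 degrees of freedom) standing in for "real" returns and every parameter chosen by me:

```python
import random
from math import erfc, sqrt

random.seed(7)

def fat_tailed() -> float:
    """Student-t draw with 3 degrees of freedom: heavy tails, unlike a Gaussian.
    Built as Z / sqrt(chi-squared_3 / 3)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / sqrt(chi2 / 3)

returns = [fat_tailed() for _ in range(100_000)]

# Fit a Gaussian to the data, as a naive risk model would...
mu = sum(returns) / len(returns)
sigma = sqrt(sum((r - mu) ** 2 for r in returns) / len(returns))

# ...then compare the model's predicted frequency of >5-sigma losses with
# the frequency actually observed in the "market" data.
threshold = mu - 5 * sigma
predicted = 0.5 * erfc(5 / sqrt(2))  # Gaussian one-sided 5-sigma tail: ~3e-7
observed = sum(r < threshold for r in returns) / len(returns)
print(observed > 100 * predicted)    # the model is off by orders of magnitude
```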

As for modern macro models, the authors argue that these are deeply flawed for a number of reasons:

  • Expectations are rational, which is taken to mean model consistent. Agents in the model are taken to understand how the model they exist in works. This means that expectation mechanisms are only validated internally (by comparing the assumptions to the other assumptions in the model) rather than externally (by comparing the assumptions with data or what is known about expectations and judgments under uncertainty from psychology or other disciplines). “A behavioral interpretation of rational expectations would imply that individuals and the economist have a complete understanding of the economic mechanisms governing the world.”
  • Representative agents. Each function (worker, consumer, investor, etc.) in the economy is represented by a single individual. This disregards the aggregation problem (individual actions aimed at X may produce aggregate outcomes quite different from X, as when everyone seated in an auditorium stands up to see better, each assuming the others will remain seated), and it reduces macro phenomena to micro phenomena. There are no differences in expectations or preferences between individuals, and there are no interaction mechanisms that allow emergent properties, where unintended and unforeseen consequences follow from the different individuals and the way they interact.
  • Empirically, the models are calibrated – and parameter estimates of discount functions etc. are taken from micro studies and placed in the utility function of the representative agent – again skipping the entire aggregation and interaction problem.
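The auditorium example in the second bullet can be written down directly; a toy sketch (heights and the standing bonus are arbitrary units):

```python
def view(my_height: float, heights_in_front: list[float]) -> float:
    """Your view is how much you clear the tallest person in front of you."""
    return my_height - max(heights_in_front, default=0.0)

STAND_BONUS = 0.5      # standing adds this much height
seated = [1.0] * 10    # ten identical seated spectators

# Individually rational: holding everyone else fixed, standing up helps.
alone_gain = view(1.0 + STAND_BONUS, seated[:5]) - view(1.0, seated[:5])

# Aggregate outcome: everyone reasons the same way, everyone stands,
# and nobody's view improves -- the micro logic does not scale to macro.
standing = [h + STAND_BONUS for h in seated]
agg_gain = view(standing[5], standing[:5]) - view(seated[5], seated[:5])

print(alone_gain, agg_gain)  # 0.5 0.0
```

A representative-agent model sees only the first calculation; the aggregation problem lives in the second.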

The authors argue that we should develop flexible, data-driven models for the macroeconomic phenomena we are interested in – and use these as benchmarks against which to test proposed theoretical models.

Finally, they also argue that theoretical results without established empirical applicability are used to support questionable policy claims. Walrasian general equilibrium theory and the finding of Arrow-Debreu that all uncertainty can be eliminated given sufficient contingent claims in the market are seen as underlying

the belief shared by many economists that the introduction of new classes of derivatives can only be welfare increasing […]. It is worth emphasizing that this view is not an empirically grounded belief but an opinion derived from a benchmark model that is much too abstract to be confronted with data.

Again, as in so much criticism of economics, the “as-if” argument is not tackled directly. This will make it easy to dodge the criticism without dealing with it: by saying that the theories in question were never meant to explain anything they can be shown not to explain (even if that is claimed or implied in the theoretical works in question), that they were abused by stupid and greedy people, that the cutting edge of research (as opposed to what everyone learns and takes away from the subject) has already done some even more sophisticated and complicated shit that is really good and solves all our problems, that interaction is fully handled now because somebody put two consumers rather than one into a recent theory, etc. etc.

And so it goes.

Monday, March 2, 2009

The failure of modern economics? Choosing theory over data

When your theory fails to explain what you set out to explain, there are two strategies often followed. One seems to be driven by an interest in the real-world phenomena, which makes people open to altering even basic assumptions if these are important causal factors in the theory but lack empirical support. The other seems to be driven by an interest in theoretical “purity”, which makes people open to creating more and more absurd theories if that is what it takes to retain and defend the core assumptions that define the “discipline” in their own eyes.

This is my take on the issue Krugman discusses in today’s column in the NY Times. He writes about how modern macroeconomics has fallen into an ivory-tower decadence that makes our profession fail in its role as an informed and thoughtful policy adviser. Writing on how business cycles are more persistent than Lucas and the Real Business Cycle people would expect, he claims that the profession divided:

One group went down the “new Keynesian” route, arguing that something such as small costs of changing prices must explain the rigidity we actually seem to see. This group isn’t averse to putting a lot of rationality into its models, but it’s willing to accept aspects of the world that seem clear in the data, even if it can’t (yet?) be fully explained in terms of deep foundations.

The other group decided that since they couldn’t come up with a rigorous microfoundation for price stickiness, there must not be any price stickiness: recessions are the result of adverse technological shocks, not demand shocks.

And the latter group, the equilibrium macro side, was so convinced of the logical correctness of its position that schools dominated by that view stopped teaching demand-side economics.

The really nice thing about Krugman’s column is that he brings up the important point that this matters because we are dealing with real world problems.

And the sad thing is that all of this matters. Our ability as a nation to respond to the current economic crisis is being seriously hampered by the gratuitous ignorance of many of our economists.

Sunday, March 1, 2009

Shame the financial economists?

Thanks to Hans Olav Melberg for this tip:

Nassim “Black Swan” Taleb and Pablo Triana (a derivatives consultant) wrote an opinion piece in the Financial Times last year where they asserted that the risk management techniques taught by financial economists were to blame for hiding the true risks in the economy and allowing the economy to swell up to the bursting point. The opinion piece makes two interesting claims:

1. Many members of the economics profession saw the flaws in these methods but allowed the financial economists to keep their prestige (e.g. the “Nobel” prize in economics) and spread their financial models to students. Those economists are bystanders who failed in their moral duty to stop this before it created our current woes.

2. We will not convince financial economists of their errors through logical persuasion and data – we need to use shame and social pressure:

So how can we displace a fraud? Not by preaching nor by rational argument (believe us, we tried). Not by evidence. Risk methods that failed dramatically in the real world continue to be taught to students in business schools, where professors never lose tenure for the misapplications of those methods. As we are writing these lines, close to 100,000 MBAs are still learning portfolio theory – it is uniformly on the programme for next semester. An airline company would ground the aircraft and investigate after the crash – universities would put more aircraft in the skies, crash after crash. The fraud can be displaced only by shaming people, by boycotting the orthodox financial economics establishment and the institutions that allowed this to happen.

Wednesday, February 18, 2009

Challenge: Formulate your issue in one sentence

When I see criticism of economic theories, it often seems to go on and on, use lots of “ism” words (positivism, pluralism, autism and worse), and be very vague, abstract and wordy. Which is why I pose the following challenge: take your criticism and formulate the main point in one sentence, free from overly abstract philosophical/methodological jargon.

Here’s my attempt:

  1. Against some types of welfare theory: If you have no empirical evidence concerning what people actually care about and what choice alternatives they actually face, then you do not have sufficient information to establish that they are welfare maximizing.
  2. Against some types of “explanatory theories”: If you claim to explain why something is the case, then all sorts of empirical information are relevant in assessing your driving assumptions and proposed mechanism – not just market data.

I don’t feel these two claims should be controversial. They seem rather obvious to me – yet these two “principles” (if accepted) are sufficient, I believe, to kill off many of the weirder (but accepted) theories in economics. My Ph.D. work on rational addiction theory was (in retrospect) basically an attempt to make these two claims and show why they are sufficient to discard many of the claims made in top-5 journals concerning the validity and implications of rational addiction theories:

  • Welfare: As long as you merely observe market choices and develop a “choice model” to accurately reproduce/fit patterns in the market data (whether “stylized facts” or econometrically through structural models) – this does not prove/establish/support the claim that the consumers in question are maximizing their long term welfare. All you’ve proved is that they are acting “as if” they are maximizing some utility function (which you have not shown to be related to their actual welfare) given some choice set (that you have not shown to be the one they are facing). No matter how insanely and self-destructively you act – it will be possible to develop some model that rationalizes this as optimal behavior for some set of preferences and constraints.
  • Explanation: Merely observing economic data and developing a “model” that accurately reproduces/fits patterns in it is insufficient to establish that your proposed causal mechanism is actually at work in the real world. If your theory, e.g., asserts that people have information that does not exist, are solving problems that we know they are not facing, or are influencing the world through relationships that we have no reason to believe in – then these are good reasons to discard your proposed mechanism as an explanation.
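The welfare point above can be made concrete with a deliberately silly sketch (my own illustration, not taken from any of the papers discussed): given any finite record of choices, however erratic, we can always construct a “utility function” under which every observed choice comes out as optimal.

```python
def rationalize(observed_choices):
    """Build a utility table under which every observed choice is 'optimal'.

    observed_choices: list of (choice_set, chosen) pairs.
    Returns a dict mapping (round, option) to a number such that in every
    recorded choice set the chosen option gets the highest utility.
    """
    utility = {}
    for round_no, (choice_set, chosen) in enumerate(observed_choices):
        for option in choice_set:
            # Context-dependent "preferences": indexing options by round lets
            # the chosen one always come out on top, however erratic the data.
            utility[(round_no, option)] = 1.0 if option == chosen else 0.0
    return utility

# Wildly inconsistent behaviour (made-up data)...
data = [({"smoke", "quit"}, "smoke"),
        ({"smoke", "quit"}, "quit"),
        ({"smoke", "quit"}, "smoke")]

u = rationalize(data)
# ...yet every observed choice maximizes the constructed utility:
assert all(u[(i, chosen)] == max(u[(i, o)] for o in cs)
           for i, (cs, chosen) in enumerate(data))
print("every observed choice is 'optimal' under the constructed utility")
```

The trick, of course, is that the constructed utility is fitted to the data after the fact and says nothing about what the chooser actually cares about – which is exactly why “as if” maximization is so cheap to establish.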

Is there some non-obvious flaw in my claims that rescues the weirder economic theories? I’m open to that possibility - please let me know.

I am also interested in other attempts to formulate one-sentence summaries of criticisms of economics.

Monday, February 16, 2009

Have you taken into consideration how your opponent has taken into consideration that you have taken into consideration how they… damn… where was I?

Interesting blogpost on “cognitive hierarchy theory”, which basically seems to involve the empirical examination of how many such steps people use in their actual reasoning when making decisions.

Big surprise: They don’t take it to infinity.

"The cognitive hierarchy theory finds that people only do a few steps of this kind of iterated thinking," [Caltech’s Colin Camerer, Professor of Behavioral Economics] explains. "Usually, it's just one step: I act as if others are unpredictable. But sometimes it's two steps: I act as if others think *I* am unpredictable. You can think of the number of steps a person takes as their strategic IQ. A higher strategic IQ means you are outthinking a lot of other people."

Most of us have a pretty low strategic IQ, but that's to be expected, Camerer notes. To reach a truly high strategic IQ requires either special experience with a particular type of game (such as poker), training, or, in rare cases, special gifts.

Perhaps more interesting to us economists are the implications. The prof says: "We think it means you can fool some of the people some of the time"

That would seem to have “interesting implications” for welfare analysis…. ;-)
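To make the iterated-reasoning idea concrete, here is a small sketch (my own, using the textbook numbers rather than Camerer's data) of level-k guesses in the classic “guess 2/3 of the average” game:

```python
def level_k_guess(k, anchor=50.0, factor=2/3):
    """Guess of a level-k player in the 'guess 2/3 of the average' game.

    A level-0 player guesses the anchor (here the midpoint of [0, 100]);
    a level-k player best-responds to level-(k-1) opponents, which shrinks
    the guess by `factor` at every extra step of iterated thinking.
    """
    guess = anchor
    for _ in range(k):
        guess *= factor
    return guess

for k in range(5):
    print(f"level {k}: guess {level_k_guess(k):.2f}")

# Only a player iterating infinitely many steps reaches the Nash equilibrium
# guess of 0; experimental subjects mostly stop after one or two steps.
```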

On a related note – when I encounter comedy based on this theme I always wonder whether the scriptwriter has studied economics. One example is the mediocre Ben Stiller comedy “Mystery Men”.

And the sitcom Friends made this the central gag of the episode where the friends find out that Monica and Chandler are secretly together. You can even get t-shirts with the line “They don't know that we know they know we know.”

Wednesday, February 4, 2009

Criticizing economics – uhm, how really?

Ole writes about the pitfalls of criticizing economics. Let me take it one step back – how does one do it at all? Compared to any economic model, soap is sticky.

Take simplification: for a start, you cannot attack a model for being unrealistic in any particular aspect, given its need to be simple. I buy that.


But what does one mean by keeping it simple? “Stripping it from everything that is irrelevant for the question at hand” one might answer. Aha. But when undressing reality, how do I know when to stop? And how do I make sure that I can leave out something; that it doesn’t interact in any important way with what I mean to describe (To stay in the dressing room: taking off the belt might mean the trousers are on the floor)?


But it is not only that. If you point out that a model not only simplifies but also rests on wrong assumptions, you are told that they are not wrong but heroic: needed to get at the very essence of what you are interested in. But, hello, are there any rules for those Hercules assumptions? I haven't seen them. Nobody seems to care about justifying assumptions. Rather, anything seems to go as long as rationality is in place, ensuring the mathematics is on board. I see the point of using maths, but I want to make sure that what I end up with is not to economic reality what the board game “Monopoly” is to real estate agents.


Take the pet I am supposed to keep watch over these days, the Cournot model: it describes imperfect competition between a few sellers who have some given capacity and decide on the quantities to put on the market. There is an itch telling me that reality and Cournot do not always get along well. But how do I scratch it?


Do all the other decisions a firm makes not interact or interfere with the choice of quantity? Who am I to judge? Does it matter? And is it a mere simplification? Nope. One example: the model requires all sellers to be perfectly informed, and I don't think I need to make a case here that production costs are business secrets. So the model is not only simplifying, it is making assumptions that one knows are wrong. How can one then argue that it still captures the essence of imperfect competition?


I don’t know, but it is argued. But if it is okay and necessary to have wrong/heroic assumptions, where does that leave the one wanting to criticize the model?


Imperfect competition means higher prices and less supply than under perfect competition. The Cournot model says this happens through each firm restricting quantity while taking the other suppliers' quantities into account, giving, at least theoretically (there are tons of information requirements on the researcher too), an idea of by how much the magnitudes change. Is the Cournot model just a straw man for “higher prices”? Or does it make no claim to describe how those prices are achieved?
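For reference, the textbook symmetric Cournot equilibrium can be computed in a few lines (my own sketch, with made-up demand and cost numbers): with linear demand P = a − bQ and constant marginal cost c, each of n firms produces q = (a − c)/(b(n + 1)), so price sits above marginal cost and falls toward it as the number of firms grows.

```python
def cournot(n, a=100.0, b=1.0, c=20.0):
    """Symmetric Cournot equilibrium with n firms, demand P = a - b*Q,
    and constant marginal cost c. Returns (per-firm quantity, total, price)."""
    q = (a - c) / (b * (n + 1))  # each firm's equilibrium quantity
    Q = n * q
    p = a - b * Q
    return q, Q, p

for n in (1, 2, 5, 100):
    q, Q, p = cournot(n)
    print(f"n={n:3d}: total quantity {Q:6.2f}, price {p:6.2f}")

# As n grows, price falls toward marginal cost (c = 20) and total quantity
# rises toward the perfect-competition level (a - c)/b = 80.
```

Whether this mechanism – quantity restriction under perfect mutual information – is what actually generates the higher prices we observe is, of course, exactly the question raised above.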


My point is that 1) even with an MPhil in Economics (got that on paper) models are still mysterious monsters to me and 2) they don't tell me what type of monster they are, making it hard for me to decide whether to take the sword or the garlic to attack them.

Tuesday, February 3, 2009

Non-economist comes across rational addiction theory

Andrew Gelman is a professor of statistics and political science at Columbia University. In a recent post on the blog “Statistical Modeling, Causal Inference, and Social Science” he comments in passing on rational addiction theory – one of the examples used in a discussion of econometrics that he gives his perspective on:

the study of smoking seems pretty wacky to me. First there is a discussion of "rational addiction." Huh?? Then Ziliak and McCloskey say "cigarette smoking may be addictive." Umm, maybe. I guess the jury is still out on that one . . . .

OK, regarding "rational addiction," I'm sure some economists will bite my head off for mocking the concept, so let me just say that presumably different people are addicted in different ways. Some people are definitely addicted in the real sense that they want to quit but they can't, perhaps others are addicted rationally (whatever that means). I could imagine fitting some sort of mixture model or varying-parameter model. I could imagine some sort of rational addiction model as a null hypothesis or straw man. I can't imagine it as a serious model of smoking behavior.

In response to a comment from an economist who explains briefly that Rational Addiction comes from a seminal paper and has proved influential in health economics, he responds:

My reply to this: Yeah, I figured as much. It's probably a great theory. But, ya know what? If Becker and Murphy want to get credit for being bold, transgressive, counterintuitive, etc etc., the flip side is that they have to expect outsiders like me to think their theory is pretty silly. As I noted in my previous entry, there's certainly rationality within the context of addiction (e.g., wanting to get a good price on cigarettes), but "rational addiction" seems to miss the point. Hey, I'm sure I'm missing the key issue here, but, again, it's my privilege as a "civilian" to take what seems a more commonsensical position here and leave the counterintuitive pyrotechnics to the professional economists.

Wednesday, January 28, 2009

One (of many?) pitfalls when criticizing economics

Criticizing economic theories is dangerous. One false step and you are lumped in with one or another of the anti-economics camps. When that happens – when the reader can “label” you – then the rest of your arguments get interpreted through the perception that you are “one of those anti-math/leftist/ignorant/whatever” guys who don’t understand economics and its methodology, and who can be brushed aside and ignored. For instance, you are…

  • Anti-mathematics and anti-formalism – As Krugman wrote in “Two Cheers for Formalism” (Economic Journal, 1998):

Attacks on the excessive formalism of economics – on its reliance on abstract models, on its use of too much mathematics – have been a constant for the past 150 years. […] much of the criticism of formalism in economics is an attack on a straw man […] when outsiders criticise formalism in economics, their real complaint is often not about method but about content – in particular, they dislike “formalistic” arguments not because they are formalistic, but because they refute their pet doctrines.

  • Biased and ideological. Put simply: You don’t like the way reality works, and reject economics for the same reason that Intelligent Design people reject Darwin: it’s not how you want the world to be. You hate free trade, or want to sharply increase minimum wage rates, or want to ban drugs without getting the higher prices that finance crime. Since (some) economic results contradict your world-view, you reject economics.
  • Afraid of simplification. You don’t understand that all theories need to simplify, and you believe that any deviation from the fully detailed world around you is sufficient to throw out a theory. Whereas we economists understand that it is by abstracting away from irrelevant details that we can uncover the main determining factors and the characteristics of the underlying mechanisms at work.
  • Afraid of human nature. You don’t want people to be selfish; you want to live in a Pollyanna world where everyone cares about each other and people are warm, empathetic beings – whereas we economists are able to face the truth that we are (also) influenced by cold, hard cash and the desire for selfish gratification of our own desires.