Friday, January 28, 2011

Ethical economists again...

Alex Tabarrok at Marginal Revolution applauds Glaeser's take on the ethical basis of economics and quotes a textbook he has co-authored with Tyler Cowen, which he claims makes a similar point (see below).

In this case, I'd say their "take" is only superficially similar: it presupposes more. Glaeser's point was that you make an assumption when you jump from "the person chose A over B" to "A is better for the person than B," and that "preferences," before you make this jump, refer to nothing more than what you would observe the person choosing. Cowen and Tabarrok, on the other hand, write as though they've already made this jump. More specifically, they seem to beg the question when they state that economists don't second-guess people's "preferences," do "not regard some preferences as better than others," and don't mind if people "like" wrestling better than opera. Choice, here, is already taken as (always?) an expression of what best serves the choosing person's actual tastes and judgments.

Even though the predictions of economics are independent of any ethical theory, there are ethical ideas behind normative economic reasoning. An economist who rejects the idea of exploitation in kidney purchases, for example, is treating the seller of kidneys with respect—as a person who is capable of choosing for himself or herself even in difficult circumstances.

Similarly, economists don’t second-guess people’s preferences very much. If people like wrestling more than opera, then so be it; the economist, acting as economist, does not regard some preferences as better than others. In normative terms, economists once again tend to respect people’s choices.

None of this is to say that economists are always right in their ethical assumptions. As we warned you in the beginning, this chapter has more questions than answers. But the ethical views of economists—respect for individual choice and preference, support for voluntary trade, and equality of treatment—are all ethical views with considerable grounding and support in a wide variety of ethical and religious traditions.


Thursday, January 27, 2011

Why economics should not have a "moral core"

Here's a nice take on the Glaeser piece I noted below:

going beyond the facts is precisely what Glaeser's "moral compass" would have us do. Suppose psychologists find that people are less happy when they have the choice to become addicted to heroin. Should economists refuse to accept this fact? If I were an economist studying heroin addiction, I'd say: "Hey, policymakers and voters, here's the deal. If you let people do heroin, their happiness will go down, but they'll have more freedom of choice. I'll let you guys decide what to do."
Glaeser disagrees. He seems to think the economist's duty is either to A) recommend the alternative that entails more freedom of choice, or B) disbelieve the finding that allowing heroin use reduces happiness. (A) is saying that economists, as a class, have a fixed and definite role in deciding society's morality, kind of like a priesthood. (B) is saying that intellectual honesty and scientific integrity must take a back seat to a faith-based belief system.
Whichever he is saying, I highly disapprove.
If we limit our set of economic theories to those that seem to recommend individual freedom of choice - if we give economics a "moral compass" - we are refusing to take an honest look at the way the world really operates. That, in my moral opinion, is bad science.
And don't think that this doesn't happen in practice. Many of the same economists who espouse a belief in individual freedom of choice are biased against theories that imply a need for collective decision-making. They routinely refuse to believe in the existence of public goods, demand fluctuations, and other phenomena that imply a role for government. Behavioral economic theories, which assert that people are sometimes irrational, are routinely pooh-poohed by "conservative" economists, regardless of the mountain of laboratory evidence in those models' favor.

In short, the widespread belief that economists should act both as scientists and as priests has made them less effective as scientists.

Tim Harford pokes fun at rational addiction

He twittered my rational addiction video late last year and includes it in a side post, so maybe I can take a small crumb of credit for the idea? (His take is much more accessible to non-economists, though...)


I wasn’t always an alcoholic tramp. I am a man of letters. I studied Philosophy, Politics and Economics at Oxford, like that David Cameron fellow. But when I looked at the options open to me – over-worked banker, castrated civil-servant or, worst of all, parliamentarian – I decided that the optimal course of affairs would be to begin building up my stock of addictive capital.

I don’t want to romanticise life as a rough-sleeping bum. It gets cold and lonely. I’m not sure what is keeping my underpants together, though I’m sure they wouldn’t survive contact with suds and warm water. But my boozy existence has a cool, calculating logic. I know that seems odd, but thankfully Gary Becker and Kevin Murphy, two of the University of Chicago’s most celebrated economists, have worked out the details in their theory of rational addiction.

For sure, not everything is perfect. But I’m a rational addict; a utility-maximising old soak. I drink because it makes sense to do so – by following an ex-ante optimal inter-temporal consumption plan, as they say. Speaking of which, let me crack open a bottle of strong cider … that’s better.


The fundamental leap of welfare economics

I like this. This is honest. It's stupid, but it's honest. And clear: "Improved welfare" means nothing more than "new, previously not available choice option was picked". That's all. All of standard welfare economics presupposes that the best (only?) way to identify the welfare-maximizing choice available to an individual is to see what he chooses when left alone.

Maybe someone who didn't believe me will believe it when it comes from a Harvard economist praised by both Akerlof and Gary Becker. Here's Ed Glaeser:

Teachers of first-year graduate courses in economic theory, like me, often begin by discussing the assumption that individuals can rank their preferred outcomes. We then propose a measure — a ranking mechanism called a utility function — that follows people’s preferences.

If there were 1,000 outcomes, an equivalent utility function could be defined by giving the most favored outcome a value of 1,000, the second best outcome a value of 999 and so forth. This “utility function” has nothing to do with happiness or self-satisfaction; it’s just a mathematical convenience for ranking people’s choices.

But then we turn to welfare, and that’s where we make our great leap.

Improvements in welfare occur when there are improvements in utility, and those occur only when an individual gets an option that wasn’t previously available. We typically prove that someone’s welfare has increased when the person has an increased set of choices.

When we make that assumption (which is hotly contested by some people, especially psychologists), we essentially assume that the fundamental objective of public policy is to increase freedom of choice.
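Glaeser's construction can be written out in a few lines. The sketch below is only an illustration of the ordinal point and the welfare "leap"; the outcome names and function names are mine, not his:

```python
# A minimal sketch of Glaeser's point: an ordinal "utility function" is
# nothing but a relabeling of a preference ranking, and the welfare leap
# is the move from "chose it" to "better off with it".

def utility_from_ranking(ranked_outcomes):
    """Give the most-preferred outcome the highest number, as in
    Glaeser's 1,000-outcomes example. Only the ordering matters."""
    n = len(ranked_outcomes)
    return {outcome: n - i for i, outcome in enumerate(ranked_outcomes)}

# Outcomes listed from most to least preferred:
u = utility_from_ranking(["opera", "wrestling", "staying home"])
assert u["opera"] > u["wrestling"] > u["staying home"]

def chooses(choice_set, u):
    """Revealed preference: the agent picks the available outcome
    with the highest utility number."""
    return max(choice_set, key=lambda x: u[x])

# The welfare "leap": enlarging the choice set can only leave the
# chosen option's utility number the same or higher.
small = {"wrestling", "staying home"}
large = small | {"opera"}
assert u[chooses(large, u)] >= u[chooses(small, u)]
```

Note that the last assertion is true by construction, which is exactly the point: within this framework, "more choice raises welfare" is not an empirical finding but a consequence of the definitions.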


Monday, January 24, 2011

The consensus is...

The blog below notes some instances of stated "consensus" in science in the last 25 years that are no longer the consensus. This kind of thing is tricky, though. It has to do with the optimal level of trust towards people in our (and other) disciplines. If we always took other people's results and claims on good faith, scientific progress would slow and maybe halt as false results were accepted on authority. On the other hand, if we never accepted other people's results and claims, we would open ourselves to lots of beliefs that are wrong or even ridiculous with a high probability, especially outside our own domain of specialization. We want a diversity of views challenging each other in order for the scientific process to work, but we also want the scientific process to lead to "consensus" views that we can feel reasonably confident in.


Robin Hanson reminds us that the scientific consensus is often wrong. Ron Bailey did a Nexis search of the phrase 'scientific consensus' over the past 25 years, and found the following:
  • saccharin causes cancer in humans
  • dietary fiber appeared to reduce the incidence of colon cancer.
  • agents found to cause cancer in animals should be considered suspect human carcinogens
  • fusion energy reactors would produce more energy than they consumed within five years
  • acid rain is destroying lakes and forests

These are no longer consensus findings. He did find the phrase 'scientific consensus' in regards to uncertainty about when life starts, which probably still stands. Yet in all, that's a pretty weak record for the consensus.


Thursday, January 20, 2011

Would people accept a market in organs?

Here's a guy riled up about how the liver transplant Steve Apple-and-Pixar Jobs got a couple of years ago, after having had pancreatic cancer, may have been "wasted" if his recent health leave is due to the cancer returning. The reason he got the liver, according to this guy and his sources, was that he had the financial resources to "shop around" in the different state health systems.

Makes me curious how a purely market-driven system would work. On the one hand, the fact that such "gaming" is EXTREMELY expensive now may make each individual case stand out more; in a market system, seeing a rich guy get his organ first would be an everyday occurrence. On the other hand, if rich people could always buy their way to the front of the line, there would likely be many more cases of organs going to people who for other medical reasons may be poor recipients. If you have the cash and you're willing to gamble: here's your organ, even if the chances of success are low.

Maybe there would be supply-side reactions - people signing up for donation-under-specified-criteria? Or maybe the market revenue at stake would become a force for changing the default donation rules (which have an enormous impact on donation rates: with roughly 20 percent deviating from the default, we can choose between donation rates of about 80 percent or 20 percent, if I remember "Nudge" correctly). Such factors would also make "gaming" and "medically suboptimal transplants" less salient. And I guess we should never underestimate the ability market players have for making things less transparent - whether it involves hiding how blood-stained the Coltan from Congo in your cell phone is, how some financial-sector legislation gets designed to benefit the financial sector, or how factory farming of hogs, turkeys and chickens is... well, evil (read Jonathan Safran Foer's book "Eating Animals").
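The default-rule arithmetic I'm recalling from "Nudge" works out like this (the 20 percent deviation figure is my rough memory, not an exact number from the book):

```python
# Default effects in organ donation: roughly the same share of people
# deviate from the default either way, so the default itself largely
# determines the donation rate. The 0.20 figure is approximate.
deviation_rate = 0.20  # share who actively switch away from the default

donation_rate_opt_out = 1 - deviation_rate  # default = donor     -> ~80%
donation_rate_opt_in = deviation_rate       # default = non-donor -> ~20%

assert abs(donation_rate_opt_out - 0.80) < 1e-9
assert abs(donation_rate_opt_in - 0.20) < 1e-9
```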


Two years ago, Jobs gamed the transplant allocation system to get a liver that could have saved somebody else. At the time, skeptics doubted that he should have received the organ, since he'd been treated for pancreatic cancer—in fact, he may have sought the liver because of the cancer—and the likelihood of the cancer's recurrence made him a bad bet for putting the liver to best use. If his health is now failing because of the cancer, that suspicion may be vindicated.

Jobs lives in Northern California, but he got his liver in Tennessee. Why? Different parts of the country have different waiting lists, and the wait in Northern California was three times longer than the wait in Tennessee. In fact, the median wait in the Tennessee area where Jobs snagged his liver was around 15 percent of the national average. Jobs confirmed last year that this is why he went to Tennessee: "My doctors here advised me to enroll in a transplant program in Memphis, Tennessee, where the supply/demand ratio of livers is more favorable than it is in California here."* Legally, you're allowed to get on multiple waiting lists around the country. That's how you game the system.

So why doesn't everybody do this? Because they can't. First you have to show up for an extensive in-person evaluation. Then you have to be available for a transplant in the area within hours of an organ becoming available. And while one jurisdiction might accept you as a charity case, if you want to play the field you'll have to prove you can pay for the transplant yourself. You also get priority points for being able to guarantee follow-up medical care, since this assures transplant allocators that the organ will be well cared for. Ordinary people can't compete with billionaires at meeting these tests. They can't go to multiple states for evaluations. They don't have private jets. Their insurance doesn't cover multiple evaluations and may not cover much of the half-million dollar transplant, much less the follow-up care.

there were roughly 16,000 people on the national liver waiting list when Jobs got a liver. He was one of 1,581 people who got livers in the United States in the first quarter of [2009]. Almost none of those people had any form of cancer. In fact, if Jobs' tumor has spread from his pancreas into his liver as is likely, some transplant surgeons say that they would not recommend a liver transplant because there is no data that shows a transplant will stop or even slow the spread of the cancer. This raises the question: Is this the best use of a liver?


Tuesday, January 18, 2011

Predictions - don't trust the odd one out.... but don't trust the rest either...

Yesterday I noted that the guys who are good at predicting extreme outcomes suck at predictions anyway, because they're crying "Wolf! Wolf!" all the time. It reminded me of something I read in a book arguing that "the rest" just say the same thing as each other without being correct. From Mark Buchanan's "The Social Atom":

A few years ago, for example, the economics consultancy London Economics assessed the recent predictions of more than thirty of the top British economic forecasting groups, including the Treasury, the National Institute, and the London Business School. They concluded:

It is a conventional joke that there are as many different opinions about the future of the economy as there are economists. The truth is quite the opposite. Economic forecasters . . . all say more or less the same thing at the same time; the degree of agreement is astounding. The differences between forecasts are trivial relative to the differences between the forecasts and what happens . . . what they say is almost always wrong . . . the consensus forecast failed to predict any of the most important developments in the economy over the past seven years […]

Monday, January 17, 2011

Today's best prediction is that things are gonna stay mostly the same...

...and don't you let anyone tell you otherwise...


We reserve a special place in society for those who promise genuine insights into the future — who can predict what will happen in business, in sports, in politics, technology, and so on. The media landscape is rich with these experts; Wall Street pays millions of dollars every year to analysts to put a precise dollar figure on next year’s company earnings. Those who manage to get a few big calls right are rewarded handsomely, either in terms of lucrative gigs or the adoration of a species that so needs to believe that the future is in fact predictable.

But are such people really better at predicting the future than anyone else?

To find the answer, Denrell and Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.”

Their work is the latest in a long line of research dismantling the notion that predictions are really worth anything. The most notable work in the field is “Expert Political Judgment” by Philip Tetlock of the University of Pennsylvania. Tetlock analyzed more than 80,000 political predictions ventured by supposed experts over two decades to see how well they fared as a group. The answer: badly. The experts did about as well as chance. And the more in-demand the expert, the bolder, and thus the less accurate, the predictions. Research by a handful of others, Denrell included, suggests the same goes for economic forecasters. An accurate prediction — of an extreme event or even a series of nonextreme ones — can beget overconfidence, which can lead to making bolder and bolder bets, and thus, more and more errors.

There’s no great, complex explanation for why people who get one big thing right get most everything else wrong, argues Denrell. It’s simple: Those who correctly predict extreme events tend to have a greater tendency to make extreme predictions; and those who make extreme predictions tend to spend most of the time being wrong — on account of most of their predictions being, well, pretty extreme. There are few occurrences so out of the ordinary that someone, somewhere won’t have seen them coming, even if that person has seldom been right about anything else.
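Denrell's simple explanation can be reproduced in a toy simulation. The numbers and decision rules below are invented for illustration; this is not the study's actual data or method, just the mechanism it describes:

```python
import random

# Toy version of the Denrell-Fang mechanism: a forecaster who predicts
# "extreme" often will catch more of the rare extreme events, but will
# be wrong more often overall.

random.seed(0)

T = 2000
# The world: an "extreme" outcome (1) happens 5% of the time, else moderate (0).
outcomes = [1 if random.random() < 0.05 else 0 for _ in range(T)]

def forecaster(p_extreme):
    """A forecaster who predicts 'extreme' with fixed probability p_extreme."""
    return [1 if random.random() < p_extreme else 0 for _ in range(T)]

bold = forecaster(0.50)   # cries wolf half the time
timid = forecaster(0.02)  # almost never predicts an extreme

def extreme_hits(pred):
    """Extreme events the forecaster correctly called."""
    return sum(1 for p, o in zip(pred, outcomes) if p == 1 and o == 1)

def accuracy(pred):
    """Share of all periods predicted correctly."""
    return sum(1 for p, o in zip(pred, outcomes) if p == o) / T

# The bold forecaster catches far more extremes...
assert extreme_hits(bold) > extreme_hits(timid)
# ...yet has the worse overall record.
assert accuracy(timid) > accuracy(bold)
```

Nothing deep is needed to get the pattern: when extremes are rare, the base rate rewards whoever almost never predicts them.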


Sunday, January 16, 2011

Macroeconomics yet again: The disinterest in reality

DeLong from last year with two questions: Why do good macroeconomists seemingly find "patently unrealistic" theories acceptable? And why don't they feel they need to make their theories consistent with the evidence described and collected by economic historians?

Again, I feel the answer has to involve the strategies and attitudes towards empirical facts, knowledge and data that economists too frequently allow. The types of arguments and challenges economists face in seminars and from referees and editors make it necessary to be consistent with current theoretical fads, and remove the need to take into account certain types of evidence and arguments. Provided you know the right incantations and spells ("this is just an as-if theory," "these are standard assumptions," etc.), I'm confident you can ward off even the economic-history boot camp that DeLong proposes.

two questions:

First, it does not seem to me that it is the case that nobody really believes these just-so stories. Ed Prescott of Arizona State University really does believe that large-scale recessions are caused by economy-wide episodes of the forgetting of the technological and organizational knowledge that underpins total factor productivity—with the exception of episodes like the Great Depression, which Prescott says was caused by the extraordinary pro-labor pro-union policies of Herbert Hoover that pushed real wages far above equilibrium values. Casey Mulligan of the University of Chicago really does appear to believe that large falls in the employment-to-population ratio are best seen as “great vacations”—and as the side-effects of destructive government policies like those in place today, which are leading workers to quit their jobs so they can get higher government subsidies to refinance their mortgages. (I know; I find it incredible too.) Things that strike Kocherlakota as “patently unrealistic” are not viewed as such by many of his modern macroeconomic peers and colleagues. Why not? Why do they find these just-so stories satisfactory?

Second, whether modern macroeconomics attributes our current difficulties either to causes that I agree with Kocherlakota are “patently unrealistic” or simply confesses ignorance, why do they have such a different view than we economic historians do? Whether they have rejected our interpretations and understandings or simply have built up or failed to build up their own in ignorance of what we have done, why have they not taken and used our work?

The second question is particularly disturbing to me. There is, after all, no place for economic theory of any flavor to come from than from economic history. Someone observes some instructive case or some anecdotal or empirical regularity, says “this is interesting; let's build a model of this,” and economic theory is off and running. Theory is crystalized history—it can be nothing more. After the initial crystalization it does develop on its own according to its own intellectual imperatives and processes, true, but the seed is still there. What happened to the seed?

This situation is personally and professionally dismaying. I do not say that the macroeconomic model-building of the past generation has been pointless. I don’t think that it has been pointless. But I do think that the assembled modern macroeconomists need to be rounded up, on pain of loss of tenure, and sent to a year-long boot camp with the assembled monetary historians of the world as their drill sergeants. They need to listen to and learn from Dick Sylla about Cornelius Buller’s bank rescue of 1825 and Charlie Calomiris about the Overend, Gurney crisis and Michael Bordo about the first bankruptcy of Baring brothers and Barry Eichengreen and Christy Romer and Ben Bernanke about the Great Depression.

If modern macroeconomics does not reconnect—if they do not realize just what their theories are crystallized out of, and what the point of the enterprise is—then they will indeed wither and die.


Thursday, January 13, 2011

"The profession danced around the wrong models..."

More macro-criticism from last year - this time quotes from Joseph Stiglitz. And again, it's a case of "those guys used these models, which I think are stupid. Luckily, other people used these models which I think are smart - and these are the ones we should start using."

Again - I miss a focus on evidence and methodology: if these critics are right that our profession allowed madness to reign, how can we avoid this in the future? What should we demand from researchers who claim that they can guide policy, explain society, etc.? Surely we need to do better than "they should employ the assumptions and modelling approaches that I find reasonable and that lead to the conclusions I am comfortable with"?

It is hard for non-economists to understand how peculiar the predominant macroeconomic models were. Many assumed demand had to equal supply – and that meant there could be no unemployment. (Right now a lot of people are just enjoying an extra dose of leisure; why they are unhappy is a matter for psychiatry, not economics.) Many used “representative agent models” – all individuals were assumed to be identical, and this meant there could be no meaningful financial markets (who would be lending money to whom?). Information asymmetries, the cornerstone of modern economics, also had no place: they could arise only if individuals suffered from acute schizophrenia, an assumption incompatible with another of the favored assumptions, full rationality.

Bad models lead to bad policy: central banks, for instance, focused on the small economic inefficiencies arising from inflation, to the exclusion of the far, far greater inefficiencies arising from dysfunctional financial markets and asset price bubbles. After all, their models said that financial markets were always efficient. Remarkably, standard macroeconomic models did not even incorporate adequate analyses of banks...: even a cursory look at the perverse incentives confronting banks and their managers would have predicted short-sighted behavior with excessive risk-taking. ...

Fortunately, while much of the mainstream focused on these flawed models, numerous researchers were engaged in developing alternative approaches. ... With a few exceptions, most central banks paid little attention to systemic risk and the risks posed by credit interlinkages. Years before the crisis, a few researchers focused on these issues, including the possibility of the bankruptcy cascades that were to play out in such an important way in the crisis. This is an example of the importance of modeling carefully complex interactions among economic agents (households, companies, banks) – interactions that cannot be studied in models in which everyone is assumed to be the same. Even the sacrosanct assumption of rationality has been attacked: there are systemic deviations from rationality and consequences for macroeconomic behavior that need to be explored.

Changing paradigms is not easy. Too many have invested too much in the wrong models. Like the Ptolemaic attempts to preserve earth-centric views of the universe, there will be heroic efforts to add complexities and refinements to the standard paradigm. The resulting models will be an improvement and policies based on them may do better, but they too are likely to fail. Nothing less than a paradigm shift will do.

Ronald Coase on good and bad economics

This post contains no argument or data or big insight. File it under "Hey! Somebody famous said something I like the sound of!"

RC: The bad or wrong economics is what I called the "blackboard economics". It does not study the real world economy. Instead, its efforts are on an imaginary world that exists only in the mind of economists, for example, the zero-transaction cost world.
Ideas and imaginations are terribly important in economic research or any pursuit of science. But the subject of study has to be real.

Wednesday, January 12, 2011

The Empire strikes back.... Of course it does.

I'm working my way (backwards) through a pile of blog postings that I wanted to read. Here's a couple of quotes from a longer interview in which new classical economist Thomas Sargent brushes off the silly criticism that has been directed at modern macro: it is foolish, intellectually lazy, and misinformed. Basically, they were doing very well, were not at all surprised by the financial crisis, and there's already interesting work available on how to generate this stuff within their models.

Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.
Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of “rational expectations” is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay “real business cycle model” is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?
Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.
Rolnick: Putting aside fear and ignorance of math, please say more about the other criticisms.
Sargent: Sure. As for the efficient markets hypothesis of the 1960s, please remember the enormous amount of good work that responded to Hansen and Singleton’s ruinous 1983 JPE [Journal of Political Economy] finding that standard rational expectations asset pricing theories fail to fit key features of the U.S. data. Far from taking the “efficient markets” outcomes for granted, important parts of modern macro are about understanding a large and interesting suite of asset pricing puzzles, brought to us by Hansen and Singleton and their followers—puzzles about empirical failures of simple versions of efficient markets theories. Here I have in mind papers on the “equity premium puzzle,” the “risk-free rate puzzle,” the “Backus-Smith” puzzle, and on and on.
Rolnick: What about the most serious criticism—that the recent financial crisis caught modern macroeconomics by surprise?
Sargent: Art, it is just wrong to say that this financial crisis caught modern macroeconomists by surprise. That statement does a disservice to an important body of research to which responsible economists ought to be directing public attention. Researchers have systematically organized empirical evidence about past financial and exchange crises in the United States and abroad. Enlightened by those data, researchers have constructed first-rate dynamic models of the causes of financial crises and government policies that can arrest them or ignite them. The evidence and some of the models are well summarized and extended, for example, in Franklin Allen and Douglas Gale’s 2007 book Understanding Financial Crises. Please note that this work was available well before the U.S. financial crisis that began in 2007.

Market failure in academic economics?

The two quotes below (first one by Paul Krugman and second by Laurence Meyer) claim that subdisciplines in Economics veer off into absurdity and silliness because you get fads: If you want to publish in peer-reviewed, well-esteemed journals, then you need to use this or that modelling approach even if it has nothing to do with reality.

The thing that gnaws at me when I read this, though, is the worry that this is one group of economists annoyed that "their" fad is not the one controlling the top journals. The claim here is pretty strong: it's not that you're accepted into top journals in spite of your absurdity; the claim is that you're accepted ONLY IF you embed your work in the currently popular absurdity. Silliness is a necessary condition for being published.

Seems to me this kind of situation presupposes a clearly crippled "scientific" culture: unless there were accepted and widespread strategies for downplaying, ignoring or lying about the fact that your model has little or nothing to do with reality, the problem discussed by these authors would be limited to small bubbles of absurdity that were widely ridiculed in the broader economics community.

By the early 1980s it was already common knowledge among people I hung out with that the only way to get non-crazy macroeconomics published was to wrap sensible assumptions about output and employment in something else, something that involved rational expectations and intertemporal stuff and made the paper respectable. And yes, that was conscious knowledge, which shaped the kinds of papers we wrote. So you could do exchange rate models that actually had realistic assumptions about prices and employment, but put the focus on rational expectations in the currency market, so that people really didn’t notice. Or you could model optimal investment choices, with the underlying framework fairly Keynesian, but hidden in the background. And so on.

the real business cycle or neoclassical models. It’s what’s taught in graduate schools. It’s the only kind of paper that can be published in journals. It is called “modern macroeconomics.”

The question is, what’s it good for? Well, it’s good for getting articles published in journals. It’s a good way to apply very sophisticated computational skills. But the question is, do those models have anything to do with reality? Models are always a caricature—but is this a caricature that’s so silly that you wouldn’t want to get close to it if you were a policymaker?


Tuesday, January 11, 2011

Agent-based modelling - sensible when modelling machines?

Agent-based modelling defines simple agents and lets them act and interact in a simulation. A common complaint is that this is stupid because people are more sophisticated than these agents are. However (cf. the last post), when the major share of trading is done by algorithmic software, this objection loses force. Surely, software should be able to mimic software. In principle, it should even be possible to collect actual trading software and pit these programs against each other in a virtual, purely simulated market. In practice, I doubt firms would let their "proprietary" software out of their sight.

BTW - I'm not saying the guy mentioned below (LeBaron) has found genuine results or that this method is splendid. I don't know enough about it to make any such claims. But the method sounds interesting: let agents with different hypotheses about their environment, and different ways of using past information to distill patterns and predictions, compete in an evolving ecosystem where growth is related to past success - and see what aggregate outcomes, persistent regularities and interesting properties we get out of it.
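To make the idea concrete, here is a toy sketch of that kind of ecosystem - not LeBaron's actual model, just my own minimal illustration with invented names and parameters. Agents hold competing hypotheses (trend-following vs. mean-reversion), trade on them, and the worst performer occasionally copies the best one, so the mix of strategies evolves with past success:

```python
import random

random.seed(42)

class Agent:
    """A trader with one simple hypothesis: trend-following or mean-reverting."""
    def __init__(self, style, strength):
        self.style = style          # "trend" or "revert"
        self.strength = strength    # how aggressively the agent trades
        self.wealth = 100.0

    def demand(self, prices):
        """Desired trade based on the last price move."""
        move = prices[-1] - prices[-2]
        if self.style == "trend":   # bet the move continues
            return self.strength * move
        else:                       # bet the move reverses
            return -self.strength * move

def simulate(n_agents=100, steps=200):
    agents = [Agent(random.choice(["trend", "revert"]),
                    random.uniform(0.1, 1.0)) for _ in range(n_agents)]
    prices = [100.0, 100.5]
    for _ in range(steps):
        net = sum(a.demand(prices) for a in agents)
        # Price impact: excess demand pushes the price up, excess supply down
        new_price = max(1.0, prices[-1] + 0.01 * net + random.gauss(0, 0.2))
        realized = new_price - prices[-1]
        # Reward agents whose demand pointed the right way
        for a in agents:
            a.wealth += a.demand(prices) * realized
        prices.append(new_price)
        # Evolution: the worst performer occasionally copies the best one
        if random.random() < 0.2:
            worst = min(agents, key=lambda a: a.wealth)
            best = max(agents, key=lambda a: a.wealth)
            worst.style, worst.strength = best.style, best.strength
    return prices

prices = simulate()
```

Even something this crude never settles into a flat equilibrium price: the strategy mix keeps shifting, so the price keeps moving. That is the flavour of the approach, nothing more.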

Amplify’d from

'stability' is a word few would use to describe the chaotic markets of the past few years, when complex, nonlinear feedbacks fuelled the boom and bust of the dot-com and housing bubbles, and when banks took extreme risks in pursuit of ever higher profits.

In an effort to deal with such messy realities, a few economists — often working with physicists and others outside the economic mainstream — have spent the past decade or so exploring 'agent-based' models that make only minimal assumptions about human behaviour or inherent market stability (see page 685). The idea is to build a virtual market in a computer and populate it with artificially intelligent bits of software — 'agents' — that interact with one another much as people do in a real market. The computer then lets the overall behaviour of the market emerge from the actions of the individual agents, without presupposing the result.

Agent-based models have roots dating back to the 1940s and the first 'cellular automata', which were essentially just simulated grids of on–off switches that interacted with their nearest neighbours. But they didn't spark much interest beyond the physical-science community until the 1990s, when advances in computer power began to make realistic social simulations more feasible. Since then they have found increasing use in problems such as traffic flow and the spread of infectious diseases (see page 687). Indeed, points out Helbing, agent-based models are the social-science analogue of the computational simulations now routinely used elsewhere in science to explore complex nonlinear processes such as the global climate.

LeBaron has spent the past decade and a half working with colleagues, including a number of physicists, to develop an agent-based model of the stock market. In this model, several hundred agents attempt to profit by buying and selling stock, basing their decisions on patterns they perceive in past stock movements. Because the agents can learn from and respond to emerging market behaviour, they often shift their strategies, leading other agents to change their behaviour in turn. As a result, prices don't settle down into a stable equilibrium, as standard economic theory predicts. Much as in the real stock market, the prices keep bouncing up and down erratically, driven by an ever-shifting ecology of strategies and behaviours.

Nor is the resemblance just qualitative, says LeBaron. Detailed analyses of the agent-based model show that it reproduces the statistical features of real markets, especially their susceptibility to sudden, large price movements. "Traditional models do not go very far in explaining these features," LeBaron says.
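The 1940s-era cellular automata mentioned above - grids of on-off switches interacting with their nearest neighbours - are simple enough to sketch in a few lines. This is a standard one-dimensional "elementary" automaton (the rule number is just a lookup table over the three-cell neighbourhood), which I include only to show how little machinery the ancestor of these models needed:

```python
def step(cells, rule=90):
    """One update of an elementary cellular automaton: each on/off cell
    looks at itself and its two nearest neighbours (wrapping at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (me << 1) | right   # 3-bit neighbourhood
        out.append((rule >> pattern) & 1)           # rule-table lookup
    return out

# A single 'on' cell in the middle; rule 90 famously grows a Sierpinski triangle
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(10):
    cells = step(cells)
    history.append(cells)
```

From switches like these to artificially intelligent trading agents is a long road, but the basic recipe - local rules, emergent global behaviour - is the same.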


Wall Street 3 - The Rise of the Machines

Discussed algorithmic trading recently with an economics professor and mentioned the claim that it can create instabilities. For instance, if too many traders use similar programs, these can create feedback loops in which sales trigger new sales that trigger new sales, and so on. Or the mix of currently existing strategies may interact in such a way that they create a bubble. The response was that this is unlikely to happen - as traders will take into account the nature of and interactions between the different strategies that are out there. As Robert Anton Wilson said, you can always see the invisible hand as long as you look hard enough and long enough (or something along those lines).
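The sales-trigger-sales mechanism is easy to illustrate. Here is a deliberately stylized sketch of my own (the numbers are invented, not calibrated to anything): each trader runs a stop-loss rule, every sale pushes the price down a bit, and a lower price trips the next thresholds - so one small cluster of sales can liquidate the whole market:

```python
import random

random.seed(1)

def cascade(price, thresholds, impact=0.5):
    """Each trader sells once the price falls below its stop-loss threshold.
    Every sale depresses the price, which can trip further thresholds --
    sales triggering sales."""
    sold = [False] * len(thresholds)
    rounds = 0
    while True:
        triggered = [i for i, t in enumerate(thresholds)
                     if not sold[i] and price < t]
        if not triggered:
            break
        for i in triggered:
            sold[i] = True
            price -= impact          # each sale pushes the price down further
        rounds += 1
    return price, sum(sold), rounds

# 100 stop-loss thresholds clustered just below the current price of 100;
# the price has just slipped to 99, breaching the highest thresholds
thresholds = [random.uniform(95, 99.5) for _ in range(100)]
final_price, n_sales, rounds = cascade(99.0, thresholds)
```

No single rule here is irrational, and each is trivial to "take into account" in isolation; it is the interaction that produces the crash. Whether real traders can anticipate and neutralize such interactions, as the professor claimed, is exactly the open question.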

Amplify’d from
Over the past decade, algorithmic trading has overtaken the industry. From the single desk of a startup hedge fund to the gilded halls of Goldman Sachs, computer code is now responsible for most of the activity on Wall Street. (By some estimates, computer-aided high-frequency trading now accounts for about 70 percent of total trade volume.) Increasingly, the market’s ups and downs are determined not by traders competing to see who has the best information or sharpest business mind but by algorithms feverishly scanning for faint signals of potential profit.
at its worst, it is an inscrutable and uncontrollable feedback loop. Individually, these algorithms may be easy to control but when they interact they can create unexpected behaviors—a conversation that can overwhelm the system it was built to navigate. On May 6, 2010, the Dow Jones Industrial Average inexplicably experienced a series of drops that came to be known as the flash crash, at one point shedding some 573 points in five minutes. Less than five months later, Progress Energy, a North Carolina utility, watched helplessly as its share price fell 90 percent. Also in late September, Apple shares dropped nearly 4 percent in just 30 seconds, before recovering a few minutes later.

And they started applying those methods to every aspect of the financial industry. Some built algorithms to perform the familiar function of discovering, buying, and selling individual stocks (a practice known as proprietary, or “prop,” trading). Others devised algorithms to help brokers execute large trades—massive buy or sell orders that take a while to go through and that become vulnerable to price manipulation if other traders sniff them out before they’re completed. These algorithms break up and optimize those orders to conceal them from the rest of the market. (This, confusingly enough, is known as algorithmic trading.) Still others are used to crack those codes, to discover the massive orders that other quants are trying to conceal. (This is called predatory trading.)

The result is a universe of competing lines of code, each of them trying to outsmart and one-up the other. “We often discuss it in terms of The Hunt for Red October, like submarine warfare,” says Dan Mathisson, head of Advanced Execution Services at Credit Suisse. “There are predatory traders out there that are constantly probing in the dark, trying to detect the presence of a big submarine coming through. And the job of the algorithmic trader is to make that submarine as stealth as possible.”

In late September, the Commodity Futures Trading Commission and the Securities and Exchange Commission released a 104-page report on the May 6 flash crash. The culprit, the report determined, was a “large fundamental trader” that had used an algorithm to hedge its stock market position. The trade was executed in just 20 minutes—an extremely aggressive time frame, which triggered a market plunge as other algorithms reacted, first to the sale and then to one another’s behavior. The chaos produced seemingly nonsensical trades—shares of Accenture were sold for a penny, for instance, while shares of Apple were purchased for $100,000 each. (Both trades were subsequently canceled.) The activity briefly paralyzed the entire financial system.