An essay I’d been meaning to read for some time, from the Financial Times blog, is Willem Buiter’s “The unfortunate uselessness of most ’state of the art’ academic monetary economics”. It was well written and entertaining. I’m tempted simply to quote amusing passages such as
Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH [i.e. Efficient Market Hypothesis] in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education.
However, his serious points deserve mention, although I’m not qualified to state the extent to which he is on target regarding the state of the macro literature. Here are some snippets, but the whole thing is recommended:
On complete markets:
The most influential New Classical and New Keynesian theorists all worked in what economists call a ‘complete markets paradigm’. In a world where there are markets for contingent claims trading that span all possible states of nature (all possible contingencies and outcomes), and in which intertemporal budget constraints are always satisfied by assumption, default, bankruptcy and insolvency are impossible.[…]
Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked.
[…] Goods and services that are potentially tradable are indexed by time, place and state of nature or state of the world. Time is a continuous variable, meaning that for complete markets along the time dimension alone, there would have to be rather more markets for future delivery (infinitely many in any time interval, no matter how small) than you can shake a stick at. Location likewise is a continuous variable in a 3-dimensional space. Again rather too many markets. Add uncertainty (states of nature or states of the world), never mind private or asymmetric information, and ‘too many potential markets’, if I may ruin the wonderful quote from Amadeus attributed to Emperor Joseph II, comes to mind. If any market takes a finite amount of resources (however small) to function, complete markets would exhaust the resources of the universe.
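Buiter’s point about market-operating costs is easy to make concrete with a back-of-the-envelope count. Here is a toy sketch of my own (not from the essay), with entirely arbitrary grid sizes — and this is after crudely discretising dimensions he treats as continuous:

```python
# Toy count of contingent-claim markets after discretising the
# time, place, and state-of-the-world dimensions. Every grid size
# below is an arbitrary assumption for illustration.

n_goods = 1_000        # distinct goods and services
n_dates = 52 * 50      # weekly delivery dates over 50 years
n_places = 10_000      # coarse grid of delivery locations
n_states = 2 ** 20     # states of the world from just 20 binary risk factors

# One market per (good, date, place, state) combination
markets = n_goods * n_dates * n_places * n_states
print(f"{markets:.3e} markets")  # on the order of 10**16
```

Even with these absurdly coarse grids the count is in the tens of quadrillions; making any dimension finer, or adding more risk factors, multiplies it further. If each market costs even a trivial amount to run, the point about exhausting resources follows immediately.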
On efficient markets:
In financial markets, and in asset markets, real and financial, in general, today’s asset price depends on the view market participants take of the likely future behaviour of asset prices. If today’s asset price depends on today’s anticipation of tomorrow’s price, and tomorrow’s price likewise depends on tomorrow’s expectation of the price the day after tomorrow, etc. ad nauseam, it is clear that today’s asset price depends in part on today’s anticipation of asset prices arbitrarily far into the future. Since there is no obvious finite terminal date for the universe (few macroeconomists study cosmology in their spare time), most economic models with rational asset pricing imply that today’s price depends in part on today’s anticipation of the asset price in the infinitely remote future.
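In symbols — my own gloss, using the standard rational-pricing recursion rather than anything in the essay — with a constant discount rate $r$ and dividends $d$, today’s price solves

```latex
p_t = \frac{E_t\left[d_{t+1} + p_{t+1}\right]}{1+r}
    = \sum_{s=1}^{T} \frac{E_t\left[d_{t+s}\right]}{(1+r)^s}
      + \frac{E_t\left[p_{t+T}\right]}{(1+r)^T},
```

where the second equality comes from substituting the recursion forward $T$ times. The last term is precisely today’s anticipation of the asset price $T$ periods out; letting $T \to \infty$ puts the “infinitely remote future” explicitly into today’s price.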
What can we say about the terminal behaviour of asset price expectations? The tools and techniques of dynamic mathematical optimisation imply that, when a mathematical programmer computes an optimal programme for some constrained dynamic optimisation problem he is trying to solve, it is a requirement of optimality that the influence of the infinitely distant future on the programmer’s criterion function today be zero.
And then a small miracle happens. An optimality criterion from a mathematical dynamic optimisation approach is transplanted, lock, stock and barrel to the behaviour of long-term price expectations in a decentralised market economy. In the mathematical programming exercise it is clear where the terminal boundary condition in question comes from. The terminal boundary condition that the influence of the infinitely distant future on asset prices today vanishes, is a ‘transversality condition’ that is part of the necessary and sufficient conditions for an optimum. But in a decentralised market economy there is no mathematical programmer imposing the terminal boundary conditions to make sure everything will be all right.
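The condition being transplanted can be written in a line (my notation, not the essay’s). For a price satisfying the rational-pricing recursion $p_t = E_t[d_{t+1} + p_{t+1}]/(1+r)$, the transversality condition is

```latex
\lim_{T \to \infty} \frac{E_t\left[p_{t+T}\right]}{(1+r)^T} = 0,
```

which kills the bubble term and leaves only the discounted dividend stream, $p_t = \sum_{s=1}^{\infty} E_t[d_{t+s}]/(1+r)^s$. In the programmer’s optimisation problem this limit is a necessary condition for an optimum; in a decentralised market economy, as Buiter says, nothing enforces it.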
On linearization in modern macro:
If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave. So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved. This was achieved by completely stripping the model of its non-linearities and by achieving the transubstantiation of complex convolutions of random variables and non-linear mappings into well-behaved additive stochastic disturbances.
Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc. will appreciate what is lost by this castration of the macroeconomic models. Threshold effects, critical mass, tipping points, non-linear accelerators - they are all out of the window. Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive.
Technically, the non-linear stochastic dynamic models were linearised (often log-linearised) at a deterministic (non-stochastic) steady state. The analysis was further restricted by only considering forms of randomness that would become trivially small in the neighbourhood of the deterministic steady state. Linear models with additive random shocks we can handle - almost!
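To see what “linearised at a deterministic steady state” means mechanically, here is a toy sketch of my own — a textbook Solow-style capital accumulation rule, nothing from the essay:

```python
# Non-linear law of motion for capital:
#   k_{t+1} = f(k_t) = s * k_t**alpha + (1 - delta) * k_t
# Parameters are arbitrary illustrative choices.
s, alpha, delta = 0.3, 0.33, 0.1

def f(k):
    return s * k**alpha + (1 - delta) * k

# Deterministic steady state: k* = f(k*)  =>  k* = (s/delta)**(1/(1-alpha))
k_star = (s / delta) ** (1 / (1 - alpha))

# Linearisation: replace f by its tangent line at k*.
# All of the model's curvature is thrown away at this step.
fprime = s * alpha * k_star ** (alpha - 1) + (1 - delta)

def f_lin(k):
    return k_star + fprime * (k - k_star)

# Near k* the linearised model is a fine approximation; far from the
# steady state (exactly where thresholds and tipping points would
# live) the two laws of motion disagree badly.
for k0 in (0.9 * k_star, 0.1 * k_star):
    print(k0, f(k0) - f_lin(k0))
```

The approximation error is tiny just below the steady state and large far from it, which is the castration Buiter complains about: anything the model might have said about dynamics away from the steady state is discarded by construction.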
Even this was not quite enough to get going, however. As pointed out earlier, models with forward-looking (rational) expectations of asset prices will be driven not just by conventional, engineering-type dynamic processes where the past drives the present and the future, but also in part by past and present anticipations of the future. When you linearise a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner. There is no ‘bounded instability’ in such models. The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearised model, the explosive solution trajectories. What they were left with was something that, following an exogenous random disturbance, would return to the deterministic steady state pretty smartly. No L-shaped recessions. No processes of cumulative causation and bounded but persistent decline or expansion. Just nice V-shaped recessions.
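The “strongly stabilising or relentlessly explosive” dichotomy is easy to see in the simplest possible linear model. A toy sketch of mine, not Buiter’s, using an AR(1) hit by a single unit shock:

```python
# In a linear model x_t = rho * x_{t-1} + eps_t, the response to a
# shock is either geometric decay back to the steady state (|rho| < 1)
# or unbounded explosion (|rho| > 1). There is no middle ground: no
# bounded-but-persistent drift, no tipping into a new regime.

def simulate(rho, T=200):
    x, path = 0.0, []
    for t in range(T):
        eps = 1.0 if t == 0 else 0.0  # one deterministic unit shock at t = 0
        x = rho * x + eps
        path.append(x)
    return path

stable = simulate(0.9)     # V-shaped: the shock dies out geometrically
explosive = simulate(1.1)  # the trajectory the DSGE crowd rules out a priori

print(stable[-1], explosive[-1])
```

After 200 periods the stable path has decayed to essentially zero while the explosive one has grown without bound. Having assumed a model class in which those are the only two options, ruling out the explosive branch leaves V-shaped recessions as the only possible outcome - which is Buiter’s complaint.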