Wednesday, March 30, 2011

Why “the Green Paradox” is (probably) nonsense

The “Green Paradox” is a phrase coined by economist Hans-Werner Sinn, and it refers to the idea that fossil fuel producers will shift their extraction to earlier periods to avoid future (more stringent) climate policies. The idea seems to be that fossil fuel resource owners see themselves as holding an exhaustible resource, and that they have optimized à la Hotelling’s rule: The resource owners can pump an extra unit of oil today and put the money in the bank, or keep it in the ground and sell it at tomorrow’s price. They equalize these two alternative uses of the oil, and design an extraction path that gives an expected price increase equal to the rate of interest.
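As a toy illustration of the rule (the initial price and interest rate are numbers I made up, not estimates of anything), the implied indifference between pumping now and pumping later looks like this:

```python
# Toy illustration of Hotelling's rule: along the optimal extraction
# path the resource price rises at the rate of interest, so the owner
# is indifferent between selling a unit today (and banking the money)
# and selling it next period. All numbers are made up.

def hotelling_path(p0, r, years):
    """Price path p_t = p0 * (1 + r)**t implied by Hotelling's rule."""
    return [p0 * (1 + r) ** t for t in range(years + 1)]

prices = hotelling_path(p0=100.0, r=0.05, years=3)

# Selling a unit today and banking the proceeds yields the same payoff
# as leaving it in the ground and selling it next year:
for t in range(3):
    assert abs(prices[t] * 1.05 - prices[t + 1]) < 1e-9

print([round(p, 2) for p in prices])  # [100.0, 105.0, 110.25, 115.76]
```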
The Green Paradox follows quite simply from this model: Assume that carbon taxes are credibly announced for tomorrow. This depresses future demand and thus future price, and the resource owners extract more today – until today’s price + rate of interest is again equal to tomorrow’s price (after the carbon tax). Same thing if the world’s governments invest in environmental tech R&D or effective demand reduction/energy efficiency initiatives – if this is thought to have a significant impact on the future costs of green energy or demand for fossil fuels, it will decrease the future oil price and cause resource owners to extract more today – thus undoing, or at least reducing, the impact of the policy.
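The mechanism shows up in a minimal two-period sketch (the linear demand form and all parameter values are my own made-up illustration, not anything from Sinn): with inverse demand p_t = a − b·q_t, total stock S, and a carbon tax τ announced for period 2, the owner extracts until the banked period-1 price equals the net period-2 price, p1(1+r) = p2 − τ. Solving for period-1 extraction shows it rising in the announced tax:

```python
# Two-period Green Paradox sketch. Inverse demand p_t = a - b*q_t in
# each period, total stock S, interest rate r, and a carbon tax tau on
# period-2 sales. Arbitrage condition: p1*(1 + r) = p2 - tau.
# Parameter values are made up for illustration.

def period1_extraction(a, b, S, r, tau):
    """Solve p1*(1+r) = p2 - tau with p_t = a - b*q_t and q1 + q2 = S."""
    return (a * r + b * S + tau) / (b * (2 + r))

q1_no_tax = period1_extraction(a=100, b=1, S=50, r=0.05, tau=0)
q1_with_tax = period1_extraction(a=100, b=1, S=50, r=0.05, tau=10)

# A credibly announced future tax shifts extraction toward the present:
assert q1_with_tax > q1_no_tax
print(round(q1_no_tax, 2), round(q1_with_tax, 2))  # 26.83 31.71
```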
I should admit I’m not really into this area of economics, but five minutes of googling suggests that this is mainly a theoretical concept used in a number of formal models (even when they seem to be asking an empirical question, as here). My skepticism, however, is related to whether this is a mechanism worth taking seriously as an important empirical phenomenon. It’s a subtle mechanism with a lot of moving parts, and each of them, it seems to me, has to work for the mechanism to actually matter: There’s a lag of several years from the time you start building a new well to the point when the new oil comes online. The “early period” you shift your extraction into therefore has to be at least a decade or so, unless we assume a lot of spare capacity lying around. That means the oil price forecasts that affect your extraction rate have to look 10–30 years into the future. You need to believe in your forecasts sufficiently to invest substantial amounts in infrastructure on the basis of them, and these forecasts need to be sufficiently detailed that they can assess the likely effect on market prices of future regulation that is, in itself, of uncertain type and uncertain magnitude.
What makes me sceptical of this? To begin with, oil prices are not very predictable – neither oil futures, expert opinions, nor the various modelling approaches that have been tried are any good at predicting even short-term changes in the oil price (up to a year, which is short term in our context). From an article in the Journal of Applied Econometrics:
We conduct a systematic evaluation of the out-of-sample predictive accuracy of oil futures-based forecasts and of a broader set of oil price forecasting approaches based on the forecast evaluation period 1991.1–2007.2. A robust finding across all horizons from 1 month to 12 months is that the no-change forecast tends to be more accurate than forecasts based on other econometric models and much more accurate than professional survey forecasts of the price of crude oil. Likewise, both forecasts based on oil futures prices and forecasts based on the oil futures spread tend to be less accurate than the no-change forecast under standard loss functions including quadratic and absolute loss. They also are more biased than the no-change forecast.
The result that oil futures prices fail to improve on the accuracy of simple no-change forecasts contradicts widely held views among policymakers and financial analysts. It also differs from some earlier empirical results in the academic literature based on shorter samples.
If the current oil futures price perfectly predicted the future price of oil, then a plot of the 12-month futures price would be an exact replica of the actual price shifted one year to the left. In other words, it would look like this (annual data – 2009 dollars – source BP energy review 2010):
In reality, the article shows that this diagram looks quite different – it seems (based on a very off-the-cuff and cursory look) that it is rare indeed that the futures price pre-dates the actual price to any impressive extent. The article argues that even the sign of the “change-one-year-down-the-road” estimated from the futures price is no better than flipping a coin.
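The forecast-evaluation exercise in the quoted article is easy to replicate in spirit. On a simulated random-walk price series (purely illustrative, not oil data), the no-change forecast beats a naive momentum forecast that extrapolates the latest change:

```python
# Sketch of the kind of forecast comparison in the quoted article, on
# simulated data: a random-walk "price" series is the textbook case
# where the no-change forecast is optimal. Numbers are made up.
import random

random.seed(42)

prices = [50.0]
for _ in range(200):
    prices.append(max(1.0, prices[-1] + random.gauss(0, 5)))

def mse(forecasts, actuals):
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

actual = prices[2:]
no_change = prices[1:-1]  # forecast of p[t+1] is simply p[t]
momentum = [2 * prices[t] - prices[t - 1] for t in range(1, len(prices) - 1)]

print(f"no-change MSE: {mse(no_change, actual):.1f}")
print(f"momentum  MSE: {mse(momentum, actual):.1f}")
```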
When markets are crap at estimating prices one year down the road, the likelihood that they can predict well on a ten year basis seems rather slim – especially when we take into account how off experts have been. Consider US President Jimmy Carter’s energy crisis speech to the US nation from 1977:
The oil and natural gas we rely on for 75 percent of our energy are running out. In spite of increased effort, domestic production has been dropping steadily at about six percent a year. Imports have doubled in the last five years. Our nation's independence of economic and political action is becoming increasingly constrained. Unless profound changes are made to lower oil consumption, we now believe that early in the 1980s the world will be demanding more oil than it can produce.
Each new inventory of world oil reserves has been more disturbing than the last. World oil production can probably keep going up for another six or eight years. But some time in the 1980s it can't go up much more. Demand will overtake production. We have no choice about that.
In reality, oil consumption dipped slightly as OPEC dug in their heels, and then continued growing:
Not only that, but prices collapsed from late-70s levels and stayed (relatively) low until closer to 2000:
Investment in new oil rigs is actually very sensitive to the current price, which seems odd: With the swings observed above, long-term forecasts should be more independent of the current price (or?).
To recap: Current investment is mostly related to the current spot price, and the current price is largely unforeseen by futures prices and experts. Moreover, I think we all would have heard if the collapse of the Copenhagen climate process in 2009, or the Climategate e-mails, etc., had provided large and persistent shocks to the oil price (current or expected) and caused radical changes in the investment levels of the oil industry. The Green Paradox is a nifty theoretical curiosity, but I would be sceptical of anyone pushing it as an important practical policy issue.
As usual – I’m open to being wrong (but probably psychologically resistant to being proved wrong all the same ;-) .

Addendum: Svenn Jensen pointed out that if the intertemporal extraction decisions are factored into futures prices, this will complicate things. True. Will think about that some other time – at least, unexpected info that made future climate policies less likely would cause immediate price increases as producers saved the oil in the ground for the now “greener” (in dollar terms, for them) future.

Tuesday, March 29, 2011

Yes, economists are influenced by incentives, but that doesn’t mean they’re amoral, intellectual guns-for-hire

Recently saw “Inside Job,” the 2010 documentary about the financial crisis. The director, Charles Ferguson, raises a valid point about the financial ties between some of the deregulation-happy economists and the financial sector (see Ferguson’s written piece on the topic here) – but at times he seems to imply that the economists have simply been paid to take (and argue for) the policy positions they hold. In his written piece, he comments on

John Campbell, chairman of Harvard's economics department, who finds it very difficult to explain why conflicts of interest in economics should not concern us.

But could he be right? Are these professors simply being paid to say what they would otherwise say anyway? Unlikely. Mishkin and Portes showed no interest whatever in Iceland until they were paid to do so, and they got it totally wrong. Nor do all these professors seem to make policy statements contrary to the financial interests of their clients. Even more telling, they uniformly oppose disclosure of their financial relationships.

My point is not that the guys he discusses in the film (and the written piece) are blameless, but I think these things probably work in less explicit terms. They’re not intellectual prostitutes with a clear price-tag for taking the position you want to see them in. If you proposed explicit payment for their services they would feel horrified. However, they have a crush-at-a-distance to begin with, and as you wine and dine them, shower them with gifts and praise their opinions of you, they feel valued and vindicated and ever-more-certain that their view of you was correct.

This is not really rocket science: You have an economist who writes on the financial sector and regulation. He is favorable to deregulated, free markets. You invite him to your financial firm, pay him well, and he meets lots of intelligent, smart, well-dressed and hard-nosed businessmen who tell him that he’s “spot on,” that he’s “really understood things,” and who tell him anecdotes and stories that support and strengthen his views. Of course he’ll end up feeling even more strongly that he was correct (“Heck, I’m the guy who’s so insightful on reality that the guys on the ground want to hear my perspectives and pay handsomely for it!”), and of course you’ll enjoy his company and invite him around again (“Heck, he’s the guy who sees our value and understands our potential!”), and of course you’ll both feel that you have found a supportive and understanding and clear-sighted partner.

Now, economists are frequently happy to talk about how everyone is influenced by incentives – and if people disagree, economists will explain how this may be a “hidden motivation,” or how it may nudge us in un-noticed ways. Same here. There is no need to make it into a story of evil-doers wringing their hands and counting their cash as they cynically concoct models to support things they don’t believe in. Reciprocity, vanity, social proof… this is the sort of thing that happens every day.

Monday, March 28, 2011

Was economics (more of a science) 30 years ago?

Krugman recently made the claim (noted here) that economics was more scientific three decades ago. Here’s a quote about labor economics with the completely opposite sentiment. This one’s from Richard B. Freeman (“What Do Unions Do?”, Journal of Labor Research, Vol. XXVI (4)), who some years ago wrote about the labor economics of the eighties. According to him, this was a period when

[…] structural modeling, usually of labor supply behavior, was the vogue among labor economists. To those lucky enough to have missed this phase of research, structural modeling meant developing sophisticated models of optimizing behavior and then estimating those models with as much econometric sophistication as possible on data that was rarely rich enough to yield clear conclusions nor to illuminate real behavior. Little time was given to finding the pseudo experiments or valid instruments that might truly identify responses to economic incentives. Since none of the modelers knew the right structure and most dismissed behavioral economics as outside the space of economic investigation, this research taught us more about modeling than about the world. My view then and now is that this approach does not add greatly to our knowledge of economies.

Friday, March 25, 2011

Impressive accomplishments in economics #321

From Mother Jones (Kevin Drum)

One of the most impressive accomplishments of the modern right is its ability to generate plausible technical papers to justify conservative tropes that are basically ridiculous.

The (or “A”?) point of top economics journals

I doubt that there is a strong and decisive case in terms of “facilitating research progress” for having different tiers of journals (top, middle, specialty, etc.). There doesn’t seem to be much hard, empirical evidence that peer review provides a strong and credible signal of research quality/importance. However, as noted by a blog at The Economist, some of the other reasons for having top journals may be important. They quote Cheap Talk:

“The full return to a 10-AEQ-page article in the top journal is thus estimated to be a 3.8 percent increase in salary.” (AEQ means the article is adjusted for page size to correspond to AER page length.)

And in an e-mail to The Economist, Mark Thoma claims that the “separating the wheat from the chaff for overwhelmed readers” argument doesn’t actually hold:

Nobody reads the journals themselves much anymore, but where a paper hits is critical for promotion decisions. That's where the pecking order is established. The sciences are trying to break out of this, to some extent, but econ is a long ways from doing that.

Wednesday, March 23, 2011

When I say randomize my playlist I mean don’t…

I’m reading Future Babble by Dan Gardner, and came upon a Steve Jobs quote. A quick google suggests that this is the earliest version of the quote on the net:

I read Alex’s Adventures in Numberland, by Alex Bellos, on holiday. […]  Bellos has a wonderful quotation on p324 from Steve Jobs, chief executive of Apple, about a change made to the shuffle function on iTunes and the iPod: “We’re making it less random to make it feel more random.” Bellos compares an imaginary sequence of heads and tails with a real one and shows how we expect randomness to be more changeable than it is. Apple had the same problem with people complaining (I’ve done it myself) that shuffle gave them runs of the same artist.

Tuesday, March 22, 2011

Is economics a science? #2

Continuing my way through Brad DeLong’s “highlights” post, the next up is Krugman, who comments on Ozimek that, yes, the profession rejected the Phillips curve because of empirical evidence, but it should also have rejected real business cycle theory, which is still accepted. Krugman concludes that

The point is that while economics certainly did have some of the characteristics of a science three decades ago, you can make a good case that significant parts of the field have lost those characteristics since then.

This seems weak to me, at least until Krugman provides the “good case” he says can be made. Otherwise, it sounds a bit like saying “economists in general seemed to be more in agreement with me 30 years ago, so we were more scientific back then.” My impression is that the level of empirical rigor and discourse in at least parts of economics is higher than before. And, though I think parts of “behavioral economics” are a bit too much like “let’s put some behavioral lipstick on the rational choice pig,” it has at least made more economists agree that there are alternative models to the “standard” model that can be just as deserving of interest.

Also, I’d be interested in knowing what these guys mean by “scientific”. Is it “not completely 100% closed off against all possibilities of empirical falsification”?

Monday, March 21, 2011

Do “Tiger Moms” shift equilibrium of parenting strategies?

Read “The Ivy Delusion – the real reason the good mothers are so rattled by Amy Chua” in The Atlantic, which discussed and tried to explain the reactions of many parents to the Yale Law Professor’s parenting memoir “Battle Hymn of the Tiger Mother.” Or rather, I would guess, their reactions to the provocative collection of quotes and passages published in the Wall Street Journal under her name (but which she herself claims misrepresents her work).

Anyway, the reason I mention this is that the explanation proposed for the backlash in The Atlantic seems particularly amenable to economic modeling (whether or not it is correct):

  • Parents have preferences over their kids’ future material and professional success and over their current (and to some extent future) welfare, and there is a trade-off between these. The more “Tiger Mom” you go on your kids (no playdates, piled-on homework, extracurricular academic and artistic activities, etc.), the more you raise their chance of future success, but the lower their current welfare (and, to some extent, the higher the risk of future breakdowns).
  • The (primarily Asian) “Tiger Moms” place a much lower emphasis on welfare relative to success according to the author.
  • As a consequence, once discrimination was reduced, Asian kids came to dominate the merit-based slots at the top universities. This involves an externality, in the sense that by driving their own kids hard, they raise the bar other kids must clear to get into the top schools. (This is important – unless it is relative performance that matters to the parents, the mechanism breaks down.)
  • This reduces the success of the practices followed by the so-called “good moms” of the article (a term used somewhat ironically or sarcastically, it seems to me), and pushes them to shift towards a stricter parenting style so that their kids do not lose out in the academic race – a change they dislike relative to the previous situation.
  • The “good moms” then react by trying to get “everyone” to refrain from extreme Tiger Mom parenting – the goal: To reduce Tiger Mom behavior to the point where their own kids again are at the top of the performance distribution and get into the top schools.

In brief: As discrimination is reduced, Asian kids rise to the top, other middle-class parents see their kids left behind and consequently struggle to discredit and reduce the practices that make them lose out. The underlying mechanism: In a meritocracy where effort and time-consuming practice and discipline are a key determinant of your performance, the top spots go to those willing to pay the heaviest price – and those who are not willing to do so will want to change the game to rig it in their favor.
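A toy version of this model (all functional forms and numbers are my own, purely to illustrate the mechanism above): one top-school slot goes to the kid whose parent pushes harder, pushing costs child welfare, and a parent’s best response is to just outbid the rival while the slot is still worth the cost. A low-cost “Tiger Mom” rival then drags effort up, until opting out becomes the rational choice:

```python
# Toy rank-order contest for one top-school slot, won by the child
# whose parent exerts more "pushing" effort. The slot is worth 1 to the
# parent; effort costs child welfare at rate c per unit, and tiger moms
# have a lower c. All functional forms and numbers are made up.

EPS = 0.01  # smallest increment by which a parent can outbid a rival

def best_response(e_rival, c):
    """Outbid the rival only if the slot is still worth the welfare cost."""
    e_needed = round(e_rival + EPS, 2)
    return e_needed if 1 - c * e_needed > 0 else 0.0

# A "good mom" (c = 0.6) facing a relaxed rival exerting 0.2:
assert best_response(0.2, 0.6) == 0.21
# The same mom facing a tiger mom exerting 1.2 is dragged along:
assert best_response(1.2, 0.6) == 1.21
# Facing effort 1.8, matching costs more than the slot is worth, so
# she opts out of the race entirely:
assert best_response(1.8, 0.6) == 0.0
```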

The thesis of the film [Race to Nowhere, which “good moms” arrange screenings of], echoed by an array of parents and experts, is that we can change the experience and reduce the stress and produce happier kids, so long as we all work together on the problem. This is the critical factor, it seems, the one thing on which all voices are in concert: no parent can do this alone; everyone has to agree to change. But of course parents can do this individually. By limiting the number of advanced courses and extracurricular classes a child takes, and by imposing bedtimes no matter what the effect on the GPA, they will immediately solve the problem of stress and exhaustion. It’s what I like to call the Rutgers Solution. If you make the decision—and tell your child about it early on—that you totally support her, you’re wildly engaged with her intellectual pursuits, but you will not pay for her to attend any college except Rutgers, everything will fall into place. She’ll take AP calculus if she’s excited by the challenge, max out at trig if not. It doesn’t matter, either way—Hello, New Brunswick!

But the good mothers will never do that, because when they talk about the soul-crushing race to nowhere, the “nowhere” they’re really talking about (more or less) is Rutgers. And more to the point, while you’re busily getting your child’s life back on track, Amy Chua and her daughters aren’t blinking.

Is economics a science? #1

I see that many have been discussing this question lately, but from the contributions I’ve seen it is difficult to know what they mean by the question.

My entrypoint to the discussion was Brad DeLong’s “highlights” post. Here’s some arguments from Adam Ozimek at Modeled Behavior:

if you’re going to hold economics research to an extremely high burden of proof, then you should be prepared to subject all of your beliefs to such standards. What this will leave you with is mostly weak beliefs about the world for a lot of stuff that matters to you, whether it be about medicine, history, biology, psychology, criminal justice, climate science, or economics. Maybe widespread weak beliefs are a better approximation of the truth, I don’t know, but I do know very few people do or are willing to reason like that consistently. Maybe they should. But even here the vast majority of humanity has more belief changing to do than economists.

I’m puzzled by this claim that economics research is held to an extremely high burden of proof. Is it? My impression is that the burden of proof required to make claims about reality differs strongly with the sub-discipline of economics you are looking at. Some of them are happy to whip up a stylized model consistent with “standard assumptions” (perfectly clearing markets, intertemporally optimizing rational consumers, etc.), tweak it until it reproduces some “stylized facts,” and explain that this has strong policy implications and that the phenomenon has been satisfactorily explained. By saying that I find this type of economics about as convincing as homeopathy (which I find almost as convincing as its remaining concentration of active ingredients), am I holding economics research to an “extremely high burden of proof”? I would argue that nobody should find such work convincing, because it uses an inappropriate strategy for justifying the empirical claims it wants to make. This is not a criticism of “economics,” it is a criticism of silly methods. I would be just as sceptical of intricate social causal mechanisms from sociology that are based on high-flying conceptual distinctions and nuanced readings of past theorists with cursory mentions of some more or less representative facts about reality.

The reason I don’t see this as a criticism of economics is that there are many economists and parts of economics that are different: Researchers who are happy to get into long and serious discussions regarding data quality, alternative competing hypotheses, identification strategies for isolating causal effects, remaining weaknesses in their work, alternative interpretations – with more tentative and carefully worded policy recommendations coming out of it.

It can oftentimes be difficult to see the scientific process at work in economics, as in other fields. Sometimes we are stuck at impasses where we are left with little more than theory to guide us, and sometimes empiricism is limited to testing particular model parameters, and ultimately our confidence should be limited by this. And sometimes what looks like pointless or tautological theorizing is really theorists attempting to build tools and lay groundwork for empiricists. It’s easy to look at some of this and think it un-scientific, but not all steps of the scientific process look like science.

Yes, “ultimately our confidence should be limited by this.” But too often, in my experience, this is not the case: Economists do not temper their confidence in this way. That’s the problem, and that’s the “non-scientific” part of economics (in my opinion).

Another question is, if economics weren’t a science, then would previous paradigms so have been done in by empirical outcomes? The old Keynesian Phillips Curve held that there was a tradeoff between inflation and unemployment. When that relationship broke down during the stagflation of the 70s, the Phillips Curve was invalidated, and this helped shift macro away from old Keynesianism and towards the new classical paradigm. Real Business Cycle models of the 80s were also invalidated by reality: it was clear that money mattered, and in the real world it was hard to find technology shocks to explain actual recessions.

If the global federation of astrologists agreed that some rare astronomical event did not have the effect they believed it would have (because it occurred and not everyone born at that time became supersmart or had blue eyes or whatever) – that would not turn astrology into a science. Saying that one change of consensus due to empirical observations is sufficient to make something scientific is silly.

Saturday, March 19, 2011

Biases in science–a typology

The Mentaculus has a nice list with good explanations/definitions of different types of biases, based on a recent article by David Chavalarias and John Ioannidis. I haven’t read the underlying article, but the list provides a nice summary of potential biases of very different sorts that affect different parts of the scientific process and can help lead a field to report misleading results. Below, I’ve taken the items from that list and merely shuffled them into a set of categories reflecting different parts of the research process:

  • Deciding on design of study/statistical model
    • confounding bias = when you think you are measuring the effect of variable X on variable Y, but in reality there is another variable Z that correlates with X and also affects Y, which you haven't considered. 
  • Deciding on who to collect data from
    • selection bias = when you think that all the various sub-groups of the population are proportionally just as likely to be in your sample, but in reality certain groups are more likely to be present than proportional, because of the way you collect your data.
    • sampling bias = when you think your sample is representative of the population, but really it is not, because it is skewed in ethnicity, attractiveness, age, gender, and/or etc, casting doubt on your generalizations from the sample to the population. (this is actually a sub-category of selection bias, with the distinction of external vs internal validity that sounds cool but also troublesomely postmodern)
  • Evaluating data quality
    • response bias = when respondents answer your questions in the way they think you want them to answer, rather than according to their true beliefs; this could also happen in animal research if you reward animals for responding in a certain way outside of the main test.
    • recall bias = when respondents are more likely to remember the content of your question if they hold a certain belief on it.
  • Researcher’s beliefs
    • attention bias = when you focus only on data that supports your hypothesis and ignore data that would make your hypothesis less likely.
    • publication bias = when you are more likely to publish or tell others about your results if they 1) conform to what you expect, or 2) are what you think others would prefer to hear.

Seems to me there would be others as well that might be worth considering, such as (off the top of my head)

  • Biased research questions – Let us say you are for or against some activity that (like most things) has positive and negative aspects (e.g. tobacco smoking, climate regulation, free trade, cannabis use, the internet). You narrow down and specify your research problem and outcome measure so as to only pick up on effects that go in your preferred direction, and present it as though it is a comprehensive or broad outcome measure or representation of the problem.
  • Biased analysis – when you (due to presumably common psychological mechanisms) try new model specifications when you are “not satisfied” (e.g., don’t get the results you like) and keep on running new regressions, new models, new methods until you get significant effects in your desired direction.
  • Publication bias type 2 – when editors and referees impose different burdens of proof depending on whether they agree with a piece of research or not. (This is particularly problematic when there is a lack of consensus in the discipline; if everyone gets results in one direction and a new submission doesn’t, it makes sense to ask for extraordinary evidence for extraordinary claims.)
  • Biased data – if the data was collected for some reason other than research and then employed for research, this may have affected the incentives the ultimate source had for reporting truthfully. E.g., tax reports may underestimate sources of income that are easy to shield from the IRS. Some parts of administrative forms are filled out simply because “they have to be filled out,” even though no one uses the information much – making them of poor quality.
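The “biased analysis” item is the easiest to see in a quick simulation (made-up, pure-noise data): if you keep trying specifications until one comes out “significant,” the share of pure-noise studies reporting a finding ends up far above the nominal 5 percent:

```python
# Simulation of "biased analysis": regress a pure-noise outcome on one
# pure-noise regressor after another, stopping at the first
# "significant" result. Sample sizes and cutoffs are made up.
import random
import statistics

random.seed(1)

def correlation(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def spurious_hit(n_specs=20, n_obs=30):
    """Return True if any of n_specs unrelated regressors 'works'."""
    y = [random.gauss(0, 1) for _ in range(n_obs)]
    for _ in range(n_specs):
        x = [random.gauss(0, 1) for _ in range(n_obs)]
        r = correlation(x, y)
        t = r * ((n_obs - 2) ** 0.5) / ((1 - r * r) ** 0.5)
        if abs(t) > 2:  # crude |t| > 2 "significance" check
            return True  # stop searching: we found our "result"
    return False

hits = sum(spurious_hit() for _ in range(500))
print(f"{100 * hits / 500:.0f}% of pure-noise studies found an 'effect'")
```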

Wednesday, March 16, 2011

Does peer review ensure quality?

I’ve written (somewhat unsystematically) on peer review and academic journals lately, especially on the question of why we have a hierarchy of journals (top journals, mid-tier, etc.). I suggested some possible justifications for this, but they all rest on the assumption that editors and referees are good at estimating the “quality” of research (in the sense of long-term importance). A reader suggested that having top journals could motivate researchers to do better work than otherwise, but this too requires that it is a relevant sense of scientific quality that determines acceptance in top journals. (This benefit mainly comes about if this system identifies and “marks” quality better than alternative systems focused on citation rates etc. Otherwise, it is mostly an early (noisy) estimate of long-term importance that allows you to reap an expected status gain earlier than otherwise.)

Anyway – here’s some stuff I’ve gathered lately on the quality of refereeing. Mostly, this comes from bloggers sceptical of the current system – please add more positive stuff if you know of it.

First, from Cameron Neylon at Science in the Open

what evidence we do have shows almost universally that peer review is a waste of time and resources and that it really doesn’t achieve very much at all. It doesn’t effectively guarantee accuracy, it fails dismally at predicting importance, and its not really supporting any effective filtering.  If I appeal to authority I’ll go for one with some domain credibility, lets say the Cochrane Reviews which conclude the summary of a study of peer review with “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.” Or perhaps Richard Smith, a previous editor of the British Medical Journal, who describes the quite terrifying ineffectiveness of referees in finding errors deliberately inserted into a paper. Smith’s article is a good entry into to the relevant literature as is a Research Information Network study that notably doesn’t address the issue of whether peer review of papers helps to maintain accuracy despite being broadly supportive of the use of peer review to award grants.

I (very briefly) looked at Neylon’s links (hope to go more in-depth another time), but the Cochrane Review mostly notes that there is no evidence that peer review raises quality. That is, it is more a lack of evidence either way than evidence in one direction. Smith’s article, on the other hand, is more aggressive:

If peer review is to be thought of primarily as a quality assurance method, then sadly we have lots of evidence of its failures. The pretentiously named medical literature is shot through with poor studies. John Ioannidis has shown how much of what is published is false [4]. The editors of ACP Journal Club search the 100 'top' medical journals for original scientific articles that are both scientifically sound and important for clinicians and find that it is less than 1% of the studies in most journals [5]. Many studies have shown that the standard of statistics in medical journals is very poor [6].


While Drummond Rennie writes in what might be the greatest sentence ever published in a medical journal: 'There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.'

(BTW: I would recommend this readable article, which provides a nice introduction to Ioannidis)

The usually interesting Robin Hanson noted a recent study showing that not much of the variability in referees’ ratings is explained by a tendency to agree (which tells you that the quality “signal”, if it is there at all, is very noisy):


The above is from their key figure, showing reliability estimates and confidence intervals for studies ordered by estimated reliability. The most accurate studies found the lowest reliabilities, clear evidence of a bias toward publishing studies that find high reliability. I recommend trusting only the most solid studies, which give the most pessimistic (<20%) estimates.
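
To make “reliability” a bit more concrete, here is a toy sketch of Cohen’s kappa, a standard chance-corrected measure of inter-rater agreement (all ratings below are invented for illustration – they are not taken from any of the studies Hanson cites). Kappa is 1 for perfect agreement and 0 when two referees agree no more often than chance would predict:

```python
# Toy sketch (invented data): Cohen's kappa for two hypothetical referees
# rating the same 10 manuscripts as accept (1) or reject (0).

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on binary ratings."""
    n = len(rater_a)
    # Observed fraction of manuscripts on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if each rater voted independently at their own base rate
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

ratings_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
ratings_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(ratings_a, ratings_b), 2))  # 0.0: chance-level agreement
```

Here the two referees agree on half the manuscripts, but since chance alone predicts exactly that much agreement, kappa comes out at zero – the kind of noisy “signal” the low-reliability estimates suggest.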

Some years ago, Hanson also uncovered a study from the “good old days,” when you were allowed to mislead study subjects if this was necessary to get valid results. The study showed that reviewers do not agree; moreover, by manipulating the conclusions to create manuscripts that were methodologically equivalent but supported different conclusions, it showed that referee opinions regarding manuscripts were

strongly biased against manuscripts which reported results contrary to their theoretical perspective.

Ideally, we should have an external measure of quality that is reasonably independent of “bandwagon” effects (e.g., if everyone wants to cite American Economic Review because it is the top journal, then citation rates of articles from AER would tend to reflect the status of the journal rather than the quality of the articles). Seems to me like this would be easier to find in a more decisively empirical discipline than economics (or sociology for that matter), where long-term citation rates might more credibly reflect empirically valid and important results and theories.

Tuesday, March 15, 2011

Why should we have “top journals” – are there good reasons?

In a previous post I listed some functions of a system of academic publishing. The ones relevant to the “progress of science” were:

    • Facilitate scientific progress, by

      • ensuring the quality of published research by weeding out work that is riddled with errors, poor methodology, etc., through anonymous peer review by relevant experts,
      • assessing/predicting the importance of research, and thus how “high up” in the journal hierarchy it should be published,
      • making research results broadly accessible so that disciplines can build their way brick-by-brick to greater truths,
      • promoting a convergence towards consensus by ensuring reproducibility of research and promoting academic dialogue and debate.

This got me thinking: Are there any good reasons at all for having a hierarchy of journals of different “quality” or importance? In asking this, it may be good to have a clear idea of the alternative we are comparing it to. I’m thinking of mega-journals such as PLoS ONE, which have referees evaluate whether the methods, arguments and evidence are sufficiently good to merit publication – and which then publish everything that, in the referees’ eyes, clears this “minimal” bar of purely scientific criteria. In other words, importance or “originality” does not enter into it.

Compared to this – what do we get from a hierarchy of journals? My first guess would be that they help answer questions such as:

  • “What is the key citation for this theory/hypothesis/claim?” – I would guess most researchers would prefer citing a “top journal” article over one from a lower-tier journal (ceteris paribus). This may be useful provided the first and/or best justifications are generally found in articles from top journals (i.e., if the quality sorting is good).
  • “What should I read/pay attention to? What are the most important recent research results/theoretical innovations/topics?” – Helping readers sort out the dross and focus on the choice bits of juicy, nutritious research. Again, this may be useful if the quality sorting is good.

Are there others? I realize top journals are important for other reasons (such as helping rank faculty), but are there other good and valid reasons such a system would help facilitate scientific progress?

Monday, March 14, 2011

Mistaking beauty for truth… again…

Olivier Blanchard has recently been quoted as saying that

"Before the crisis, we had converged on a beautiful construction" to explain how markets could protect themselves from harm, said Olivier Blanchard, an economics counselor at the International Monetary Fund. "But beauty is not synonymous with truth."

This isn’t that far from what Krugman said some years ago. “Mistaking beauty for truth” was the first section of his widely discussed 2009 essay “How did economists get it so wrong?” where he also wrote that

As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth. […] as memories of the Depression faded, economists fell back in love with the old, idealized vision of an economy in which rational individuals interact in perfect markets, this time gussied up with fancy equations. […] the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.

Is it just me or does this sound crazy? If this is true, isn’t it a damning judgment on the entire “scientific process” of economics? How can presumably sensible and unquestionably intelligent researchers searching for a reasonable way to examine the world converge on a beautiful construction that completely misleads them? Instead of leading economists stepping forth to profoundly state that “beauty is not truth,” couldn’t they instead put their weight behind something that would help avoid this happening again in the future? If physicists over time converged on (beautiful, elegant, rigorously formal) theories that made laboratories blow up and spew poisonous gas over entire cities, wouldn’t it make sense to question the kinds of evidence and arguments used to justify empirical claims in physics?

As “Zombie Economics” author John Quiggin has stated,

The prevailing emphasis on logical rigor has given economics an internal consistency that is missing in other social sciences. But there is little value in being consistently wrong. Economics must move on from the infinitely rational, farsighted and asocial beings whose decisions have been the central topic of analysis in recent decades.


Finally, with the collapse of yet another economic ‘New Era’ it is time for the economics profession to display some humility. More than two centuries after Adam Smith, economists have to admit the force of Socrates’ observation that ‘The wisest man is he who knows that he knows nothing’. While knowledge in the sense of absolute certainty may be unattainable, economists can still contribute to a better understanding of the strengths and weaknesses of markets, firms and other forms of economic organisation, and the possibilities for policy action to yield improved economic and social outcomes.

Thursday, March 3, 2011

Are economists more “conservative” because they dislike complexity?

There was a recent buzz about liberal bias in academia, covered (amongst other places) on the Freakonomics blog. I noted that there was an interesting correlation from the online dating service OkCupid: people’s stated preference for complex vs. simple people was a very strong predictor of whether they were conservative or liberal. I suggested that people who prefer complexity (e.g., Democrats/liberals) would also be more comfortable with and interested in research.

It turns out there are people who have been actively pursuing this hypothesis. Over on the Discover blog, Chris Mooney reports that his post on the topic triggered a response from a researcher (Everett Young), who stated (amongst other things)

If the psychological profile that produces curiosity and the desire to learn both makes one liberal and makes one more likely an academic, then its making one a scientist is barely in need of explanation.

and that

if you look at the political science literature over the last few decades, the burden of proof has shifted dramatically onto those who would deny a psychology-ideology link. Even without Jost, the evidence has grown into somewhat of a mountain. And Alford, et al.’s findings on genetics are only controversial insofar as people don’t like them. The evidence for a genetics-ideology link is also overpowering, even if we haven’t mapped out exactly how it happens.

This, however, raises another interesting question: Can we go backwards from the degree of “liberal overrepresentation” in a research discipline to an inference about the discipline’s complexity? Ongoing research by Daniel Klein shows that the liberal “overrepresentation” is weakest in economics (the table is copied from this paper):


Some might balk at my insinuation that economics is not “complex” (“Hey – that math is hard!”), but I would suggest that a lot of economic theory may be complicated without being complex. I’m thinking particularly of work that builds theory on standard rational choice foundations, in the spirit of Stigler and Becker’s “we-should-try-to-explain-everything-within-our-present-framework” De Gustibus article. The focus and goal there is to protect an already established way of thinking by fitting, squeezing and forcing everything into its mould, and to explain everything using only a small handful of principles that flow from the rational choice model.

Returning to Mooney, he relates Young’s point to something several discussants of his previous post had claimed:

many discussants cited a “traditionalism vs. openness/progress axis, in which liberals/scientists were depicted as being in search of the different and new (new findings, new experiences) whereas conservatives were painted as resistant to change and attracted to routines, stability, and long existing structures.”

This is what I’m asking: Could it be that a lot of mainstream economists are “resistant to change” in the standard theoretical framework, and that they are attracted to the long existing structures, stability and routine of explaining things within such models? (At times, such research seems almost like an exercise in re-labeling variables to make a model “apply” to a new field.) And does this imply that there should be a stronger liberal “slant” among researchers working in, say, behavioral economics, where the number of explanatory principles is larger and the ambiguity greater? Could this also be relevant to understanding the rational-choice-ad-absurdum of those schools of macroeconomics that seem the most consistently “hard-core” neoclassical and, simultaneously, the most sceptical of government intervention?

Speculation on top of speculation sprinkled with speculation. I’m not claiming any expertise here.

Wednesday, March 2, 2011

Book “review”: Why everyone (else) is a hypocrite

“Why everyone (else) is a hypocrite” by Robert Kurzban is written in extremely clear prose that made me envious of his ability to explain things in a simple yet precise way. It offers lots of (to me) new ways of thinking about and interpreting, for instance, cognitive dissonance, morality, procrastination, preference “instability” and time inconsistency.

Basically, the book presents the modular view of the mind, which sees the mind as consisting of a number of specialized modules (specialized information-processing mechanisms) linked together (or not) in various ways, and then discusses a number of implications this view has for psychology, economics, etc.

One idea I liked was that adding modules is not enough: linking them up to pass information to and receive information from other modules is work, and is only undertaken if it is worth it. (All quotes out of “actual” order.)

Evolution must act to connect modules, and it will only act to do so if the connection leads to better functioning.

As Nisbett and Wilson put it, “there may be little or no direct introspective access to higher order cognitive processes.” In other words, the cause of the decision in this case – whatever it was, and whether you want to call it “higher order” or anything else – is not available to the modules that are explaining the decision. The part of the mind that talks just doesn’t get the information from the decision-making modules.

The strong version of this argument is that some systems might be engineered specifically not to get information from (or send information to) other modules.

[…] an even more extreme version of this claim is that not only do some modules work better when they have less information, some might work better when they have wrong information.

As a result:

It’s a mistake to pay attention only to what comes out of the mouth when we’re trying to understand what’s in the mind, because there are many, many parts of the mind that can’t talk.

An example of this is

[…] “moral dumbfounding” […]. At least for some kinds of moral judgments, people can’t give good reasons for their views, though they often try quite hard.

An important implication is that there is no singular, unitary “you”: your consciousness is just one module amongst many, and it lacks information about many of the causal reasons for your actions. Indeed, it may be “deliberately” misinformed:

[…] communication is obviously useful for manipulating what others think in a way that works to one’s advantage, and many modular systems in the mind seem to be designed for this purpose. Indeed, I think there is some sense in which the part of you that feels like “you” is, more or less, designed to serve this public relations function.

This is an important argument in the book, as it is our connections to other people, our need to have status and be accepted and valued by others, that provides us-as-organisms with strategic motives for not consciously being aware of our true motives. As an example of this, Kurzban discusses some interesting experiments where participants seem to actively refrain from getting information that would reveal whether benefitting themselves would hurt others (thus giving them plausible deniability).

Having information – especially information others know you have – changes how your choices – and, consequently, actions – are evaluated by others because there is the reasonable sense that you now have a duty to act on that information.

As Roy Baumeister put it, “Self-presentation is . . . the result of a trade-off between favorability and plausibility.”

As Sedikides and Gregg recently put it, “self-enhancement occurs within the constraints imposed by rationality and reality.”

Along the way Kurzban also explains how the serious and scientific business of evolutionary psychology is ridiculed and rejected by researchers with competing explanations from other disciplines who don’t even have any relevant expertise in ev-psych. He also spends some time explaining how ridiculous explanations from competing disciplines can be easily rejected by him (and us) even without expertise in those areas.

Kurzban also makes (justified) fun of economists and their view of the person as a unitary thing with one, consistent set of preferences.

It turns out that people might not have “real” preferences in the same way people don’t have “real” beliefs.

Instead, we have a number of modules that aim for various goals, are activated by various cues, and interact to determine action. The outcome of this system is not likely to be a consistent, simple and “rational” set of preferences. This seems to me somewhat inconsistent with Kurzban’s later praise of the free market:

Markets leave buyers and sellers better off. The sellers now have money, which they value more than the item they just sold, and the buyers have something that they value more than the money they just paid. Everyone’s better off.

Everyone is better off. The beauty of the markets is that – I really think it’s worth repeating – everyone is better off.

[…] Markets are a great way to ensure that the people who want something the most get it.

And so on. Yeah, I get it, and mostly agree with it: Markets are good in many cases. But it seems weird (hypocritical) to argue this way after having written a whole book about how there is no unitary self whose interests can be easily identified, and about how evolution doesn’t care whether the actions and thoughts caused by your modules make your consciousness happy, and so on.

Anyway. Good book. Well written. Neat, elegant theories consistent (not surprisingly) with the evidence he’s chosen to include. To me, quite persuasive and insightful (although I feel that I’m too easily swayed and drawn in by evolutionary-psychology theories: evolution seems so self-evidently true, and that it has shaped human brains seems so obvious, that I first become very enthusiastic and then kind of suspicious of my own enthusiasm). And, who knows, the instances of hypocrisy in the book may even have been added as a subtle joke?

Tuesday, March 1, 2011

Is it OK to disregard Macroeconomists? - Robin Hanson versus Robin Hanson

I don’t always agree with Robin Hanson on the Overcoming Bias blog, but I (very) often find him interesting.

Recently, Vladimir M trashed macroeconomics for being a discipline dealing with ideologically charged questions and lacking clear research directions:

Even a casual inspection of the standards in this field shows clear symptoms of cargo-cult science: weaving complex and abstruse theories that can be made to predict everything and nothing, manipulating essentially meaningless numbers as if they were objectively measurable properties of the real world, experts with the most prestigious credentials dismissing each other as crackpots.

Hanson counters that

If ideology severely compromises others’ analysis on this subject, then most likely it severely compromises yours as well. You should mostly just avoid having opinions on the subject. But if you must have reliable opinions, average expert opinions are probably still your best bet. (Unless of course you have a prediction market available. :) )

My first thought was that this sounds like a tiresome and difficult task. In a field as polarized as macroeconomics, estimating “average expert opinions” is not a simple matter (all economists? all AEA members? all Nobel laureates? do we include the DSGE crowd, the neo-Keynesians, Austrian business cycle theorists, New Keynesians and institutionalists? only tenured professors – in the US or globally?). And the choice of population is important when views are polarized: changing the population you consider will have a big impact on the “average” opinion.
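
To see why the choice of population matters so much, here is a toy sketch. Every number and group label below is invented for illustration (opinions scored from -1, strongly against some policy, to +1, strongly for it); the point is only that the “average expert opinion” can flip depending on who counts as an expert:

```python
# Toy illustration (invented scores): the "average expert opinion"
# depends heavily on which experts we include in the population.

opinions = {
    "freshwater_macro": [-0.8, -0.6, -0.7],  # hypothetical groups and scores
    "saltwater_macro":  [0.6, 0.8, 0.7],
    "austrian":         [-0.9, -0.9],
    "institutionalist": [0.4, 0.3],
}

def average(groups):
    """Mean opinion score over the chosen expert population."""
    scores = [s for g in groups for s in opinions[g]]
    return sum(scores) / len(scores)

print(round(average(["saltwater_macro"]), 2))  # 0.7: strongly in favor
print(round(average(opinions.keys()), 2))      # -0.11: mildly against
```

Depending on which definition of “the experts” you pick, the average ranges from clearly in favor to mildly against – which is why “just average the experts” is less operational than it sounds.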

My second thought is that we might not need to bother. There is a different defence of Vladimir M from an earlier version of Robin Hanson. Writing on the strong selection bias in economics, he noted that you can justify any conclusion or answer you want by selecting from the many assumptions and models that are available. He wrote that

we [economists] are so capable of choosing further assumptions to get the answers we want that outsiders can’t gain much policy advantage from our further insight.

That sounds about right to me, given the current state of macro.

Economists - Drawing opposite conclusions from the same evidence

Christina Romer tried to explain recently why people have opposing views on the expected inflationary effects of monetary expansion. She argued that it was a distinction between theorists and empiricists. Krugman disagreed by saying she was too kind – the other guys are just crazy:

I mean, yes, there are theoretical models in which monetary expansion translates immediately into sudden inflation. But these same models also say, essentially, that what we’re experiencing now — a prolonged period of high unemployment in which wage growth has slowed, but wages haven’t plunged — couldn’t happen. So to believe the inflation scare stories you have to be not just a theorist but a theorist who believes his theories, not his own lying eyes.

This seems wrong. No theory will be right in every detail across the board, particularly in economics, and there has to be some judgment as to what evidence is relevant and what evidence is not relevant. I’m often frustrated as well by how economists cling to (what I feel are) insane theories, but still - I’m sure Thomas Sargent could have sounded sensible defending the theories Krugman dismisses (see here, for instance), even though I also feel (?) pretty certain Sargent is wrong.

However, this reminds me of an interesting study which showed that exposing people to imperfect and inconclusive evidence (which is almost all evidence in economics, particularly macroeconomics) can lead people with opposing views to move in different directions. An (old) experiment exposed psychology undergraduates who opposed or favored the death penalty to (fake) studies (emphasis added below):

[…] subjects supporting and opposing capital punishment were exposed to two purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. As predicted, both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative ones, and they reported corresponding shifts in their beliefs as the various results and procedures were presented. The net effect of such evaluations and opinion shifts was [an] increase in attitude polarization.