Freakynomics: Jumbled thoughts on stuff - mostly economics
Is there no racial bias precisely because it seems like there is? (January 16, 2014)<p>Consider a criminal activity that is equally prevalent in two groups, but police arrest a larger share of group A than group B. Is this evidence <em>for </em>or <em>against</em> discrimination? </p> <p>In the US debate on drug policy, this is seen as evidence of racial bias. Ezra Klein <a href="http://www.washingtonpost.com/blogs/wonkblog/wp/2013/12/01/racism-isnt-over-but-policymakers-from-both-parties-like-to-pretend-it-is/">pointed out recently</a> that similar shares of African-Americans and whites use cannabis, but that African-Americans are arrested far more often for marijuana possession. He sees this as clear evidence of racial bias: More arrests despite equal crime rates show this clearly.</p> <p align="left">In the academic economics literature, a 2001 paper by John Knowles, Nicola Persico, and Petra Todd in a leading economics journal (ungated <a href="http://nicolapersico.com/files/kpt.pdf">here</a>) presents an “empirical test” of “racial bias in motor vehicle searches” that flips this interpretation 100% upside down. I’ll explain the reasoning in detail below, but the model basically presents <em>the same kind of statistical fact</em> as evidence of a <em>lack </em>of discrimination: If the groups are equally law-abiding despite one being searched more often, then this means that the police have targeted the “crime-prone” group <em>only </em>up to the point where this targeting made them break the law at the same rate as others. The police do not care about color, only arrests, and they have only used color to the extent that it helped them predict the probability of criminal activity (statistical discrimination). 
Equal underlying crime rates despite more arrests from one group show this clearly.</p> <p align="left">Why the difference in interpretation? Well – the crucial underlying assumption in the economics paper is that groups perceive their risk of being stopped and searched, and that they respond rationally to the risk that their lawbreaking will be observed and punished. Given this, the argument goes through – without it, the whole thing breaks down. The paper is aware of this, writing in a “discussion” section that</p> <blockquote> <p align="left">Our model assumes that motorists respond to the probability of being searched. This assumption is key to obtaining a test for prejudice that can be applied without data on all the characteristics police use in the search decision. If motorists did not react to the probability of being searched, testing for prejudice would require data on c [defined as “all characteristics other than race that are potentially used by the officer in the decision to search cars”].</p> </blockquote> <p align="left">Interestingly, though, while the paper does note the central importance of this assumption, it does not find it necessary to present any empirical evidence for it. This seems odd to me: It may well be a standard <em>theoretical </em>assumption, but this paper presents an <em>empirical </em>test that relies on this assumption being true. Empirical claims consistent with abstract theory, in other words, seem to have gotten a free pass from the referees in a top-5 journal in economics (the Journal of Political Economy). The researchers even turn the tables and put the kind of argument that Ezra Klein represents on trial:</p> <blockquote> <p align="left">The argument that infers racism from this evidence relies on two very strong assumptions: (1) that motorists of all races are equally likely to carry drugs and (2) that motorists do not react to the probability of being searched. 
Relaxing these assumptions, as we do in this paper, leads to a very different kind of test.</p> </blockquote> <p align="left">The free pass that some economists give empirical claims provided they derive from “standard theory” is, <a href="https://docs.google.com/document/d/1voxY7EU-dHkgg3UAg_wrTVn8ixAvBIhHS5F03XU0o1Q/edit?usp=sharing">admittedly, one of my pet peeves</a>. Still – it seems kind of ballsy to say that they’re merely “relaxing” this assumption when they actually seem to be making two other claims that are equally strong:</p> <ol> <li> <div align="left">Assumption: Motorists of all groups perceive <em>levels and changes in the objective probability that they will be stopped</em>, and respond rationally to these so that, as a result,</div> </li> <li> <div align="left">Motorists of all races, ages, looks, etc. are equally likely to carry drugs in equilibrium, which is where we as economists should expect them to be.</div> </li> </ol> <p align="left">Now – a major caveat: the paper has already been cited more than 300 times, and for all I know the large literature may have kicked the tires and empirically tested all kinds of assumptions and implications from this paper. My own prior, however, would be that assumption 1 may be too strong – and I would like to see how robust the argument is to changes in this assumption. I’ll gladly admit to not having deep familiarity with this literature, but if I recall correctly from MacCoun and Reuter’s excellent book “Drug War Heresies” that I read some ten years ago, people grossly exaggerate their risk of being detected for lawbreaking (e.g., for speeding violations and so on). It also seems to be difficult to find evidence that the intensity of drug law enforcement efforts has any strong effect on the prevalence and intensity of use – which has become a common argument against strict drug law enforcement and in favor of decriminalization. 
The “behavioral” literature and work by economists such as Kip Viscusi likewise suggest that people are poor at perceiving small risks accurately. </p> <p align="left">Even on an everyday level, it is unclear to me how individuals would get information on the probability that they will be stopped – most of us will be stopped so rarely that it is hard to estimate the risk based on our own experience, and we rarely pool the relevant quantitative information with others (“I drove a total of 70 hours last year, and was stopped by the police zero times. How about you? I’m trying to get enough observations to identify my risk of being stopped – and we seem similar enough that our data can be pooled.”)</p> <p align="left">As regards the second claim, that all subgroups in the population will break the law at the same rate, this seems too strong – but it is what makes the paper’s “empirical test” for discrimination so simple: When the police have the same cost of stopping and searching single cars from different groups, then:</p> <blockquote> <p align="left">If the returns to searching [roughly: the probability of a motorist being a lawbreaker] are equal across <em>all </em>subgroups distinguishable by police, they must also be equal across <em>aggregations </em>of these subgroups, which is what we can distinguish in the data. [emphasis in original]</p> </blockquote> <p align="left">The prediction, in other words, is that <em>any group defined in terms of characteristics observable by the police has the same probability of breaking the law in equilibrium. </em>Middle-aged white dads with a station wagon full of kids, elderly ladies on their way to Bingo, etc., would all carry drugs in their car with the same probability as anyone else. This seems highly implausible. 
(Even less plausible, the model has <em>every individual carrying drugs in their car some of the time,</em> though this implication is “fixed” in the discussion section of the paper by introducing unobservable factors within each group.)</p> <p align="left">I’m (hopefully obviously) not saying that I’ve in any way <em>disproved </em>this theory with these remarks, but chalk me up as unconvinced of the validity of the empirical test suggested in this particular instance.</p> <p align="left">This basically concludes my comments on the paper itself, which I present mostly as an interesting example of how economists’ belief in their theory can make them interpret things completely differently from other people. Most readers may want to quit here (or wish they’d never begun reading, for all I know), but for those who want to understand the theory used by the economists I’ll try to get the main ideas across in a non-technical way:</p> <p>The article argues using a model built on a simple basic idea: If you were 100% certain to be stopped and have your vehicle searched, you wouldn’t carry illegal contraband. If, on the other hand, you were certain that you would <em>not </em>be stopped, you would always carry illegal contraband (because Homo Oeconomicus: “Hey! Profit opportunity with no risk! Why not?”). Consequently, there is a search probability between 0 and 1 where you are indifferent between carrying and not carrying contraband, and this can be thought of as a “flip point” where you switch from one choice to the other. </p> <p>Different people will belong to different groups, which can obviously be in different situations: The payoff from carrying contraband and the cost of being punished may differ, and as a result the flipping points of different groups will differ as well. 
To keep the language neutral, we’ll call the groups high-risk and low-risk, where the high-risk group needs to face a high search rate before they are discouraged from carrying contraband, whereas the low-risk individuals require only a small search risk before they bow out of the illegal activity.</p> <p>To see how this model plays out, imagine that you are a police officer whose <em>only </em>goal is to maximize the number of arrests you make. Assume also that the cost and time spent on a search is independent of who you stop and search. Somewhat unrealistically, we can imagine that we start from a situation with no search risk for either group. </p> <p>You begin by searching a little in both groups and find high rates of lawbreaking in both. In fact, in the model you find that everyone you stop is a lawbreaker and can be arrested. Consequently, you spend ever more of your policing time on stopping drivers. However – this alters the incentives of the drivers. As you stop ever larger shares of motorists, the low-risk motorists stop carrying contraband. The time you spend stopping them becomes worthless, as you find nothing that justifies an arrest. The high-risk group still breaks the law, as your search rate is still below their flip point. Consequently, you don’t increase your search intensity towards low-risk motorists, but keep stopping more and more high-risk motorists instead. This continues until you reach their flip point – at which point you stop increasing your search rate (if you went beyond it, they would all stop carrying and you’d have no arrests). 
Stop them more often than their flip point, and they would become less criminal and your efforts would yield a low hit rate. The police thus act in a way that causes groups with different “criminal inclinations” to act identically in equilibrium. (This conclusion requires that the cost to the police of searching a car is the same across groups, but this is assumed throughout most of the paper and is needed for their test to work on the data they use.)</p> <p>A reasonable question about this equilibrium is how the contraband-carrying rate within a group is determined. In the baseline model each individual will now be indifferent between carrying and not carrying contraband in their car. How do they decide how often to do it? </p> <p>If the motorists carry drugs too often, the police will find that the value of stopping more of them is positive (the value of an arrest times the probability of being guilty would be higher than the stop-and-search cost). If they carry drugs too rarely, the police will want to stop them less often and the motorists would basically be leaving money on the table (because Homo Oeconomicus – positive expected return from crime). Consequently, in equilibrium, the motorists in each group carry drugs with a probability that makes the police indifferent between stopping and not stopping them: The expected value of stopping them (i.e., contraband probability times value of arrest) is then equal to the expected cost (the marginal stopping-and-searching cost for the relevant group).</p> <p>As noted earlier in the post – the baseline model assumes that all individuals “flip a coin” or “roll a die” every morning to decide whether or not to traffic drugs that day, but this can be altered by assuming unobservable characteristics that differ within each group, making some of them more and some less crime prone. 
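The two equilibrium conditions can be sketched in a few lines of code. This is my own toy illustration with made-up numbers, not anything from the paper: a motorist who carries gets the benefit with probability (1 − s) and the penalty with probability s, giving a flip point s* = benefit/(benefit + penalty); the police are indifferent when a group's carry rate times the arrest value equals the search cost.

```python
# Toy sketch of the search equilibrium described above -- illustrative
# numbers of my own choosing, not estimates from the paper.
# Motorist: carry iff (1 - s) * benefit - s * penalty > 0,
#           so the flip point is s* = benefit / (benefit + penalty).
# Police:   indifferent iff carry_rate * arrest_value = search_cost.

def flip_point(benefit, penalty):
    """Search rate at which a motorist is indifferent about carrying."""
    return benefit / (benefit + penalty)

def equilibrium(groups, search_cost, arrest_value):
    """Per-group equilibrium (search rate, carry rate)."""
    carry_rate = search_cost / arrest_value  # identical for every group
    return {name: (flip_point(b, k), carry_rate) for name, (b, k) in groups.items()}

# The "high-risk" group has a larger benefit and smaller penalty, so a higher flip point.
groups = {"high_risk": (4.0, 6.0), "low_risk": (1.0, 9.0)}
eq = equilibrium(groups, search_cost=1.0, arrest_value=5.0)

for name, (s, q) in eq.items():
    print(f"{name}: searched at rate {s:.2f}, carries with probability {q:.2f}")
# Search rates differ (0.40 vs 0.10), but the carry rates -- and hence the
# hit rates the paper's test looks at -- are identical (0.20 for both).
```

The point of the sketch is only that unequal search rates and equal hit rates fall out of the same equilibrium, which is exactly the pattern the paper reads as an absence of prejudice.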
</p> Cannabis, IQ and socio-economic status in the Dunedin data - an update (March 8, 2013)<b id="internal-source-marker_0.10264351614750922" style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">I’ll start with a short recap: Researchers </span><a href="http://www.pnas.org/content/109/40/E2657.full.pdf+html?sid=0bf8c0d7-24e8-492d-8262-d07de4ad13d6"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">published an article</span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;"> in August 2012 arguing that adolescent-onset cannabis smoking harms adolescent brains and causes IQ to decline. I responded with </span><a href="https://www.dropbox.com/s/33bqjmw4t5slu7h/PNAS-2013-Rogeberg-1215678110.pdf"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">an article available here </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">arguing that their methods were insufficient to establish a causal link, and that non-cognitive traits (ambition, self-control, personality, interests, etc.) would influence the risk of adolescent-onset cannabis use while also potentially altering IQs by influencing your education, occupation, choice of peers, etc. For various reasons, I argued that this could show up in their data as IQ-trends that differed by socioeconomic status (SES), and suggested a number of analyses that would help clarify whether their effect was biased due to confounding and selection effects. 
In </span><a href="http://www.pnas.org/content/early/2013/02/28/1300618110.full.pdf+html?sid=e8e5357d-39a4-4b94-937b-2654fa6cec04"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">a reply this week</span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;"> (gated, I think), the researchers show that there is no systematic IQ-trend difference across the three SES groups they’ve constructed. However, as I note in my reply (</span><a href="https://www.dropbox.com/s/uyx7cnr897zipf2/PNAS-2013-Rogeberg-1301881110.pdf"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">available here</span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">), they still fail to tell us </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">how different </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">the groups of cannabis users (never-users, adolescent-onset users with a long history of dependence, etc.) were </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">on other dimensions</span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">, and they still fail to control for non-cognitive factors and early childhood experiences in any of the ways I proposed</span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">. 
</span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">In fact, none of the data or analyses that my article asked for have been provided, and the researchers conclude with a puzzling claim that randomized clinical trials only show “potential” effects while observational studies are needed to show “whether cannabis actually is impairing cognition in the real world and how much.” </span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial;"><span style="font-size: 15px; white-space: pre-wrap;"><br /></span></span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">In light of the response, it seems I have done a very poor job of communicating my point. The researchers reduce my entire argument to a temporary effect of schooling on low-SES children, so let me try one last (?) time:</span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial;"><span style="font-size: 15px; white-space: pre-wrap;"><br /></span></span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">Are you </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">strongly confident </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">that cannabis is the </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">only </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">thing that systematically affects IQ after the age of 13? If so, the original research design may seem OK: Look at IQ trends for those who used a lot of cannabis and compare to IQ trends of those who used little. If </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">nothing else </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">systematically affects IQ, there is no need to know how similar or different these groups are in other ways. It is irrelevant, just as we don’t need to know the color of a falling ball to calculate its speed.</span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial;"><span style="font-size: 15px; white-space: pre-wrap;"><br /></span></span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">However, if you think other things </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">may </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">or </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">are likely to </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">affect IQ after the age of 13, such as education, genes, or early childhood experiences, then we need to know more about the people who used a lot and the people who used only a little cannabis or none at all. In </span><a href="https://www.dropbox.com/s/33bqjmw4t5slu7h/PNAS-2013-Rogeberg-1215678110.pdf"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">my original article </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">I gave several references supporting the claim that IQ-trends are affected by environment. I also noted that past research has found the heritability of IQ to increase with age. A common interpretation of this is that our genes influence our non-cognitive traits. As long as we are young, we usually have to live with our parents and attend the neighborhood school. As we age, our non-cognitive traits have an increasingly strong effect on where we end up - what environment we are in, what friends we have, what activities we participate in etc. Our genes thus influence our future environment, and the cognitive challenges from our environment influence our IQ.</span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial;"><span style="font-size: 15px; white-space: pre-wrap;"><br /></span></span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">In light of this, it seems (to me) reasonable to ask for information on other differences between their groups. What we know, we have to glean from other research based on the same data. Some of this was referenced in my original article, but to give two simple examples: One of the researchers </span><a href="http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=200666"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">once described </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">the cannabis-dependent 21-year-olds in their data as having </span></b>
<blockquote class="tr_bq">
<b style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">had a long history of anti-social behavior, going right back to when they were three years old. They were being naughty, beating up other kids in the sandpit, being disruptive, then they went to stealing milk money, then they went to beating up bigger kids in the schoolground, then they converted a car … It goes on and on and on [...] When stuff doesn’t work out right they just resort to violence. </span></b></blockquote>
<b style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">More recently, the Dunedin data were used </span><a href="http://link.springer.com/article/10.1007%2Fs10508-012-0053-1"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">in research </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">that found that women with more sexual partners were far more likely to become dependent on alcohol or cannabis. Women reporting 2.5 or more partners per year when 18-20 years old were almost 10 times more likely to be dependent on alcohol or cannabis at 21. My point is that this indicates that the early-onset cannabis users who go on to dependence </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">do differ </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">systematically from those who start later or never use, and that these differences </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">may </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">be related to underlying “non-cognitive traits” that would also affect their lives, environments and thus IQ independently of their cannabis use. </span></b><br />
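To make the selection concern concrete, here is a small simulation of my own (entirely hypothetical numbers and functional forms, not the Dunedin data): a single latent "non-cognitive trait" raises both the probability of heavy cannabis use and the size of the later IQ decline, while cannabis itself has zero causal effect in the simulated world. The user group still shows a visibly worse IQ trend.

```python
# Hypothetical confounding sketch -- NOT the Dunedin data. A latent
# non-cognitive trait drives both heavy cannabis use and later IQ change;
# cannabis has zero causal effect on IQ in this simulated world.
import math
import random

random.seed(1)
n = 100_000
iq_change_users, iq_change_nonusers = [], []

for _ in range(n):
    trait = random.gauss(0.0, 1.0)                  # latent trait (higher = more "risk prone")
    p_use = 1.0 / (1.0 + math.exp(-(trait - 1.0)))  # higher trait -> more likely to use heavily
    heavy_user = random.random() < p_use
    iq_change = -2.0 * trait + random.gauss(0.0, 5.0)  # only the trait moves IQ
    (iq_change_users if heavy_user else iq_change_nonusers).append(iq_change)

mean = lambda xs: sum(xs) / len(xs)
print(f"heavy users: mean IQ change {mean(iq_change_users):+.2f}")
print(f"non-users:   mean IQ change {mean(iq_change_nonusers):+.2f}")
# Heavy users show a clearly more negative average IQ trend, produced
# entirely by selection on the latent trait rather than by cannabis.
```

Controlling for the latent trait (or a good proxy for it) would eliminate the gap in this toy world, which is the kind of robustness check the post asks for.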
<b style="font-weight: normal;"><span style="font-family: Arial;"><span style="font-size: 15px; white-space: pre-wrap;"><br /></span></span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">The Dunedin group apparently see such traits as irrelevant to their argument. At times, they even underplay the numbers they presented in their own article on the subject: In their response to my article, they write that “Many young cannabis users opted out of education, but that did not account for their IQ drop.” However, their original numbers indicated that education substantially affected the size of the cannabis-use effect: The difference between non-users and adolescent-onset cannabis users with long-term dependence was markedly different for people with different educational levels. This led the authors to write that “among the subset with a high-school education or less, persistent cannabis users experienced greater decline.” As I noted in my article, the magnitude of the “effect” (the IQ change of the highest-exposure group minus the IQ change of the no-exposure group) was twice as large for those with high school or less compared to the same effect for those with more education. </span><br /><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">How important you think these selection issues are will of course differ with your prior beliefs about the importance of various IQ determinants. As long as the Dunedin data remain difficult to access for other researchers, there is little I can do to examine these things myself. I suggested a number of analyses and robustness checks, but the researchers were not interested in pursuing these and reduced my argument to “school temporarily raises low-SES IQ”. </span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial;"><span style="font-size: 15px; white-space: pre-wrap;"><br /></span></span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">This misinterpretation of my article’s argument is to some (a large?) extent my own fault: While my article does discuss non-cognitive traits and the rising heritability of IQ, and proposes a number of analyses to cope with the complications these raise, I too often use the shorthand “SES” rather than “non-cognitive traits correlated with SES.” It would have been clearer and better if I had first discussed the importance of non-cognitive traits in general, and then introduced the hypothesis that this would show up as differing IQ-trajectories across SES groups. That would have made my alternative causal model (non-cognitive traits have increasing influence over environments as people age, and the environment you end up in influences your IQ) clearer. My bad. I tried to remedy this by running the </span><a href="https://www.dropbox.com/s/uyx7cnr897zipf2/PNAS-2013-Rogeberg-1301881110.pdf"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">new 500-word reply in PNAS</span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;"> by a number of colleagues and friends before publishing, rewriting extensively to try to make my causal model clearer while also a) explaining why I thought (wrongly, it now seems) that this would show up as differing IQ-trends for different SES groups, and b) clarifying my more general methodological points and the extent to which they still remain relevant. 
</span><br /><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">While some of the misinterpretation is likely due to my own communication skills, there is also a difference in methodological attitudes at work: In empirical labor economics, researchers are very concerned with selection effects, and you need credible, “plausibly exogenous” variation in causal variables for your effects to be accepted as causal. In contrast, the Dunedin researchers write that randomized clinical trials only show “potential” effects while observational studies are needed to show “whether cannabis actually is impairing cognition in the real world and how much.”</span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">To me, this sounds very odd: We have several instances of randomized clinical trials </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">contradicting </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">effects identified in a large number of observational studies. Three of the most famous ones are described </span><a href="http://www.bmj.com/content/325/7378/1437"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">here </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">(possibly gated): Hormone replacement therapy was thought to reduce female coronary heart disease risk but may actually increase it, beta-carotene seemed to reduce cancer risk in observational studies but actually increased it, and vitamin C had no effect on heart disease risk while observational studies indicated it was protective. Closer to the subject matter at hand, a </span><a href="http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(07)61162-3/fulltext"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">2007 meta-review of observational studies</span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;"> in the Lancet indicated a strong causal effect of cannabis use on schizophrenia risk. 
</span><a href="http://www.ncbi.nlm.nih.gov/pubmed?db=pubmed&cmd=Search&term=Addiction%5BJour%5D+AND+102%5BVolume%5D+AND+597%5Bpage%5D"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">Some researchers pointed </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">out that since increasing shares of young people had been using cannabis, this implied that the number of UK schizophrenia cases should rise strongly, but </span><a href="http://www.medwirenews.com/47/83306/Psychiatry/Increased_cannabis_use_%E2%80%98not_matched_by_psychosis_prevalence_rise%E2%80%99.html"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">this didn’t happen </span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">and the importance of the link is now </span><a href="http://f1000.com/prime/reports/m/5/2/"><span style="color: #1155cc; font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">again in doubt</span></a><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">.</span></b><br />
<b style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-weight: normal;"><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">What all these cases have in common is that there seemed to be convincing evidence from observational studies that there was an effect, but it turned out that the effect was largely due to subtle forms of confounding. The examples certainly do not show that, e.g., beta-carotene has a “potential” negative effect that is “actually” positive in everyday life - though that is what the argument from the Dunedin researchers seems to state. Instead, these cases show that causal inference from observational data is difficult. This is the perspective from which my argument comes. I don’t claim that the correlation observed in the Dunedin data is </span><span style="font-family: Arial; font-size: 15px; font-style: italic; vertical-align: baseline; white-space: pre-wrap;">actually </span><span style="font-family: Arial; font-size: 15px; vertical-align: baseline; white-space: pre-wrap;">fully accounted for by non-cognitive traits. I argue that they have yet to tell us how groups defined by cannabis use patterns differ on other dimensions, and that they have yet to show us how robust their effect estimates are to “controls for causal back channels unrelated to neurotoxicity, simultaneous inclusion of multiple potential confounders, and changes to their statistical model.” </span></b>Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com3tag:blogger.com,1999:blog-8329357786401012777.post-82079762758461668072013-01-15T05:21:00.001-08:002013-01-15T10:12:38.083-08:00Did pre-release harm the research process? 
And what is up with this cannabis and IQ research?<em>Did pre-release of my PNAS paper (<a href="http://www.pnas.org/content/early/2013/01/09/1215678110.abstract?sid=2376b516-baa1-4ad1-8ba2-98582af25378" target="_blank">here</a>, but gated) on methodological problems with Meier et al’s 2012 paper on cannabis and IQ reduce the chances that it will have its intended effect? In my case, serious methodological issues related to causal inference from non-random observational data became framed as a conflict over conclusions, forcing the original research team to respond rapidly and insufficiently to my concerns, and prompting them to defend their conclusions and original paper in a way that makes a later, more comprehensive reanalysis of their data less likely.</em><br />
<em><br /></em>I understand that pre-releasing papers to journalists raises interest in the research work and allows reporters to hit the ground running. But does it also hurt science? In my case, I think the answer may be “yes”. <br />
<br />
Others have discussed how embargoed papers affect science journalism (Ed Yong has a good write-up <a href="http://phenomena.nationalgeographic.com/2009/07/04/does-science-journalism-falter-or-flourish-under-embargo/">here</a>), but my question is whether the research process itself might suffer - at least for some types of papers: In my case, I wrote a research paper that discussed methodological issues in a previous study and suggested that their study failed to control for possible confounders in an appropriate way. I also suggested a number of methods and analyses that might help to address these shortcomings. As the press received the embargoed paper, some reporters called the original researchers and told them some guy claimed their results were wrong, and what did they have to say about that? Rather than reflect on my reasoning and suggestions, this forced the original researchers to react in step with the news cycle: Within something like 24 hrs of the time they first saw my study, they released a statement to the press (available <a href="http://www.moffittcaspi.com/sites/moffittcaspi.com/files/field/publication_uploads/Response_re_Rogeberg.pdf">here</a>) where they brushed off my points with reference to some new analyses. In my opinion, the new analyses they referred to were both insufficient to address my points and difficult to assess since they were presented with no details. Some of them, such as the claim that average IQ change was zero within each of three SES levels they constructed, were quite interesting and merit closer review. Other points, such as the claim that there was a relationship even within mid-SES individuals (they didn’t report whether the effects were the same or smaller) have more limited relevance (see below). 
However, it seemed (at the time) urgent and important to respond before the journalists “went to press,” and I ended up <a href="https://docs.google.com/document/d/1cdtDZom_6Syn7LBs5quT80auk0NhxyR9YCJOa1yC9IE/edit">hastily writing a reply</a> so that I had a response I could make available to the journalists. This - it seems to me - is not conducive to good scientific dialogue. Not only should it be possible to breathe and think about things before pressing “send,” but the discussion can easily veer off into the issues of concern to journalists rather than the methodological issues that matter more. <br />
<br />
Before continuing, let me be clear that my point is not to criticize the science journalists: It is natural (and correct) for them to ask the original authors for a response, and several of the reporters I was in touch with pleasantly surprised me with their level of detail, intellectual curiosity and incisive questions. To some extent it may well be that the embargo just exacerbates the issue, and that the main “problem” (or challenge) is the massive media interest and the quick-response demands that this creates. I don’t have any clear conclusions as to how this could be improved, but I want to note how this may have affected the research debate on IQ and cannabis use.<br />
<br />
The various claims flying around are reported in a number of news articles, now that the press embargo has lifted (just google: meier rogeberg cannabis). The response from the original research team has been <a href="http://www.moffittcaspi.com/sites/moffittcaspi.com/files/field/publication_uploads/Response_re_Rogeberg.pdf">made available on-line</a> by two of the original researchers, and the lead author of the original study has used it as the basis for <a href="https://theconversation.edu.au/teen-cannabis-use-lowers-iq-despite-claims-to-the-contrary-11568">an online piece stating that I am flat-out wrong</a>. There is thus a risk that the researchers have painted themselves into a corner psychologically: By defending their original claim and methodology rather than being open to a proper re-examination of the evidence, it has become more difficult for them to do a fair analysis later without losing face if their original effect estimates were exaggerated or turn out to be non-robust.<br />
<br />
I find this a bit disappointing, as well as sad. If the original conclusions were correct, they would hold up in the new analyses I proposed - leaving their conclusions all the stronger as a result. If their effect was overestimated (due to confounding) or even negligible or zero after better controls, surely that should be seen as a positive outcome as well: More important than what results we get is, after all, making sure that our results are as correct and credible as we can make them.<br />
<br />
To explain why this matters, let me try to get the important methodological issues across in a clear way to those who are interested: Basically, the original paper (which is available <a href="http://www.pnas.org/content/early/2012/08/22/1206820109.abstract">here</a>) used a simple variant of a difference-in-differences analysis. The researchers sorted people into groups according to whether or not they had used cannabis and according to the number of times they had been scored as dependent. They then compared IQ-changes between age 13 and 38 across these groups, and found that IQ declined more in the groups with heavier cannabis-exposure. The effect seemed to be driven by adolescent-onset smokers, and it seemed to persist after they quit smoking.<br />
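In code, the comparison boils down to something as simple as the following toy sketch (the numbers are invented for illustration - this is not the Dunedin data):

```python
import numpy as np

def did(iq13_exposed, iq38_exposed, iq13_never, iq38_never):
    """Average IQ change in an exposure group minus that in never-users."""
    change_exposed = np.mean(np.asarray(iq38_exposed) - np.asarray(iq13_exposed))
    change_never = np.mean(np.asarray(iq38_never) - np.asarray(iq13_never))
    return change_exposed - change_never

# Toy example: the exposed pair drop 6 points each, never-users are flat.
print(did([100, 102], [94, 96], [100, 98], [100, 98]))  # -> -6.0
```

The real analysis is of course richer than this, but the identification problem discussed below applies to this simple group-mean comparison and to its regression equivalents alike.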
<br />
The data used for this study was stunning: Participants in the <a href="http://dunedinstudy.otago.ac.nz/">Dunedin Study</a>, a group of roughly 1000 individuals born within 12 months of one another in the city of Dunedin in New Zealand, had been followed from birth to age 38. They had been measured regularly and scored on a number of dimensions through interviews, IQ tests, teacher and parent interviews, blood-samples etc, and are probably amongst the most intensively researched people on the planet: The study website states that roughly 1100 publications have been based on the sample so far, which is more than one publication per participant on average ;)<br />
<br />
Despite this impressive data, there were some things I found wanting in the analysis. My own experience with difference-in-differences methods comes from empirical labor economics, and this experience had led me to expect a number of robustness checks and supporting analyses that this article lacked. This is not surprising: Different disciplines can face similar methodological issues, yet still develop more or less independently of each other. In such situations, however, there will often be good reasons for “cross-pollination” of practices and methods. For instance, experimental economics owes a large debt to psychology, and the use of randomized field trials in development and labor economics owes a large debt to the use of randomized clinical trials in medicine.<br />
<br />
The cannabis-and-IQ analysis basically compares average changes in IQ across groups with different cannabis use patterns. Since we haven’t randomized “cannabis use patterns” over the participants, we have an obvious and important selection issue: The traits or circumstances that caused some people to begin smoking pot early, and that caused some of these to become heavily dependent for a long time, can themselves be associated with (or be) variables that also affect the outcome we are interested in. The central assumption, in other words, is that the groups would have had the same IQ-development if their cannabis use had been similar. Since this is the central assumption required for this method to validly identify an effect of cannabis, it is crucial that the researchers provide evidence sufficient to evaluate the appropriateness of this assumption. To be specific, and to show what kind of things I wanted the researchers to provide, you would want to:<br />
<ul>
<li>Establish that the units compared were similar prior to the treatment being studied - e.g., provide a table showing how the different cannabis-exposure groups differed prior to treatment on a number of variables. </li>
<li>Establish a common trend - Since the identifying assumption is that the groups would have had the same development if they had had the same “treatment”, then clearly the development prior to the treatments should be similar. In the Dunedin study, they measured IQ at a number of ages, and average IQ changes in various periods could be shown for each group of cannabis users.</li>
<li>Control for different sets of possible confounders. To show that the estimates that are of interest are robust, you would want to show estimates for a number of multivariate regressions that control for increasing numbers (and types) of potential confounders. The stability of the estimated effect and its magnitude can then be assessed, and the danger of confounding better evaluated: What happens if you add risk factors that are associated with poor life outcomes (childhood peer rejection, conduct disorders etc.), or if you include measures of education, jailtime, unemployment, etc.? If the effect estimate of cannabis on IQ changes a lot, then this suggests that selection issues are important - and that confounders (both known and unknown) must be taken seriously. Adding important confounders will also help estimation of the effect we are interested in: Since they explain variance within each group (as well as some of the variance between the groups), they help reduce standard errors on the estimates of interest. </li>
<li>Establish sensitivity of results to methodological choices. Just as we want to know how sensitive our results are to the control variables we add, we also want to know how sensitive they are to the specific methodological choices we have made. In this instance, it would be interesting to allow for pre-existing individual level trends: Assume that people have different linear trends to begin with. To what extent are these differing pre-existing trends shifted in similar ways by later use patterns of cannabis? By adding in earlier IQ-measurements for each individual (which are available from the Dunedin study), such “random growth estimators” would be able to account for any (known or unknown) cause that systematically affected individual trajectories in both pre- and post-treatment periods. Another example is the linear trend variable they use for cannabis exposure, which presumably gives a score of 1 to never users, 2 to users who were never dependent, 3 to those scored as dependent once and so on. This is the variable that they check for significance - and it would be useful to know how sensitive the results are to this particular coding of cannabis exposure. </li>
<li>Provide other diagnostic analyses, for instance by considering the variance of the outcome variable within each treatment group (how much did IQ change differ within each treatment group?). In this way, we could tell whether we seemed to be dealing with a very clear, uniform effect that affects most individuals equally, or whether it was a very heterogeneous effect whose average value was largely driven by high-impact subgroups. </li>
<li>Discuss alternative mechanisms. What potential mechanisms can be behind this, and what alternative tests can we develop to distinguish between these? For instance, let us say you identify what seems to be a causal effect of cannabis use and dependency, but its magnitude is strongly reduced (but not eliminated) when you add in various potential confounders, such as educational level. As the authors of the original paper note (when education turns out to affect the effect size), education could be a mediating factor in the causal process whereby cannabis affects IQ. However, this would mean that the permanent, neurotoxic effect they are most concerned with would be smaller, because part of the measured effect would be due to the effect of cannabis on education multiplied by the effect of this education on IQ. The evidence would thus suggest that the direct “neurotoxic” effect is only part of what is going on. It also suggests that we might want to look for evidence to assess how strongly cannabis use causally affects education, to better understand the determinants of this process. For instance, even if there was only a temporary effect of cannabis on cognition, ongoing smokers would do more poorly in school or college, which might then influence later job prospects and long term IQ. The effect doesn’t even have to be through IQ: If pot smoking makes you less ambitious (either because of stoner subculture or psychological effects), the effect may still have long term consequences by altering educational choices and performance. Put differently: If the mechanism is via school, then even transitory effects of cannabis become important when they coincide with the period of education.</li>
</ul>
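To illustrate the point about controls: here is a deliberately rigged toy simulation (all parameter values are invented) in which cannabis exposure has no effect at all on IQ change, yet a regression that omits the underlying risk factor finds a large negative “effect” - an effect that vanishes once the confounder is added:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Rigged data-generating process (all values invented): a latent "risk
# factor" drives both cannabis exposure and IQ decline; cannabis itself
# has no effect whatsoever.
risk = rng.normal(size=n)
exposure = (risk + rng.normal(size=n) > 0.5).astype(float)
iq_change = -3.0 * risk + rng.normal(size=n)  # decline caused by risk alone

def ols_coef(y, regressors):
    """Coefficient on the first regressor in an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

naive = ols_coef(iq_change, [exposure])           # confounder omitted
adjusted = ols_coef(iq_change, [exposure, risk])  # confounder controlled
print(naive, adjusted)  # naive is strongly negative; adjusted is near zero
```

If an estimated “cannabis effect” moves this much when one plausible confounder enters the regression, that is exactly the kind of instability the robustness checks above are designed to reveal.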
When I originally started looking into this last August, I sent an e-mail to the corresponding author asking for a couple of tables with information on “pre-treatment” differences between the exposure groups. I did not receive this. This is quite understandable, given that they were experiencing a media-blitz and most likely had their hands full. I therefore turned to past publications on the Dunedin cohort to see if I could find the relevant information there. <br />
<br />
It turned out that I could - to some extent. Early onset cannabis use appeared to be correlated with a number of risk factors, and these risk factors were also correlated with poor life outcomes (poor education, crime, low income, etc.). The risk factors were also correlated with socioeconomic status. <br />
<br />
The next question was whether these factors could affect IQ. One recent model of IQ (the <a href="http://mentalhealth.about.com/library/sci/0401/bliq401.htm">Flynn-Dickens model</a>) strongly suggested they would. The model sees IQ as a style or habit of thinking - a mental muscle, if you like - which is influenced by the cognitive demands of your recent environment. School, home environment, jobs and even the smartness of your friends are seen as in a feedback loop with IQ: High initial IQ gives you an interest in (and access to) the environments that in turn support and strengthen IQ. Since the risk factors mentioned above would serve to push you away from such cognitively demanding environments, it seemed plausible that they would affect long term IQ negatively by pushing you into poorer environments than your initial IQ would have suggested. <br />
<br />
A couple of further parts to this potential mechanism can be noted (both discussed <a href="http://blogs.discovermagazine.com/gnxp/2011/01/when-genes-matter-for-intelligence/#.UPVLL6LO3zg">here</a>): It seems that high-SES kids have a higher heritability of IQ than low-SES kids, which researchers often interpret as due to environmental thresholds: If your environment is sufficiently good, variation in your environment will have small effects on your IQ. If, however, your environment is poorer, similar variation will have larger effects. Put differently: The IQ of low-SES kids is more affected by changes to their environment than that of high-SES kids. <br />
<br />
Also, there is a (somewhat counterintuitive, at first glance) result which shows that average IQ heritability increases with age. One interpretation of this is that our genetic disposition causes us to self-select or be sorted into specific environments as we age. The environment we end up with is therefore more determined by our genetic heritage than our childhood environment, where our family and school were, in some sense, “forced environments.” <br />
<br />
In my research article, I refer to various empirical studies supporting these mechanisms and their effects. For instance, past studies that find SES, jailtime, and education to be associated with the rate of change in cognitive abilities at different ages. Putting these pieces together, the risk factors that make you more likely to take up pot smoking in adolescence, and that raise your risk of becoming dependent, also shift you into poorer environments than your initial IQ would predict in isolation. Additionally, these shifts are more likely for kids in lower-SES groups (since the risk factors are correlated with SES), and these also have an IQ more sensitive to environmental changes. Finally, for the same reason, the forced environment of schooling is likely to raise childhood IQ more for the low SES kids (because it is a larger improvement on their prior environments, and because their IQs are more sensitive to environmental influences). SES, then, is in some sense a summary variable that is related to a number of the relevant factors, in that low SES <br />
<br />
<br />
<ol>
<li>correlates with risk factors that influence, on the one hand, adolescent cannabis use and dependency and, on the other hand, poorer life outcomes, and</li>
<li>signals a heightened sensitivity to environmental factors (the SES-heritability difference in childhood)</li>
<li>probably reflects the magnitude of the extra cognitive demands imposed by school relative to home environment</li>
</ol>
<br />
For these reasons, SES seemed like a good variable to use in a mathematical model to capture these relationships. However, it should be obvious from my description of this mechanism that we should expect the mechanism to work even within a socioeconomic group: Even within this group, those with high levels of risk factors will experience poorer life outcomes, which may reduce their IQs. They will also most likely have higher probabilities of beginning cannabis smoking. At the same time, we would expect a smaller effect within a specific socioeconomic group than we would across the whole population. <br />
<br />
However, I simplified this by using SES at three levels and built a mathematical model with these effects, using effect sizes drawn from the past research literature where I could find them. Applying the methods of the original study to my simulated data, I found that they identified effects of the same type and magnitude as they had in the actual study data. This, of course, does not prove or establish that there is no effect of cannabis on IQ. What it does show is that the methods they used were insufficient to rule out other hypotheses, that the original effect estimates may be overestimated, and that we need to look more deeply into the matter, using the kind of robustness checks and specification tests I discussed above. <br />
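The gist of the simulation can be sketched in a few lines of code. To be clear, the parameter values below are invented for this illustration - they are not the calibration used in my actual paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Illustrative version of the simulation's logic (parameters invented).
# Cannabis has NO true effect on IQ anywhere in this data.
ses = rng.integers(0, 3, n)                  # 0 = low, 1 = mid, 2 = high SES
risk = rng.normal(size=n) + 0.7 * (2 - ses)  # risk factors cluster in low SES
heavy = risk + rng.normal(size=n) > 1.0      # who ends up a heavy user
# IQ change is driven by risk-related environment shifts, not by cannabis:
iq_change = -2.0 * risk + rng.normal(scale=3.0, size=n)

gap = iq_change[heavy].mean() - iq_change[~heavy].mean()
print(gap)  # sizeable extra "decline" among heavy users despite no causal effect
```

The group-mean comparison recovers a substantial negative “cannabis effect” from data in which, by construction, there is none - which is all the simulation needs to show.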
<br />
In my mind, this should be just the normal process of science - an ongoing dialogue between different researchers. We know that replications of results often fail, and that acting on flawed results can have negative consequences (see here for an interesting popular science account of one such case). A <a href="http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124">statistical model by medical researcher Ioannidis</a> (at the centre of <a href="http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/">this entertaining profile</a>) suggests that new results based on exploratory epidemiological studies of observational data will be wrong 80% of the time. The Dunedin study on cannabis and IQ would, it seems, fit into this category. After all, by the time you’ve published more than 1100 papers on a group of individuals, it seems relatively safe to say that you have moved into “exploratory” mode. <br />
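Ioannidis’ point follows from a simple Bayesian calculation. His formula for the post-study probability that a claimed finding is true (the “positive predictive value”) can be written down directly; the parameter values in the example below are illustrative assumptions for an exploratory observational study, not numbers taken from his paper:

```python
def ppv(R, alpha=0.05, beta=0.8, u=0.0):
    """Ioannidis' post-study probability that a claimed research finding is true.

    R     -- pre-study odds that the probed relationship is real
    alpha -- significance threshold (type I error rate)
    beta  -- type II error rate (power = 1 - beta)
    u     -- bias: share of would-be null results reported as findings
    """
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# Illustrative assumptions: prior odds 1:10, 20% power, some bias.
print(ppv(R=0.1, beta=0.8, u=0.3))  # well below 0.5 - most such findings are false
```

With low prior odds and low power - plausible for exploratory analyses of observational data - the probability that a “significant” finding reflects a real effect falls well below one half, which is the order of magnitude behind the 80%-wrong figure cited above.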
<br />
In light of this, critically assessing results and methods and proposing alternative explanations and further tests should be an everyday and expected part of research work. Such work is particularly important in cases like the Dunedin study, where the data involved is both costly and time consuming to construct, and thus very rare. As noted recently by Gary Marcus in a G+ comment (second comment <a href="https://plus.google.com/110118190827072885811/posts/cYcu6aeEAgj">here</a>), flawed results based on such data are likely to persist for a “really long time” if we are to wait for other researchers to replicate the analyses on other data. <br />
<br />
And that, finally, brings us back to the end. I remain hopeful that the original researchers will return to their data and address my methodological points properly: How robust and credible is the effect, and how sensitive is the effect magnitude to different sets of controls and methodological choices? However, I worry that the pre-release to the press and the quick back-and-forth exchanges and position-taking this seems to have caused have reduced the likelihood of this taking place. Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com2tag:blogger.com,1999:blog-8329357786401012777.post-64638181407602045392011-11-23T06:17:00.000-08:002011-11-23T06:17:49.011-08:00Economic models in theory and practiceMichael Woodford has a <a href="http://ineteconomics.org/blog/inet/michael-woodford-response-john-kay" target="_blank">nice essay at INET</a> where he responds to <a href="http://ineteconomics.org/blog/inet/john-kay-map-not-territory-essay-state-economics" target="_blank">John Kay’s plea for a changed economics</a>. In it, Woodford presents a number of arguments in favor of economic models that I think are valid and useful, but I don’t think he successfully defends the way economists use models in practice. Instead, he defends a <em>different way </em>of using models that <em>would be </em>more defensible.<br />
Let’s consider his arguments in favor of mathematical models.<br />
<h5>
Argument 1: Precision</h5>
<blockquote>
Models allow the internal consistency of a proposed argument to be checked with greater precision;</blockquote>
True, if the argument allows for translation into mathematical form. There’s an <a href="http://books.google.com/books?id=lAroCMH20YgC&lpg=PA246&dq=%22Much%20economic%20theorizing%20to-day%20suffers%2C%20I%20think%2C%20because%20it%20attempts%20to%20apply%20highly%20precise%20and%20mathematical%20methods%20to%20material%20which%20is%20itself%20much%20too%20vague%20to%20support%20such%20treatment.%22&pg=PA246#v=onepage&q=%22Much%20economic%20theorizing%20to-day%20suffers,%20I%20think,%20because%20it%20attempts%20to%20apply%20highly%20precise%20and%20mathematical%20methods%20to%20material%20which%20is%20itself%20much%20too%20vague%20to%20support%20such%20treatment.%22&f=false" target="_blank">old Keynes quote</a> on this:<br />
<blockquote>
Much economic theorizing to-day suffers, I think, because it attempts to apply highly precise and mathematical methods to material which is itself much too vague to support such treatment. </blockquote>
Sometimes, surely, the “vagueness of the material” is a shortcoming that makes an argument sound more sensible than it is. In that case, forcing it into mathematical form forces us to clarify what we actually mean and makes it harder to “weasel.” In other cases, however, formal methods force us to “sharpen” assumptions in a way that changes the argument itself. There’s a difference between saying that people have some thoughts about what gasoline prices will be in the future, and saying that people have subjective beliefs about future gas price trajectories that can be defined as a probability distribution over all possible price paths. <br />
<h5>
Argument 2: Differentiation/clarification</h5>
Back to Woodford:<br />
<blockquote>
they allow more finely-grained differentiation among alternative hypotheses</blockquote>
This is true – in so far as all the alternative hypotheses can be translated into this common language. For instance, the philosopher Jon Elster <a href="http://www.reocities.com/hmelberg/elster/ar97mte.htm" target="_blank">has written on Gary Becker’s rational addiction work</a> that <br />
<blockquote>
Although I disagree sharply with much of it, it has raised the level of discussion enormously. Before Becker, most explanations of addiction did not involve choice at all, much less rational choice. By arguing that addiction is a form of rational behavior, Becker offers other scholars the choice between agreeing with him or trying to identify <i>exactly </i>where he goes wrong. Whatever option we take (I'm going to take the second), our understanding of addiction will be sharpened and focused.</blockquote>
This sounds fine, until you try to read the literature and discussions and realize that economists rarely find it interesting (or possible?) to discuss the specification of a model <em>taken as a serious hypothesis about a causal mechanism in the world. </em>As Woodford says elsewhere in his essay, when you want to conduct economic analysis with a mathematical model,<br />
<blockquote>
An assessment of the realism of the assumptions made in the model is essential --- not, of course, an assessment of whether the model literally describes all aspects of the world, which is never the case, but an assessment of the realism of what the model assumes about <em>those aspects of the world </em>that the model pretends to represent. It is also important to assess the robustness of the model’s conclusions to variations in the precise assumptions that are made, at least over some range of possible assumptions that can all be regarded as potentially of empirical relevance. These kinds of critical scrutiny are crucial to the sensible use of models for practical purposes.</blockquote>
However, in many parts of economics, <em>this kind of discussion is seen as irrelevant or silly. </em>If you persist in trying to discuss “the realism of what the model assumes about <em>those aspects of the world </em>that the model pretends to represent” you’ll just be a person who doesn’t “get” economics. <br />
Consider the rational addiction model of Becker and Murphy and the others that followed in its wake. As I’ve written with a colleague <a href="http://www.mendeley.com/profiles/ole-rogeberg/document/4267287632/#highlighted" target="_blank">here</a>, <br />
<blockquote>
The core of the causal insight claims from rational addiction research is that people behave in a certain way (i.e. exhibit addictive behavior) because they face and solve a specific type of choice problem. Yet rational addiction researchers show no interest in empirically examining the actual choice problem – the preferences, beliefs, and choice processes – of the people whose behavior they claim to be explaining. Becker has even suggested that the rational choice process occurs at some subconscious level that the acting subject is unaware of, making human introspection irrelevant and leaving us no known way to gather relevant data</blockquote>
In addition: Trying to examine whether the causal mechanisms described are at all plausible or consistent with evidence is seen as irrelevant or weird. It’s an exercise only philosophers like the above-quoted Jon Elster and oddballs like myself seem to be interested in (I first wrote on this in an article called <a href="http://www.mendeley.com/profiles/ole-rogeberg/document/3552172402/#highlighted" target="_blank">“Taking absurd theories seriously”</a>). “Real” economists seem content to wave their hands and say “as-if” or “these are just standard assumptions.” <br />
<h5>
Argument 3: Enables complexity</h5>
<blockquote>
[Models] allow longer and more subtle chains of reasoning to be deployed without both author and reader becoming hopelessly tangled in them.</blockquote>
This claim by Woodford is similar to Krugman’s claim in <a href="http://web.mit.edu/krugman/www/formal.html" target="_blank">“Two cheers for formalism”</a> (a piece originally published in the Economic Journal):<br />
<blockquote>
Most of the topics on which economists hold views that are both different from "common sense" and unambiguously closer to the truth than popular beliefs involve some form of adding-up constraint, indirect chain of causation, feedback effect, etc.. Why can economists keep such things straight when even highly intelligent non-economists cannot? Because they have used mathematical models to help focus and form their intuition.</blockquote>
This sounds sensible: One could argue that individuals face relatively simple problems (“how much milk do I feel like drinking now?”), but that we need formal tools to understand what happens when they interact in markets or firms or whatever. However, this wouldn’t really be true. The argument is particularly off if you’re into rational expectations – in which case you want to impose the requirement that the agents in the model understand the model they are in and optimize in light of the real constraints they face. In that case, they need the same tools you do. <br />
The argument is also off in other contexts. As I’ve argued <a href="http://www.mendeley.com/profiles/ole-rogeberg/document/3552172402/#highlighted" target="_blank">elsewhere</a>, it is actually an argument against <em>all </em>sophisticated, mathematical theories of individual choice. If mathematical modeling is a tool necessary for economists to reason their way through, say, rational addiction theory, how on earth do they expect “even highly intelligent non-economists” to discover that becoming a junky is their best shot at happiness? You might want to avoid the question by saying that they are able to do this “subconsciously”, but even that is a testable claim (hint: <a href="http://onlinelibrary.wiley.com/doi/10.1111/1467-9442.00127/abstract;jsessionid=BF7DBDB4CD61725C9952F2CDD0585828.d04t03" target="_blank">it’s empirically false</a>). Also – if we really did solve such problems easily in our subconscious – wouldn’t these models seem intuitive and in line with our gut feelings? Put differently, it seems odd to develop a tool to overcome human cognitive frailty and then claim that this tool, used at “full power,” merely describes how ordinary people reason anyway. <br />
<h5>
Argument 4: Enables critical evaluation</h5>
Woodford’s essay again:<br />
<blockquote>
Often, reasoning from formal models makes it easier to see how strong are the assumptions required for an argument to be valid, and how different one’s conclusions may be depending on modest changes in specific assumptions. And whether or not any given practitioner of economic modeling is inclined to honestly assess the fragility of his conclusions, the use of a model to justify those conclusions makes it easy for others to see what assumptions have been relied upon, and hence to challenge them.</blockquote>
Here I’ll just refer back to the discussion on argument 2. Once again, I agree with Woodford in principle – but would argue that this is descriptively inaccurate in terms of how academic discussions in economics are actually conducted. When I examined the justification of specific assumptions in rational addiction theories (in <a href="http://www.mendeley.com/profiles/ole-rogeberg/document/3552172402/#highlighted" target="_blank">“Taking absurd theories seriously”</a>), I found that this was a lackadaisical affair: The weirdest and most unbelievable stuff was left uninterpreted in the model, while other weird assumptions were justified by whatever anecdote would support them, or by loose evidence supporting a different but related assumption. The models, to sum up, were<br />
<blockquote>
poorly interpreted, empirically unfalsifiable, and based on wildly inaccurate assumptions selectively justified by ad-hoc stories.</blockquote>
<h5>
Conclusion</h5>
Time to wrap up. <br />
I realize that I’ve sounded critical of Woodford, but hope the “in principle”/”in practice” distinction is clear. My problem with his essay is that it’s framed as a defense of current economic practice. Evaluated in that regard, the argument fails: He actually defends a form of “best practice” in modeling that is neither widespread nor widely recognized as such in economics today (as far as I can tell).<br />
<h3>Hamermesh: Macro is rubbish, but the academic market is working and selecting for usefulness. It selected for Gary Becker, didn’t it? (2011-11-21)</h3>
Labor economist <a href="https://webspace.utexas.edu/hamermes/www/" target="_blank">Daniel Hamermesh</a> is <a href="http://thebrowser.com/interviews/daniel-hamermesh-on-economics-fun?page=1" target="_blank">interviewed at the Browser</a> and asked for five books showing that economics is fun. At one point, the following exchange occurs:<br />
<blockquote>
<strong>With the economics profession, in the aftermath of the financial crisis, being somewhat in disrepute…</strong><br />
Stop! Stop, stop, stop. The economics profession is <em>not</em> in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them. Ask your <a href="http://www0.gsb.columbia.edu/faculty/pbolton/">brother-in-law</a>. I’m sure he thinks, as do 90% of us, that most of what the macro guys do in academia is just worthless rubbish. Worthless, useless, uninteresting rubbish, catering to a very few people in their own little cliques.<br />
<strong>I’m not sure most people in the outside world would make a distinction between macro and microeconomists.</strong><br />
I know. It’s up to us to educate them. I got this line from a friend in architecture the other day. He said exactly the same thing. I went through the same litany, trying to disabuse him of this notion. It’s like pushing a stone up a giant hill. It’s not going to get me very far, I agree. But nonetheless it is the case that most of us, and most of what we do, remains tremendously useful, tremendously relevant, and also fun!</blockquote>
He also names names. While Sargent, for instance, is a good guy, <br />
<blockquote>
Not all the macro guys who won the Nobel are good. The guy who won it in 2004 was one of the main culprits in the nonsense, Ed Prescott.</blockquote>
At the same time, Hamermesh is an optimist, in that he believes the academic market selects, over time, for usefulness:<br />
<blockquote>
I do believe in markets. We had some useless macro guys here who just left, thank God, and we’re now looking for replacements. I do think the failure of these people is conditioning how we search for a replacement. I’m quite sure the journals in academe are going to reflect this too. People <em>are</em> interested in being useful in this profession. It doesn’t mean the people who were the bad guys from the last 20 years in macro are going to be doing anything different. They’re incapable of doing anything different! But markets do work and the dead and useless get shoved aside by the young and useful. I’m a tremendous optimist. I do believe markets work and that people run to fill niches. There’s an obvious niche here, and you’re already starting to see it being filled.</blockquote>
I think this is interesting: Macroeconomics over the last few decades has basically been run by guys that Hamermesh charges with doing mostly “worthless rubbish. Worthless, useless, uninteresting rubbish, catering to a very few people in their own little cliques.” These are the guys who’ve dominated top journals and top economics departments and who have won Nobel Prizes for their macro work. Yet he still sees the academic market as a well-functioning mechanism selecting for usefulness.<br />
There’s a second thing I find interesting about this: One of the “top economists” that Hamermesh mentions to show that micro (as opposed to macro) is useful is the same economist that I trot out to show how absurd nonsense is accepted in economics.<br />
Together with Hans Melberg, I <a href="http://goo.gl/hUv7s" target="_blank">recently argued</a> that there is a “market failure” in (at least a large part of) the academic market for economists: If you have a model that is theoretically consistent and in line with “standard theory” (rational choice, equilibrium, etc.), and if the model matches some stylized facts and can reproduce regularities in market data – then you’re more or less given free rein to make causal claims and say that the “theory” can support strong and important claims regarding the welfare effects of actual real-world policies. <br />
In this work, Melberg and I looked at the kind of claims made in the literature on rational addiction theory. We argue that this is a literature featuring claims so obviously unsupported (we call them “absurd”), that their acceptance into good journals is a clear indication of a “broken market.” <br />
The funny thing is: The whole literature on rational addiction theory – which we see as a clear example of how the “academic market” in economics allows policy-useless nonsense claims to rise to the top - is based on the work of Gary Becker. This same economist is one of two economists that Hamermesh mentions as examples of <em>good economics</em> that, presumably, show how well-functioning the market is.<br />
<blockquote>
There have been some great economists since then, in the last 30 to 40 years. [..] There’s Gary Becker, who in my view is the top economist of the last 50 years. His notions of family bargaining and how families behave are terribly important, and affect how, in the end, we all think.</blockquote>
To me, the rise of Gary Becker and his theories does not illustrate the usefulness (in the sense of credible, well-supported insights into the real world and the effects of actual policy choices on real people) of his work, but more that it “opened new markets” for economists: He showed them ways to build theories of the kind they were familiar with within a host of new areas (education, family, crime, addiction), in ways that seamlessly fit the criteria of “rational choice” and standard micro-economic practice. He provided innovative, creative, exciting strategies for economic imperialism. His work allows you to interpret all sorts of things using the <a href="http://freakynomics.blogspot.com/2011/11/invisible-hand-is-everywhere-you-just.html" target="_blank">universal acid</a> of economic theory. Some of it may be truly useful and correct, some of it is very clearly not, yet all of it has been very successful within the discipline. To me, that makes it unlikely that “usefulness” was the selection criterion involved.<br />
<br />
UPDATE: Came across a <a href="http://lemire.me/blog/archives/2011/09/06/science-is-self-regulatory-really/" target="_blank">nice blogpost by Daniel Lemire</a> who also doubts that science is successfully self-regulatory, though he argues from a different angle (he asks: how well does peer review filter out bad research? To what extent do citation levels reflect quality?). I could also add <a href="http://lemire.me/blog/archives/2011/09/06/science-is-self-regulatory-really/" target="_blank">this post</a> which discusses a recent result that rebuttals don't affect how often a paper is cited, nor how well it is regarded.
<h3>The invisible hand is everywhere… you just need to notice every little detail! (2011-11-18)</h3>
In the book “Darwin’s Dangerous Idea” the philosopher Daniel C. Dennett called Darwin’s theory of evolution a form of “universal acid”:<br />
<blockquote>
it eats through just about every traditional concept, and leaves in its wake a revolutionized world-view, with most of the old landmarks still recognizable, but transformed in fundamental ways.</blockquote>
The same thing is true of economic choice theories: The logic of equilibria based on rational actors making marginal adjustments started as a description of the market, ate its way into “above market”-institutions such as regulatory agencies (<a href="http://en.wikipedia.org/wiki/Regulatory_capture" target="_blank">regulatory capture</a>) and government (<a href="http://en.wikipedia.org/wiki/Public_choice_theory" target="_blank">public choice</a>), as well as “non-market”-institutions such as <a href="http://en.wikipedia.org/wiki/Gary_Becker#Families" target="_blank">families</a> and – in a nice little satirical <a href="http://en.wikipedia.org/wiki/Ouroboros" target="_blank">Ouroboros</a>-move – the <a href="http://www.nakedcapitalism.com/2011/04/blacklisted-economics-professor-found-dead-nc-publishes-his-last-letter.html" target="_blank">discipline of economics</a>:<br />
<blockquote>
<em><strong>The way I would describe Academic Choice theory is that it is “the sociology of economists, without romance.” Is this right?</strong></em> What an insightful comment. As you say, Academic Choice theory is a descriptive project, with no normative orientation. We apply a critical approach in order to counterbalance pervasive earlier notions of economists as scientific heroes struggling against popular ignorance in order to serve the common good.<br />
<strong><em>What would you identify as the central insights of Academic Choice theory?</em></strong> The theory begins by identifying three principal ways in which economists try to maximize their utility. First, they receive salaries from universities, which can be increased if their course enrollment increases. Course enrollment is primarily driven by students with future careers in business and the financial sector, so an economist has an incentive to propound theories that CEOs and financial institutions find attractive. Even if adoption of these theories leads to substantial public costs, these costs will not be shouldered by the economist personally. Second, by developing such theories an economist can open the door to future wealth as a lobbyist or consultant. Third, the support of economists is critical to creating and maintaining special privileges for the financial services industry and for top corporate officers. By threatening to withdraw this support, economists can engage in rent-seeking. I call this last practice academic entrepreneurship.</blockquote>
The post is <strike>wroth</strike> worth reading in full. Remember – no matter what objection someone raises, you can always turn the firehose of economic acid on them and reduce them to yet another selfishly motivated rational agent. And when the economic worldview has eaten its way through everything and laid bare the underlying logic and structure of the world in all its stark, brutal detail? Then, perhaps we’ll all meet up in the “Invisible Hand Society” of Robert Anton Wilson’s novel “Schrodinger’s Cat Trilogy”:<br />
<blockquote>
Dr. Rauss Elysium had summed up the entire science of economics in four propositions, to wit:<br />
1. <i>Find out who profits from it.</i><br />
This was merely a restatement of the old Latin proverb-a favorite of Lenin's-<i>cui bono?</i><br />
2. <i>Groups never meet together except to conspire against other groups.</i><br />
This was a generalization of Adam Smith's more limited proposition "Men of the same profession never meet together except to defraud the general public." Dr. Rauss Elysium had realized that it applies not just to merchants, but to groups of all sorts, including the governmental sector.<br />
3. <i>Every system evolves and expands until it encroaches upon other systems.</i><br />
This was just a simplification of most of the discoveries of ecology and General Systems Theory.<br />
4. <i>It all returns to equilibrium, eventually.</i><br />
This was based on a broad Evolutionary Perspective and was the basic faith of the Invisible Hand mystique. Dr. Rauss Elysium had merely recognized that the Invisible Hand, first noted by Adam Smith, operates everywhere. The Invisible Hand, Dr. Rauss Elysium claimed, does not merely function in a free market, as Smith had thought, but continues to control everything no matter how many conspiracies, in or out of government, attempt to frustrate it. Indeed, by including Propositions 2 and 3 inside the perspective of this Proposition 4, it was obvious-at least to him-that conspiracy, government interference, monopoly, and all other attempts to frustrate the Invisible Hand were themselves part of the intricate, complex working of the Invisible Hand itself.<br />
He was an economic Taoist.<br />
The Invisible Hand-ers were bitterly hated by the orthodox old Libertarians. The old Libertarians claimed that the Invisible Hand-ers had carried Adam Smith to the point of self-contradiction.<br />
The Invisible Hand people, of course, denied that.<br />
"We're not telling you <i>not </i>to oppose the government," Dr. Rauss Elysium always told them. "That's your genetic and evolutionary function; just as it's the government's function to oppose you."<br />
"But," the Libertarians would protest, "if you don't join us, the government will evolve and expand indefinitely."<br />
"Not so," Dr. Rauss Elysium would say, with supreme Faith. "It will only evolve and expand until it creates sufficient opposition. Your coalition is that sufficient opposition at this time and place. If it were not sufficient, there would be more of you."<br />
Some Invisible Hand-ers, of course, eventually quit and returned to orthodox Libertarianism.<br />
They said that, no matter how hard they looked, they couldn't see the Invisible Hand.<br />
"You're not looking hard enough," Dr. Rauss Elysium told them. "You've got to notice <i>every little detail."</i><br />
Sometimes, he would point out, ironically, that many had abandoned Libertarianism to become socialists or other kinds of Statists because <i>they </i>couldn't see the Invisible Hand even in the Free Market of the nineteenth century.<br />
All <i>they </i>could see, he said, were the conspiracies of the big capitalists to prevent free competition and to maintain their monopolies. <i>They, </i>the fools, had believed government intervention would stop this.<br />
Government intervention was, to Dr. Rauss Elysium, just like the conspiracies of the corporations, merely another aspect of the Invisible Hand.<br />
"It all coheres wonderfully," he never tired of repeating. "Just notice <i>all </i>the details."</blockquote>
<h3>Rational models are NOT more constrained than irrational ones (2011-11-17)</h3>
One more comment on the Raquel Fernández <a href="http://www.thestraddler.com/20118/piece4.php" target="_blank">conversation at the Straddler</a> that I mentioned in <a href="http://freakynomics.blogspot.com/2011/11/google-post-from-al-roth-alerted-me-to.html" target="_blank">a previous post</a>. I thought she had several good points that she formulated well, but there was one comment that I’ve often seen economists make and that I think is wrong or at least misleading:<br />
<blockquote>
There <em>is</em> a beauty to the models in and of themselves. You assume, for example, that people are rational. I don’t think any really good economist thinks that people are perfectly rational, but, on the other hand, if you want to model people as not rational, all of a sudden it’s not clear what choice you should make. There are a million and one ways to be non-rational; there’s only one way to be rational within the confines of a model. Rationality means one thing: you’re maximizing your welfare subject to constraints. Now, if you say people don’t always maximize, and they’re beset by this and that, then all of a sudden you can have a million models. And that’s a little bit unsatisfactory too.</blockquote>
Yes, “there’s only one way to be rational within the confines of a model,” but so what? <em>Within the confines of a specific model of irrationality </em>there would be <em>only one way </em>to be irrational too. And yes, “there are a million and one ways to be non-rational,” but there’s also <em>a million and one ways to specify a utility function</em> – and this gives us a million and one ways to act that are <em>all rational</em>.<br />
There are actually three points (at least) here:<br />
<ol>
<li>Strictly speaking, “utility maximization” is empirically empty. We start with a preference relation that summarizes observed choice between pairs of consumption bundles, and which is “rational” in the sense of being complete, reflexive and transitive. We can then <em>represent </em>this with an ordinal utility function <em>constructed</em> to capture the choices described by this preference relation. <em>Any preference relation – </em>that is, any systematic set of choices fulfilling these conditions – can be represented by such a utility function. If you always did what <em>hurt you the most</em>, your choices could still be captured by such a utility function – and saying that you “maximize utility” means nothing more than saying that you “choose the one option within the choice set that would be selected <em>no matter what other alternative </em>in the choice set you set it against in a pairwise choice”. This makes no claims concerning <em>why </em>this option is selected – it may be because it benefits you, is best for the world (but not for you selfishly), is the most brightly colored, was most recently advertised or whatever.</li>
<li>Economists then commonly make the <a href="http://freakynomics.blogspot.com/2011/01/fundamental-leap-of-welfare-economics.html" target="_blank">“great leap of welfare economics”</a> by assuming that all choices actually made aim to maximize the welfare of the choosing agent. “Utility” now measures “welfare” in some way. To be “rational” means to be “smart and selfish” – and arguments about whether or not A or B or C “is rational” quickly become a tiresome exercise in discussing <a href="http://www.iep.utm.edu/psychego/" target="_blank">psychological egoism</a>. “Yes, he gave away his money to the beggar – but this gave him a warm glow which was the most welfare-maximizing item he could purchase for that sum of money.” </li>
<li>People are obviously not 100% selfish in terms of money and goods for themselves, so such utility functions need to be defined over non-observable goods as well as observable goods. This means that the “one” model of fully rational choice is actually a million models, due to the many degrees of freedom <em>within </em>the model. You do what maximizes your “utility,” but that can be anything. Take Gary Becker’s work: there, your utility function can be defined over “capital stocks” that refer to addictive capital, imagination capital, human capital, etc. Looking at the different variants of rational addiction theory that have been developed within Becker’s framework, economists are happy to assume different numbers of such stocks and different cross-derivatives between stocks and other goods. Out, as a result, comes “rational consumption” that is rising, falling, cyclical, chaotic, or involves cold-turkey quitting.</li>
</ol>
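Point 1 can be made concrete with a small sketch (my own illustration, not from any of the papers discussed; the option names and "harm" scores are invented): any complete, transitive pattern of pairwise choices over a finite set, even one that always picks the most harmful option, can be represented by a constructed ordinal utility function that the chooser then appears to "maximize".

```python
# Sketch: "rationalizing" an arbitrary consistent chooser by building
# a utility function from its own pairwise choices.
from itertools import combinations

def build_utility(options, choose):
    """Given choose(a, b) -> a or b, complete and transitive, return a
    dict u such that for every pair the chosen option has higher u."""
    ranked = []
    for opt in options:
        # Insertion sort, using the choice function as the comparator:
        # opt slots in below every element that beats it in a pairwise choice.
        i = 0
        while i < len(ranked) and choose(ranked[i], opt) == ranked[i]:
            i += 1
        ranked.insert(i, opt)
    # Higher rank gets a higher number; ordinal, so the values are arbitrary.
    return {opt: len(ranked) - i for i, opt in enumerate(ranked)}

# A deliberately perverse chooser: always picks the MORE harmful option.
harm = {"apple": 0, "cigarette": 5, "poison": 9}
choose_worst = lambda a, b: a if harm[a] > harm[b] else b

u = build_utility(list(harm), choose_worst)
# The constructed u reproduces every perverse choice as "utility maximization".
assert all(choose_worst(a, b) == max((a, b), key=u.get)
           for a, b in combinations(harm, 2))
```

Nothing here claims the chooser benefits from the choices; the utility numbers merely re-describe whatever pattern the choices already had, which is why "utility maximization" by itself rules out nothing until one adds substantive assumptions about what enters the utility function.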
I really don’t understand why (some) economists think “utility maximization” is such a “hard constraint” on theorizing in light of this. If you think it is – let me know <em>one </em>consumption pattern or human behavior that <em>if it were observed repeatedly </em>would be inconsistent with “rationality” or “utility maximization”. If it’s a hard constraint this should be simple – there should be long lists of possible, observable behaviors that <em>could not occur </em>if people were actually rational and maximized utility in some substantive sense and that <em>would not occur </em>if the hypothesis of “rational selfish maximization” was correct. <br />
In actuality, I think you’ll find that there is no behavior weird enough to make rational choice economists doubt there being <em>some rational utility-maximizing </em>explanation out there provided we look long and hard enough. As Stigler and Becker wrote in their <a href="http://www.jstor.org/stable/1807222" target="_blank">De Gustibus Non Est Disputandum article</a>:<br />
<blockquote>
On our view, one searches, often long and frustratingly, for the subtle forms that prices and income take in explaining differences among men and periods. […] we are proposing the hypothesis that widespread and/or persistent human behavior can be explained by a generalized calculus of utility-maximizing behavior, without introducing the qualification “tastes remaining the same".</blockquote>
Put differently: If you see human action that doesn't look rational - doubt not! Rationality works in mysterious ways... Believe, think, pray and tinker with your model - and if you are wise enough all will be revealed and the Invisible Hand will publish your paper in a top-ranked journal...
<h3>The “canonical model” and the importance of default models (2011-11-16)</h3>
<p>A <a href="https://plus.google.com/u/0/108638251908107706165/posts/ViL6mRdq3Wf" target="_blank">Google+ post from Al Roth</a> alerted me to <a href="http://www.thestraddler.com/20118/piece4.php" target="_blank">an interesting conversation</a> at The Straddler with Raquel Fernández. She has some great ways of making some nice points, such as her statement that a:</p> <blockquote> <p>problem [in economics] is that methodology frequently trumps the question. Once you have a way to model things, much of the research becomes very self-referential; that is, it becomes more about how the model behaves and less about the question. I think the question really matters, but a lot of economists believe the methodology matters more than the question. And this leads to very elaborate models of very many things without much of an outside reality check.</p> </blockquote> <p>Another interesting impression I get from her talk, which is not explicit and may be a misreading on my part, flows from this point and concerns the importance of default models: The “default” or “canonical” model of economics describes a perfect-competition well-functioning market. We know that this is an incorrect description of the world, but it frequently shapes our “gut reaction,” and because we understand it fully we feel more comfortable arguing about this model than about the world. 
As a result, economists who give policy advice are treated more leniently by fellow economists if their advice is consistent with the standard model. </p> <blockquote> <p>[…] the people who go and give advice usually end up with a very bad rap in economics. I am amazed at how much hatred—and I will say hatred—Paul Krugman evokes from some fellow economists. But one of the reasons for this is that he says things for which there is not “scientific” support and which go against what these people believe is "good" economics. Now, people on the other side also say things for which they do not have "scientific" support incidentally, and they don’t get the same amount of hatred.[…]</p> <p>Take the argument we’ve been having recently. Should we be trying to increase aggregate demand or should we be reducing the deficit? […] Well, a model is not going to give you the answer because it depends on whether you write the model in such a way that getting aggregate demand up is a good idea, or whether you write it in such a way that people are really worried about future deficits that are coming around the road and they won’t invest because they know that taxes are going to be high in the future.</p> <p>These things are rigged into the model from the beginning when it’s such an unsettled question, and we don’t really have an exact science-based way to answer it, which is why we argue about history. […]</p> <p>Economists don’t have to be free-marketers. But that ends up being the canonical model, and then everything else ends up being a departure from the canonical model, which you’ve then got to explain why you’re departing from. It’s not because the canonical model is right, it’s because you ask most economists and they’ll say, “At least we understand how that economy works very, very well. 
So you want to tell me that we’re going to move away from this one and move to something else, that’s fine, but you have to explain why you’re putting in all of these imperfections.” So it’s not that you can’t write those things down, it’s just that there is less of a standard way of doing it.</p></blockquote>
<h3>Have a nice summer! (2011-07-01)</h3>
Had some posts I needed to get out on the <a href="http://josefsen.org/">Norwegian-language blog of a colleague</a> regarding housing prices in Norway. As a result I never got around to finishing up my thoughts on Gigerenzer’s criticism of behavioral economics (I really thought the paper was a good, enjoyable and interesting read, but my comments on this blog have so far been on things I didn’t like so much... use the search bar underneath the twitter box on the right to find the posts), nor some things I kind of want to think through regarding popularized economics, nor some ideas I would like to explore regarding the strong demand for assurance and its implications in politics and debate and academia.<br />
<br />
Hope some of you readers will still come by once in a while next season.<br />
<br />
<br />
Have a nice summer!
<h3>Should parenting and drugs affect economic theory? (2011-06-18)</h3>
<div xmlns="http://www.w3.org/1999/xhtml">Would economic theory be different if economists were parents when initially taught it? Freakonomics-blogger Justin Wolfers <a href="http://www.freakonomics.com/2011/06/17/why-economics-falls-down-in-the-face-of-fatherhood/">says yes</a>, because that’s what the experience of becoming a father has told him. Overcoming Bias-blogger Robin Hanson <a href="http://www.overcomingbias.com/2011/06/wolfers-gets-loopy.html">says no</a>, and says becoming a father is like having a mystical experience on drugs: It doesn’t inform us about the real structure of the world.<br />
<br />
I’m wondering if the difference between these two may reduce to one thing: How “religiously” they’ve believed in the “ultimate truth” of the economic model of rational decision making. If we take Wolfers at his own word, he always saw it as<br />
<blockquote>the basic idea informing economics—that people are purposeful, analytic decision makers. And this idea just seemed entirely natural to me. I had always believed in the analytic self; I was rational, calculating, and tried to make smart decisions. Of course real people don’t use math, but I figured that we’re still weighing costs and benefits just as our models say. Or at least that was my understanding of the world.</blockquote>In other words, he sounds like the kind of guy who believed all behavior could be explained by economic theory as optimal, even if some of it would require complex choice models that assume people take subtle feedback effects, strategic “he knows that I know that he knows that I know X” issues and complicated delayed consequences of present actions into account in an optimal “rational” manner. After having a kid, this no longer seems to describe his own experience of himself:<br />
<blockquote>My feelings toward my daughter Matilda aren’t easily expressed in analytic terms. I struggle to express it, just as I struggle to understand it. <br />
<br />
…<br />
<br />
There’s something new and strange about all this. Today, I feel the powerful force of biology. It’s visceral; it’s real; it’s hormonal, and it’s not in our economic models. I’m helpless in the face of feelings that overwhelm me. Yes, I know that a twenty-something reader will cleverly point out that I just need to count kids as a good which yields utility, or perhaps we need to add a state variable to the utility function as in rational addiction models. But that’s not the point. I’m surprised by how little of this I’ve consciously chosen. While the economic framework accurately describes how I choose an apple over an orange, it has had surprisingly little to say about what has been the most important choice in my life.</blockquote>Hanson, in contrast, seems to see economic models as attempts to capture some of the regularities in human behavior:<br />
<blockquote>First, econ makes sense of a complex social world by leaving important things out, on purpose – that is the point of models, to be simple enough to understand. More important, econ models almost never say anything about consciousness or emotional mood – they don’t at all assume people choose via a cold calculating mindset, or even that they choose consciously. As long as choices (approximately) fit certain consistency axioms, then some utility function captures them. So how could discovering emotional and unconscious choices possibly challenge such models.</blockquote>Given Hanson’s view of economic theory, there is no need to redefine everything after having a kid. People will still tend to buy less as the price rises, avoid risk, and so on. It surprises me somewhat, though, that Hanson doesn’t see that there are a number of economists with a more fundamentalist belief in the neoclassical model. I’ve met several, and I bet I’ve met fewer economists in general than Hanson. I’ll admit this is pure speculation, but I’ve wondered if some economists feel threatened by behavior that deviates from the “rational choice” model they hold. They don’t say “Well, this is a simplified model, sure there’ll be deviations, but we’re capturing some regularities and that’s what we’re aiming for. Explaining something is better than not explaining anything and we’ll never be able to explain everything.” Instead, they try to twist their brains into coming up with ad-hoc assumptions that would reveal these deviations to be full, sophisticated optimization. At times, this means that increasingly stupid and shortsighted behavior is explained as increasingly subtle and complex optimization. 
Maybe it's a fear of letting non-rational explanations get a foot in the door, maybe it's because the "welfare effects" often tacked on at the end of choice models would no longer be "valid" (not that they are valid today, but if you truly believe all choices always maximize the ultimate good of importance to the acting agents, then I guess they might seem valid to you).<br />
<br />
Hanson concludes that<br />
<blockquote>Having an emotional parenting experience is as irrelevant to the value of neoclassical econ as having a mystical drug experience is to the validity of basic physics. Your subconscious might claim otherwise, but really, you don’t have to believe it.</blockquote> I'm not sure. If a person sees economic theory as Hanson describes it, then I agree with him. But if a person thinks his way of seeing the world is the only one that is valid and possible (in the sense of consistent with past experiences), then having a child or a <a href="http://en.wikipedia.org/wiki/Marsh_Chapel_Experiment" target="_blank">high dose of psilocybin in a controlled setting</a> may both be ways of learning otherwise?</div>Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-9174964871823331532011-06-14T12:55:00.000-07:002011-06-14T12:55:36.715-07:00Tim Harford's "Adapt" - a book reviewTim Harford's new book “Adapt” is a wonderful read but difficult to pigeonhole. There's interesting stuff about the Iraq war, the financial crisis, development aid, randomized experiments, skunk works, the design of safety systems, whistleblowers, overconfidence and groupthink phenomena, not to mention a truly wonderful explanation of how a carbon tax would work and why environmentalists should embrace it. Even when he covers topics that have been ably covered by others elsewhere, he does so in a light and enjoyable way and manages to dig up new, cool anecdotes. It's partly a popularization of science, partly a business book, at times it almost moves into self-help territory, and at times it seems to present new and interesting perspectives on big topics (such as financial regulation). Still - though it may sound sprawling, I didn't really find it so when reading it. 
At one level it reads like a series of interesting pieces of journalism on different topics, but on another, there's an underlying thread of ideas that gradually emerges.<br />
<br />
The way I read it, the main point of the book is that the problems we face are too complex for us to understand and solve from behind a desk. Evidence ranging from the failed predictions of experts to the extinction records of firms and the failure of high-level military strategies supports this. There are a number of reasons why this is so, ranging from the difficulty of capturing and aggregating information at a sufficiently fine-grained level to psychological tendencies to trust our (frequently false) beliefs and suppress evidence that they're wrong. Still - we do solve problems - but this happens through an evolutionary process: we make lots of bets - each one small enough that failure is acceptable - and the winning bets identify “good enough for now” solutions that we replicate and grow. The best examples of this (as a method for human problem solving) are market economies and science. Lots of entrepreneurs who hope to strike it big, some of whom combine the factors of production in a way that better creates value than others - thus making a profit (to put the point in an Austrian way). Lots of scientists stating hypotheses, some of whom are able to better predict the outcomes of experiments and quasi-experiments than others - thus having their hypotheses strengthened (on a related note - I recently made the argument together with a colleague that this process is broken in economics - see more on that <a href="http://freakynomics.blogspot.com/p/economics-bug-report-and-possible-patch.html">here</a>).<br />
<br />
Harford also discusses a host of implications that follow from this - the need to “decouple” systems so that failure in a single component (such as a bank in the financial system) doesn't bring down the entire system, the need to finance both “highly certain” research ideas and “long shot” ideas, avoiding groupthink by including people likely to disagree (thus creating room for disagreement in the group) and demanding disagreement, and using prizes to elicit experiments. He also discusses how such evolutionary processes can be exploited better in policy - which is where he gets to his beautiful explanation of how a carbon tax works by tilting the playing field (there are two chapters here that should be reworked into a pamphlet and handed out in schools and parliaments). <br />
<br />
That's my brief take on the underlying “storyline” - but it doesn't do justice to the book, which reads like a string of intellectual firecrackers. The wide range of topics, however, also means that each is necessarily touched on lightly - it's an appetizer for a lot of ideas more than a fully satisfying meal. For instance, if success in the market (and elsewhere) consists of being the “lucky” winner who made a bet that - ahead of time - had no stronger claim to being right than others, how does this factor into our views on entitlements and redistributive taxation? If prizes (such as the prize for a space-going flight) actually elicit large-scale, expensive experiments that we only need to pay for when they succeed - does this mean that they exploit some irrational overconfidence in the competitors? If people were sensible and unbiased in their estimate of success, would they spend more than their expected reward? And if not - wouldn't that mean the prize money would have to be sufficient to finance all the experiments - in which case it doesn't save us any money? To what extent does the desire for control play into the desire for top-down planning and control? (Imagine you were the prime minister - would you feel comfortable if loads of schools were allowed to try out whatever they felt like, risking the chance that some of them would beat kids or indoctrinate them in some way that blew up in the media?) In an online <a href="http://boingboing.net/2011/05/31/adapt.html">interview with Cory Doctorow</a>, Harford states that <br />
<br />
<blockquote>I also looked at the banking crisis and big industrial accidents such as Deepwater Horizon, and found that there were almost always people who could have blown the whistle — and sometimes did — but the message didn’t get through. So those communication lines need to be opened up and kept open.</blockquote><br />
Yes - but no… After all, if there's a host of signals coming up, most of them wrong, it might well be rational to have some filtering mechanism in place that also weeds out many of the correct signals, in order to avoid being swamped and misguided by wrong ones.<br />
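The trade-off here can be made concrete with a toy signal-detection calculation (the numbers below are invented purely for illustration - they come from neither the book nor the interview):

```python
# Toy numbers (my own, for illustration): a flood of internal warnings,
# only a few of which point at a real problem.
n_warnings = 1000
base_rate = 0.01                          # share of warnings that are genuine
genuine = n_warnings * base_rate          # 10 real warnings
spurious = n_warnings - genuine           # 990 false alarms

# A crude credibility filter: it passes most genuine warnings,
# but weeds out some of them along with most of the false alarms.
pass_genuine = 0.80
pass_spurious = 0.10

passed_genuine = genuine * pass_genuine       # 8 real warnings survive
passed_spurious = spurious * pass_spurious    # 99 false alarms survive

precision_unfiltered = genuine / n_warnings
precision_filtered = passed_genuine / (passed_genuine + passed_spurious)

print(f"No filter:   chase {n_warnings} warnings, "
      f"{precision_unfiltered:.1%} of them real")
print(f"With filter: chase {passed_genuine + passed_spurious:.0f} warnings, "
      f"{precision_filtered:.1%} of them real, "
      f"at the cost of {genuine - passed_genuine:.0f} real warnings lost")
```

With these made-up numbers the filter throws away two genuine warnings, but the share of real problems among the warnings you actually chase rises from 1% to roughly 7.5% - which is the sense in which weeding out some correct signals can be rational.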
<br />
While we're on the topic of whistleblowers - I also wish he'd said a word or two about some of the biggest transparency cases of recent years. One example is the whistleblower-friendly candidate Obama, who <a href="http://www.salon.com/news/opinion/glenn_greenwald/2011/05/16/whistleblowers">changed his tune</a> once he got into office. This could have served as a way of discussing how hard it is to actually have people looking over your shoulder and criticizing you, even when you believe in (or at least see the arguments for) allowing them to do so. Also, I would have been interested in Tim Harford's views on Wikileaks, which in some ways is the biggest attempt to increase transparency in modern times - as well as his views on the <a href="http://www.salon.com/news/opinion/glenn_greenwald/2010/11/30/wikileaks">conflicts it generated</a> (a book championing the cause of whistleblowers should also at least mention <a href="http://www.salon.com/news/opinion/glenn_greenwald/2011/03/05/manning">the awful treatment of claimed whistleblower Bradley Manning</a>). Given the many stories from the Iraq war and the US military about the dangers of a strictly enforced official party line/strategy/story, the potential value in Wikileaks shining a light on what is actually going on seems pretty clear. Or at least worthy of discussion.<br />
<br />
Given the number of topics covered in the book, there are obviously quibbles to be had with particular facts or with the way some topics are treated, but that's to be expected. More importantly, there were parts of the argument that I felt were missing - especially concerning how difficult it is to learn from experience. As documented in, for instance, Robyn Dawes' excellent “House of Cards” (in the context of psychology and the misguided beliefs of treatment professionals), there are clear cases where statistical decision rules consistently outperform human judgments, without this being enough to convince the experts who could gain from them. Or consider <a href="http://youarenotsosmart.com/2011/06/10/the-backfire-effect/">this post</a> on the backfire effect from the You Are Not So Smart blog, which discusses experiments suggesting that people can react to evidence that they were wrong by becoming even more convinced that they were right. The way politicians respond to arguments about the surprisingly weak effect of drug decriminalization on usage levels is another example. In terms of Harford's argument: adaptation and evolution not only require us to test things and find out what works - they also require us to accept what works and implement it more broadly. Given everything the book covers, he probably touched on this too - but if he wants his ideas taken up in policy circles, I think (that is, my gut feeling is) that this would be perhaps the hardest part.<br />
<br />
Finally, the book could also have been tempered by applying its thesis to the thesis itself: Has “planned evolution” been attempted, and did it actually work? As the book argues, the devil is often in the details, and seemingly good ideas based on solid case stories may turn out to work quite differently in practice than we expected.Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-43954554378397566792011-06-13T02:38:00.000-07:002011-06-13T08:27:19.453-07:00Economics, math and science - Krugman 1996 vs. Krugman 2008<div xmlns="http://www.w3.org/1999/xhtml">I recently came across a <a href="http://www.slate.com/id/1911/" target="_blank">Slate essay by Krugman from 1996</a> where he basically asserts that economics is a hard-core sciencey discipline because it uses mathematical models, and that those who dislike modern economics do so because they would prefer literary-criticism-style blah-blah. He starts his piece by discussing criticism of him (which I haven't read) from Bob Kuttner. Krugman states that the disagreement has nothing to do with politics:<br />
<blockquote>We are both, after all, liberals. (...) What we are really fighting about is a matter of epistemology, of how one perceives and understands the world.<br />
(...)<br />
A strong desire to make economics less like a science and more like literary criticism is a surprisingly common attribute of anti-academic writers on the subject.<br />
(...)<br />
More than 40 years ago, the scientist-turned-novelist C.P. Snow wrote his famous essay about the war between the "two cultures," between the essentially literary sensibility that we expect of a card-carrying intellectual and the scientific/mathematical outlook that is arguably the true glory of our civilization. That war goes on; and economics is on the front line. Or to be more precise, it is territory that the literati definitively lost to the nerds only about 30 years ago--and they want it back. That is what explains the lit-crit style so oddly favored by the leftist critics of mainstream economics. Kuttner and Galbraith know that the quantitative, algebraic reasoning that lies behind modern economics is very difficult to challenge on its own ground. To oppose it they must invoke alternative standards of intellectual authority and legitimacy. In effect, they are saying, "You have Paul Samuelson on your team? Well, we've got Jacques Derrida on ours."<br />
(...)<br />
The literati truly cannot be satisfied unless they get economics back from the nerds. But they can't have it, because we nerds have the better claim.</blockquote>I find this interesting for two reasons. For one thing, economics is about the real world, yet Krugman never once mentions empirical evidence. Based on this piece, the discussion seems to be a theological debate between Pythagorean mystics who believe in the revelatory power of math and medieval scholastics who want to focus on conceptual distinctions and dialectical reasoning - with both sides seeing themselves as the more scientific.<br />
The second thing is that this belief in divine revelation through algebra is exactly what Krugman <a href="http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html" target="_blank">later attacked</a> when he had had enough of the absurdities of highly regarded, peer-reviewed work in top journals spouting poorly justified empirical claims. After the financial crisis, he wrote,<br />
<blockquote>the fault lines in the economics profession have yawned wider than ever. Lucas says the Obama administration’s stimulus plans are “schlock economics,” and his Chicago colleague John Cochrane says they’re based on discredited “fairy tales.” In response, Brad DeLong of the <a href="http://topics.nytimes.com/topics/reference/timestopics/organizations/u/university_of_california/index.html?inline=nyt-org" title="More articles about the University of California.">University of California, Berkeley</a>, writes of the “intellectual collapse” of the Chicago School, and I myself have written that comments from Chicago economists are the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten. What happened to the economics profession? And where does it go from here?</blockquote><blockquote>As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.<br />
(...)<br />
the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess.<br />
(...)<br />
what’s almost certain is that economists will have to learn to live with messiness. That is, they will have to acknowledge the importance of irrational and often unpredictable behavior, face up to the often idiosyncratic imperfections of markets and accept that an elegant economic “theory of everything” is a long way off. In practical terms, this will translate into more cautious policy advice — and a reduced willingness to dismantle economic safeguards in the faith that markets will solve all problems.</blockquote>My point with this is not that it's wrong to use mathematics (Krugman <a href="http://krugman.blogs.nytimes.com/2009/09/11/mathematics-and-economics/" target="_blank">made sure to clarify this as well</a>). My point is that it's wrong to think that you can reason your way to empirical truth without getting involved with the messy reality around us. <a href="http://freakynomics.blogspot.com/p/economics-bug-report-and-possible-patch.html" target="_blank">Claims about reality need evidence from reality.</a> The claims about reality that you start out with could derive from a formal model or a verbal argument or even a diagram - but analyzing empirical evidence requires quantification of phenomena and statistics, so this is not an argument against numbers, mathematical methods, or hard-to-understand algebra. It's an argument against theology in science and the belief that you can dispense with empirical evidence provided you've thought "logically" enough from a priori "truths" using some method or other - whether based on mathematics, literary-criticism-style discussion, or symbology. <br />
<br />
<br />
</div>Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-18062164202247408222011-06-12T02:23:00.001-07:002011-06-12T02:23:43.306-07:00Bitcoin - the newest e-money, internet threat and speculative bubble - all in one?<div xmlns='http://www.w3.org/1999/xhtml'>The idea of untraceable, privately controlled electronic currency is not new. Remember how Kevin Kelly made a big deal about it in "Out of Control" (<a href='http://www.kk.org/outofcontrol/ch12-f.html' target='_blank'>chapter 12</a>), his breathlessly enthusiastic book on the accelerating digital age published back in 1995: <br/><blockquote><b>The nature of e-money --</b> invisible, lightning quick, cheap, globally penetrating -- is likely to produce indelible underground economies, a worry way beyond mere laundering of drug money. In the net-world, where a global economy is rooted in distributed knowledge and decentralized control, e-money is not an option but a necessity. Para-currencies will flourish as the network culture flourishes. An electronic matrix is destined to be an outback of hardy underwire economies. The Net is so amicable to electronic cash that once established interstitially in the Net's links, e-money is probably ineradicable.<br/></blockquote>Kelly didn't discuss Bitcoin, as even a futurist would be hard-pressed to discuss by name something that would be developed 13 years later. But it's the same thing: untraceable, outside government control, loved by libertarians, and kind of geeky. 
A good description is <a href='http://blogs.discovermagazine.com/80beats/2011/06/10/everything-you-want-to-know-about-bitcoin-the-digital-currency-worth-more-than-the-dollar/' target='_blank'>here</a>, a recent "oh-my-god-they-sell-drugs-with-this" article from Wired <a href='http://www.wired.com/threatlevel/2011/06/silkroad/' target='_blank'>here, </a>and a very bullish "this-is-where-I'm-gonna-place-all-my-savings" post on the appreciation trend of the bitcoin <a href='http://falkvinge.net/2011/05/29/why-im-putting-all-my-savings-into-bitcoin/' target='_blank'>here</a>.<br/>Off the cuff, my guess would be that a simple, safe online currency that was as easy to use as cash would be quite useful. If you're buying some one-off good or service online, buying a piece of software directly from the vendor, or want to leave something in a blogger's tip jar - rather than using a number of different services (Visa, PayPal, Google Checkout, tip jars, etc.), a simple e-cash would be nice. Based on my very cursory look, Bitcoin is not quite there. Most importantly, it seems like a chore to get money into and out of bitcoins (partly because PayPal, Mastercard and Visa don't want to help). If they fix this problem, there would also seem to be a user-interface issue: they need to make it integrated into browsers or some ubiquitous tool (Facebook? Google accounts?) so that it truly becomes as easy as pulling a bill out of your pocket. As it stands, my guess would be that it might keep appreciating for a while as gold-standard devotees and Ayn Rand fans discover it, there may be a slight influx of black-market funds, and the resulting appreciation may attract people who see it as an investment vehicle. Unless "currency exchange" becomes easier (so you can get the money into and out of the real world) and usability improves, I don't quite see why this would become big. 
And if you can't get your money out without a lot of bother (it might even get worse if governments see the money-laundering issue as a problem) - then the investment aspect of it is going to suffer as well.<br/></div>Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-41675974850981172432011-06-07T02:06:00.001-07:002011-06-07T02:06:42.281-07:00The “flaw” in modern economics – and how to fix it?<p>Why do economists produce such sophisticated, intelligent work and yet end up supporting claims about the real world that seem – at times – insane, absurd and clearly unsupported by evidence? (We realize you might disagree that this is ever a problem, but - as the quotes below will show - we are not alone in making this observation.)</p> <p>A colleague and I have tried to understand why this happens in <a href="http://www.mendeley.com/c/4267287632/p/2237761/rogeberg-2011-acceptance-of-unsupported-claims-about-reality-a-blind-spot-in-economics/" target="_blank">a recently published paper</a>. An essay presenting the same ideas in a shorter, simpler, and more readable form is <a href="http://www.scribd.com/doc/57195981/A-blind-spot-in-economics-Unjustified-claims-about-reality" target="_blank">here</a>, and for those who prefer to get “the gist of it” through a video, you can do so <a href="http://youtu.be/MsBbWwKvF4c" target="_blank">here</a>. An even shorter version follows in this blogpost… ;-)</p> <p>The puzzle that we try to explain is this frequent disconnect between high-quality, sophisticated work on some dimensions and almost incompetently argued claims about the real world on others. 
DeLong recently <a href="http://delong.typepad.com/sdj/2011/05/milton-friedman-we-curtsy-to-marshall-but-we-walk-with-walras.html" target="_blank">blogged about this</a> as the “Walrasian” mindset (as opposed to the more pragmatic and empirically oriented Marshallian) he feels characterizes some macroeconomists: </p> <blockquote> <p>The microfoundation-based theoretical framework is not to be tested, but simply applied. It is not an "engine for the discovery of concrete truth" but rather a body of truth itself. Once a Walrasian has pointed out some not-wholly-implausible microfoundation-based mechanisms, his work here is done.</p> </blockquote> <p>The implied claim is that some economists are seduced by theoretical beauty and talk about the real world even though their gaze is fixed almost exclusively on the Platonic ideal of their equations and models. This is similar to Olivier Blanchard's recent statement that</p> <blockquote> <p><a href="http://www.washingtonpost.com/wp-dyn/content/article/2011/03/07/AR2011030704560.html">Before the crisis, we had converged on "a beautiful construction" to explain how markets could protect themselves from harm […] But beauty is not synonymous with truth.</a></p> </blockquote> <p>This, again, was similar to Krugman’s claim in the 2009 essay “<a href="http://www.nytimes.com/2009/09/06/magazine/06Economic-t.html">How did economists get it so wrong?</a>”:</p> <blockquote> <p>As I see it, the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.</p> </blockquote> <p>I'd also note the recent <a href="http://noahpinionblog.blogspot.com/2011/04/what-i-learned-in-econ-grad-school.html" target="_blank">reflections of blogger noahpinion on his graduate economics courses</a>, where</p> <blockquote> <p>the course [… in macroeconomics] didn't discuss <i>how we knew if these theories were right or wrong</i>. We did learn Bob Hall's test of the PIH. That was good. 
But when it came to all the other theories, empirics were only briefly mentioned, if at all, and never explained in detail. When we learned RBC, we were told that the measure of its success in explaining the data was - get this - that <i>if you tweaked the parameters just right, you could get the theory to produce economic fluctuations of about the same size as the ones we see in real life</i>. When I heard this, I thought "You have <i>got</i> to be kidding me!" Actually, what I thought was a bit more...um...colorful.</p> </blockquote> <p>and (in <a href="http://noahpinionblog.blogspot.com/2011/05/what-i-learned-in-econ-grad-school-part.html" target="_blank">part 2</a>)</p> <blockquote> <p>all of the mathematical formalism and kludgy numerical solutions of DSGE give you <a href="http://www.voxeu.org/index.php?q=node/6158">basically zero forecasting ability</a> (and, in almost all cases, no better than an SVAR). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.</p> </blockquote> <p>Though these observations seem related, they still don't explain <strong>how</strong> this happens and <strong>why</strong> – and that makes it hard to find a good way to fix things.</p> <p>Our explanation views the research process as an “evolutionary” process: Hunches and ideas are turned into models and arguments and papers, and these are “attacked” by colleagues who read drafts, attend seminars, perform anonymous peer reviews or respond to published articles. 
Those claims that survive this process are seen as “solid” and “backed by research.” If the “challenges” facing some types of claims are systematically weaker than those facing other types of claims, the consequence would be exactly what we see: Some types of “accepted” claims would be of a high standard (e.g., formal, theoretical models and certain types of statistical fitting) while other types of “accepted claims” would be of systematically lower quality (e.g., claims about how the real world actually works or what policies people would actually be better off under). </p> <p>In <a href="http://www.mendeley.com/c/4267287632/p/2237761/rogeberg-2011-acceptance-of-unsupported-claims-about-reality-a-blind-spot-in-economics/" target="_blank">our paper</a>, we pursue this line of thought by identifying four types of claims that are commonly made – but that require very different types of evidence (just as the Pythagorean theorem and a claim about the permeability of shale rock would be supported in very different ways). We then apply this to the literature on rational addiction and argue that this literature has extended theory and that, to some extent, it is “as if” the market data was generated by these models. However, we also argue that there is (as good as) no evidence that these models capture the actual mechanism underlying an addiction or that they are credible, valid tools for predicting consumer welfare under addictions. All the same – these claims have been made too – and we argue that such claims are allowed to piggy-back on the former claims provided the former have been validly supported. We then discuss a survey mailed to all published rational addiction researchers, which provides indicative support for – or is at least consistent with – the claim that the “culture” of economics knows the relevant criteria for evaluating claims of pure theory and statistical fit better than it knows the relevant criteria for evaluating claims of causal or welfare “insight”. 
To see this, just compare <a href="http://en.wikipedia.org/wiki/Bradford-Hill_criteria" target="_blank">the Bradford-Hill criteria</a> for establishing causality in medicine/epidemiology with the evidence presented in modern macro or rational addiction theory or a game-theoretic model of the climate treaty negotiation process.</p> <p>If this explanation holds up after further challenges, research and refinement, it would also provide a way of changing things – simply by demanding that researchers state claims more explicitly and with greater precision, and that we start discussing different claims separately and using the evidence relevant to each specific one. Unsupported claims about the real world should not be something you're allowed to tag on at the end of a work as a treat for competently having done something quite unrelated.</p> <p>Anyway, this is also an experiment in spreading research – and in addition to this blogpost you can pick from three different levels of interest: The <a href="http://www.mendeley.com/c/4267287632/p/2237761/rogeberg-2011-acceptance-of-unsupported-claims-about-reality-a-blind-spot-in-economics/" target="_blank">full paper</a>, the <a href="http://www.scribd.com/doc/57195981/A-blind-spot-in-economics-Unjustified-claims-about-reality" target="_blank">essay</a> or the <a href="http://youtu.be/MsBbWwKvF4c" target="_blank">video</a>.</p> <p>Comments welcome :-)</p> <div style="padding-bottom: 0px; margin: 0px; padding-left: 0px; padding-right: 0px; display: inline; float: none; padding-top: 0px" id="scid:5737277B-5D6D-4f48-ABFC-DD9C333F4C5D:a5c3b78a-1cc7-4ad8-8604-6706f0e824a6" class="wlWriterEditableSmartContent"><div id="816fafc4-673d-4936-8db6-39cd9b755aae" style="margin: 0px; padding: 0px; display: inline;"><div><a href="http://www.youtube.com/watch?v=MsBbWwKvF4c" target="_new"><img src="http://lh5.ggpht.com/-y-L68YbHlrU/Te3qIHvLnHI/AAAAAAAAElQ/Mbjg0h8eC8k/videof3fdc23bd48d%25255B6%25255D.jpg?imgmax=800" style="border-style:
none" galleryimg="no" onload="var downlevelDiv = document.getElementById('816fafc4-673d-4936-8db6-39cd9b755aae'); downlevelDiv.innerHTML = "<div><object width=\"425\" height=\"355\"><param name=\"movie\" value=\"http://www.youtube.com/v/MsBbWwKvF4c&hl=en\"><\/param><embed src=\"http://www.youtube.com/v/MsBbWwKvF4c&hl=en\" type=\"application/x-shockwave-flash\" width=\"425\" height=\"355\"><\/embed><\/object><\/div>";" alt=""></a></div></div></div> <a style="margin: 12px auto 6px; display: block; font: 14px helvetica,arial,sans-serif; text-decoration: underline; font-size-adjust: none; font-stretch: normal; -x-system-font: none" title="View A blind spot in economics? Unjustified claims about reality on Scribd" href="http://www.scribd.com/doc/57195981/A-blind-spot-in-economics-Unjustified-claims-about-reality">A blind spot in economics? Unjustified claims about reality</a><iframe style="height: 1905px" id="doc_22602" class="scribd_iframe_embed" height="600" src="http://www.scribd.com/embeds/57195981/content?start_page=1&view_mode=list&access_key=key-vp8tv4gh07q3jm7xwmw" frameborder="0" width="100%" scrolling="no" data-aspect-ratio="0.772727272727273" data-auto-height="true" data-auto-resized="true"></iframe><script type="text/javascript">(function() { var scribd = document.createElement("script"); scribd.type = "text/javascript"; scribd.async = true; scribd.src = "http://www.scribd.com/javascripts/embed_code/inject.js"; var s = document.getElementsByTagName("script")[0]; s.parentNode.insertBefore(scribd, s); })();</script> Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-53430687367413374652011-06-02T04:34:00.001-07:002011-06-02T04:34:02.521-07:00Bob Lucas – believe the vision, belie the evidence<p><a href="http://noahpinionblog.blogspot.com/2011/06/architect-of-modern-macroeconomics.html">Noahpinion</a> has a nice “<a 
href="http://delong.typepad.com/sdj/2011/05/milton-friedman-we-curtsy-to-marshall-but-we-walk-with-walras.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+BradDelongsSemi-dailyJournal+%28Brad+DeLong%27s+Semi-Daily+Journal%29">Marshallian</a>” take on the recent talk by Robert “Rational-Expectations” Lucas, the Godfather of modern macro. He shows easily available empirical evidence that strikingly goes against each of the three main assertions Lucas made about the US macroeconomic woes.</p> <blockquote> <p>In <a href="http://www.econ.washington.edu/news/millimansl.pdf">this recent lecture</a> at the University of Washington, Lucas makes the following assertions:</p> <p>1. The persistent gap in income levels among rich economies is due to the costs of European welfare states.</p> <p>2. The length of the Great Depression was due in part to the emergence of strong unions.</p> <p>3. The reason for our current ongoing weakness in employment and business investment is the recent expansion of the U.S. welfare/regulatory state.</p> <p>All three of these assertions are baldly contradicted by history.</p> </blockquote> <p>Head over to <a href="http://noahpinionblog.blogspot.com/2011/06/architect-of-modern-macroeconomics.html">Noahpinion</a> to read the smack-down (well worth reading). What I'd like to do here is just to add a relevant and telling anecdote from Lucas's <a href="http://www.cenet.org.cn/download/10762-1.pdf">professional memoir</a> that I came across in one of the <a href="http://delong.typepad.com/sdj/2011/05/milton-friedman-we-curtsy-to-marshall-but-we-walk-with-walras.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+BradDelongsSemi-dailyJournal+%28Brad+DeLong%27s+Semi-Daily+Journal%29">comments on DeLong</a>:</p> <blockquote> <p>"'Crossing over' was a term introduced to us to describe a discrepancy between Mendelian theory and certain observations. 
No doubt there is some underlying biology behind it, but for us it was presented as just a fudge-factor, a label for our ignorance. I was entranced with Mendel’s clean logic, and did not want to see it cluttered up with seemingly arbitrary fudge-factors. “Crossing over is b—s—,” I told Mike.</p> <p>In fact, though, there was a big discrepancy between the Mendelian prediction without crossing over and the proportions we observed in our classroom data, too big to pass over without comment.</p> <p>My report included a long section on experimental error.... Mike...replaced my experimental error section with a discussion of crossing over. His report came back with an A. Mine got a C-, with the instructor’s comment: “This is a good report, but you forgot about crossing-over.”</p> <p>I don’t think there is anyone who knows me or my work as a mature scientist who would not recognize me in this story. The construction of theoretical models is our way to bring order to the way we think about the world, but the process necessarily involves ignoring some evidence or alternative theories—setting them aside. That can be hard to do—facts are facts—and sometimes my unconscious mind carries out the abstraction for me: I simply fail to see some of the data or some alternative theory. This failing can be costly and embarrassing to me, but I don’t think it has any effect on the advance of knowledge. Others will see the blind spot, as Mike did with crossing-over, keep what is good and correct what is not."</p> <p>From Robert Lucas, Professional Memoir, pp. 4-5</p> </blockquote> <p>This may also be an appropriate time to call attention to the classic old Solow quote about Lucas that you can find <a href="http://freakynomics.blogspot.com/2009/08/its-ok-to-laugh-at-robert-lucas.html">here</a>. 
</p> Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-42933032705545009392011-06-01T08:32:00.001-07:002011-06-01T08:32:23.792-07:00Friedman`s schizophrenic legacy in economic methodology<div xmlns='http://www.w3.org/1999/xhtml'>Brad DeLong had <a href='http://delong.typepad.com/sdj/2011/05/milton-friedman-we-curtsy-to-marshall-but-we-walk-with-walras.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+BradDelongsSemi-dailyJournal+%28Brad+DeLong%27s+Semi-Daily+Journal%29' target='_blank'>an unexpected take</a> on Friedman`s methodological legacy in economics, highlighting his desire to stay close to data when theorizing rather than his defense of "as-if" theorizing. In DeLong`s words, Friedman was a (pragmatic) Marshallian rather than a (purist) Walrasian:<br/><blockquote><span style='border-collapse: separate; color: rgb(0, 0, 0); font-family: 'Times New Roman'; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; font-size: medium;' class='Apple-style-span'><span style='color: rgb(51, 51, 51); font-family: 'Trebuchet MS',Verdana,sans-serif; line-height: 19px; text-align: left; font-size: small;' class='Apple-style-span'><p style='margin-top: 10px; margin-bottom: 10px; text-align: left; '>are the theoretical mechanisms we are studying things that we can see? Are their predictions consistent with the gross features of reality? Supply curves slope up: if we say that demand has changed and pushed us along a supply curve, is it in fact the case that both quantities and prices have risen (or fallen)? 
Demand curves slope down: if we say that supply has changed and pushed us along a demand curve, is it in fact the case that quantities have risen and prices have fallen (or fallen and risen)?</p><p style='margin-top: 10px; margin-bottom: 10px; text-align: left;'>If the first-order predictions of our theories are not visible in the first-order movements of the data--quantities, prices, asset values, and expectations--then, Friedman (and Marshall) would say, our theory is broken and we need to fix it.</p></span></span></blockquote>I`ve often been puzzled by examples of Friedman`s pragmatic, close-to-the-data, uncover-the-actual-mechanisms approach, and by its mismatch with the message economists took away from his essay on methodology.
In <a href='http://books.google.com/books?id=F7Vf3sb-ck4C&lpg=PA164&dq=recalls%20economists%20in%20the%201950s%20reacting%20to%20Friedman%60s%20essay&pg=PA164#v=onepage&q&f=false' target='_blank'>a footnote in Hausman's book on "the inexact and separate science of economics"</a> Hausman mentions that Lee Hansen <br/><blockquote>recalls economists in the 1950s reacting to Friedman`s essay with a sense of <i>liberation.</i> They could now get on with the job of exploring and applying their models without bothering with objections to the realism of their assumptions. <br/></blockquote>More recently, Nathan Berg and Gerd Gigerenzer wrote <a href='http://econpapers.repec.org/paper/pramprapa/26586.htm' target='_blank'>a paper</a> where they set up the "as if" methodology associated with Friedman as the great big flaw of behavioral as well as neoclassical economics:<br/><blockquote>For a research program that counts improved empirical realism among its primary goals, it is startling that behavioral economics appears, in many cases, indistinguishable from neoclassical economics in its reliance on as-if arguments to justify "psychological" models that make no pretense of even attempting to describe the psychological processes that underlie human decision making.<br/></blockquote>This image of Friedman as the staunchest defender of absurdly speculative rational choice fiction always seemed at odds with other stories about the man`s research. As I understand it, he pored through meeting minutes from the Fed together with Anna Schwartz to understand why the Fed did what it did during the Great Depression, and he was sceptical of data-fitting and overly complex theoretical models. Also, when the Economic Journal had a 100 year anniversary issue (January 1991, vol 101 no 404) and asked a number of famous economists for their predictions about the "next 100 years" of our discipline, Friedman went back to the early issues to actually see what (if anything) had changed.
As far as I remember, the other contributions I read were mainly economists saying that in the future the discipline would finally move towards what they themselves had been doing for a long time. Friedman concluded that the core subjects of the late 1800s would still be present, and that some new topics (e.g., property rights, crime, public choice) would probably have been added. The methods would be an updated but recognizable mix of pure theory, descriptive statistics and econometrics. He concluded by quoting a remark Ashley had made after a similar exercise in 1907:<br/><blockquote>When one looks back on a century of economic teaching and writing, the chief lesson should, I feel, be one of caution and modesty, and especially when we approach the burning issues of our own day. We economists...have been so often in the wrong!<br/></blockquote><br/><br/></div>Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-61276680966657819342011-05-25T02:06:00.000-07:002011-05-25T02:06:00.111-07:00What graduate school economics did and did not teach some random dude<p>I`ve got no idea who this guy is – found links to these posts from Tyler Cowen’s blog – but I found <a href="http://noahpinionblog.blogspot.com/2011/04/what-i-learned-in-econ-grad-school.html">his reflection on his graduate economics education</a> (see also <a href="http://noahpinionblog.blogspot.com/2011/05/what-i-learned-in-econ-grad-school-part.html">part 2</a>) insightful and interesting.</p> <p>Some highlights (that is to say – things that remind me of my own opinions ;-)</p> <blockquote> <p>coming as I did from a physics background, I found several things that annoyed me about the course (besides the fact that I got a B). One was that, in spite of all the mathematical precision of these theories, very few of them offered any way to <i>calculate</i> any economic quantity.
In physics, theories are tools for turning quantitative observations into quantitative predictions. In macroeconomics, there was plenty of math, but it seemed to be used primarily as a descriptive tool for explicating ideas about how the world might work. At the end of the course, I realized that if someone asked me to tell them what unemployment would be next month, I would have <i>no idea</i> how to answer them.</p> <p>As Richard Feynman once said about a theory he didn't like: "I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation - a fix-up to say, 'Well, it might be true.'"</p> <p>That was the second problem I had with the course: it didn't discuss <i>how we knew if these theories were right or wrong</i>. We did learn Bob Hall's test of the PIH. That was good. But when it came to all the other theories, empirics were only briefly mentioned, if at all, and never explained in detail. When we learned RBC, we were told that the measure of its success in explaining the data was - get this - that <i>if you tweaked the parameters just right, you could get the theory to produce economic fluctuations of about the same size as the ones we see in real life</i>. When I heard this, I thought "You have <i>got</i> to be kidding me!" Actually, what I thought was a bit more...um...colorful. </p> <p>(This absurdly un-scientific approach, which goes by the euphemistic name of "moment matching," gave me my bitter and enduring hatred of Real Business Cycle theory, about which Niklas Blanchard and others have teased me. I keep waiting for the ghost of <a href="http://en.wikipedia.org/wiki/Francis_Bacon">Francis Bacon</a> or Isaac Newton to appear and smite Ed Prescott for putting <a href="http://www.minneapolisfed.org/research/qr/qr1042.pdf">theory ahead of measurement</a>.
It hasn't happened.)</p> <p>[…]</p> <p>DeLong and Summers are right to point the finger at the economics field itself. Senior professors at economics departments around the country are the ones who give the nod to job candidates steeped in neoclassical models and DSGE math. The editors of <i>Econometrica</i>, the <i>American Economic Review</i>, the <i>Quarterly Journal of Economics</i>, and the other top journals are the ones who publish paper after paper on these subjects, who accept "moment matching" as a standard of empirical verification, who approve of pages upon pages of math that tells "stories" instead of making quantitative predictions, etc. And the <a href="http://en.wikipedia.org/wiki/Committee_for_the_Prize_in_Economic_Sciences_in_Memory_of_Alfred_Nobel">Nobel Prize committee</a> is responsible for giving a <a href="http://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economic_Sciences">(pseudo-)Nobel Prize</a> to Ed Prescott for the RBC model, another to Robert Lucas for the Rational Expectations Hypothesis, and another to Friedrich Hayek for being a cranky econ blogger before it was popular. </p> </blockquote> <p>And from <a href="http://noahpinionblog.blogspot.com/2011/05/what-i-learned-in-econ-grad-school-part.html">the follow-up blog-post</a> which discusses the field-courses he chose (which, AFAIK are the courses he voluntarily chose):</p> <blockquote> <p>The field course addressed some, but not all, of the complaints I had had about my first-year course. There was more focus on calculating observable quantities, and on making predictions about phenomena other than the ones that inspired a model's creation. That was very good.</p> <p>But it was telling that even when the models made wrong predictions, this was not presented as a reason to reject the models (as it would be in, say, biology). This was how I realized that macroeconomics is a science in its extreme infancy. 
Basically, we don't have any macro models that really <i>work</i>, in the sense that models "work" in biology or meteorology. Often, therefore, the measure of a good theory is whether it <i>seems to point us in the direction</i> of models that might work someday.</p> <p>[…]</p> <p>all of the mathematical formalism and kludgy numerical solutions of DSGE give you <a href="http://www.voxeu.org/index.php?q=node/6158">basically zero forecasting ability</a> (and, in almost all cases, no better than an SVAR). All you get from using DSGE, it seems, is the opportunity to puff up your chest and say "Well, MY model is fully microfounded, and contains only 'deep structural' parameters like tastes and technology!"...Well, that, and a shot at publication in a top journal.</p> <p>Finally, my field course taught me what a bad deal the whole neoclassical paradigm was. When people like Jordi Gali found that RBC models didn't square with the evidence, it did not give any discernible pause to the multitudes of researchers who assume that technology shocks cause recessions.
The aforementioned paper by Basu, Fernald and Kimball uses RBC's own framework to show its internal contradictions - it jumps through all the hoops set up by Lucas and Prescott - but I don't exactly expect it to derail the neoclassical program any more than did Gali.</p></blockquote> Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-35079502981946837682011-05-24T01:39:00.001-07:002011-05-24T01:39:53.756-07:00The source of our policy views – an honest opinion from Steven Levitt<p>Freakonomics-author Levitt <a href="http://www.freakonomics.com/2011/05/09/the-%E2%80%9Cdaughter-test%E2%80%9D-of-government-prohibitions-and-why-im-so-angry-about-the-u-s-internet-poker-crackdown/">recently posted</a> on why he <em>strongly </em>opposed the US ban on internet poker, while weakly preferring drug prohibition (despite the good arguments against it) and legalized abortion. </p> <blockquote> <p>I’ve never really understood why I personally come down on one side or the other with respect to a particular gray-area activity.  […]</p> <p>It wasn’t until the U.S. government’s crackdown on internet poker last week that I came to realize that the primary determinant of where I stand with respect to government interference in activities comes down to the answer to a simple question: How would I feel if my daughter were engaged in that activity?</p> <p>If the answer is that I wouldn’t want my daughter to do it, then I don’t mind the government passing a law against it. 
I wouldn’t want my daughter to be a cocaine addict or a prostitute, so in spite of the fact that it would probably be more economically efficient to legalize drugs and prostitution subject to heavy regulation/taxation, I don’t mind those activities being illegal.</p> </blockquote> <p>Some <a href="http://econlog.econlib.org/archives/2011/05/steven_levitts.html">express disappointment</a> in Levitt for this comment:</p> <blockquote> <p>What's missing in Levitt? The whole idea of <em>tolerance</em>. It's easy to tolerate people doing what you would do and approve of. It's harder to tolerate what you don't approve of. It's even harder to tolerate activities and behaviors that you find disgusting. Levitt has just confessed that he's intolerant or, at least, that he won't object to a government that's intolerant. That's disappointing. I had expected better of him.</p> </blockquote> <p>Personally, I find this a misreading of his point. I don`t think he`s saying that he believes this is how it should be – just that this seems to be the way it is. If anything, the fact that he has tried to reflect on the source of his opinions and their possible basis in emotions makes me trust the guy more.</p> <p>Seems to me that we often have  a strong feeling or “intuition” that something is good or bad, and that the smarter we are the better we`re able to convince ourselves that this is due to logical arguments. There`s a host of good stuff on the psychological mechanisms driving our attitudes towards sources of risk in Dan Gardner`s book “The science of fear.” There`s a host of good stuff on how easily we trick ourselves in Kurzban`s “Why everyone (else) is a hypocrite”. Who hasn`t been in a discussion with intelligent, informed people who dig themselves deeper and deeper into a hole while trying to defend some ridiculous opinion. 
(And who hasn`t at times been that very same person themselves?)</p> <p>Note: I`m not making the argument that we can`t learn and modify our views when confronted by evidence. But I am making the claim that this is frequently difficult to do, and that someone able to reflect on their feelings and biases (as Levitt does here) seems more open to changing his views than somebody who ignorantly imagines him- or herself to be a rational, evidence-based and principled logic machine.</p> Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-56725551535125189692011-05-23T04:09:00.001-07:002011-05-23T04:09:49.134-07:00“As-if behavioral economics” – puzzle: How can an as-if theory be normative?<p>Although I enjoyed it, I’ve spent the last few days on this blog noting some issues where I disagree with the paper <a href="http://www.google.no/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.utdallas.edu%2F%7Enberg%2FBerg_ARTICLES%2FBergGigerenzerBEisNeoclassicalInDisguiseHEI.pdf&rct=j&q=As-if%20behavioral%20economics%3A%20Neoclassical%20economics%20in%20disguise%3F&ei=mVbITeudIYueOtfCne0B&usg=AFQjCNHqr_3H0F8cEi03Q8AXKc0rgBbZ7g&sig2=D3rkIN6J82GWce5m-mlPcQ&cad=rja">”As-if behavioral economics”</a>. Today I want to reflect on something they touch upon without fully resolving.</p> <p>Some economists argue that their assumptions can`t be questioned because their models are “as-if” - they are merely tools that allow you to successfully predict market data, and the realism of the assumptions is irrelevant. If that is so - why are there so many norms and criteria apart from prediction that a “good” model should fulfill? And why - if they are mere “as-if” prediction-generating machines - are the neoclassical models held up as a normative ideal we should strive to aim for in our own decision making?</p> <p>Berg and Gigerenzer touch on this puzzle in a couple of places. 
For one thing, two of the points they emphasize are that </p> <ul> <li>behavioral economics suffers from subscribing to the as-if method, which ignores the realism of the assumptions (similarity of model to the real-world mechanism/process), and that </li> <li>behavioral economics has grown to see behavioral “heuristics” as “biases” that violate the normatively correct neoclassical rules</li> </ul> <p>Later, they also note that</p> <blockquote> <p>the normative interpretation of deviations as mistakes does not follow from an empirical investigation linking deviations to negative outcomes. The empirical investigation is limited to testing whether behavior conforms to a neoclassical normative ideal.</p> </blockquote> <p>Consider - if the model is nothing but a black box that spits out impressive predictions:</p> <ul> <li>Why is it important that agents inside the model are optimizing and rational? </li> <li>Why is it important that the agents are well informed? </li> <li>Why is it important that preferences are “standard” (thus generating well behaved utility functions and nice indifference curves)? </li> <li>Why does it matter whether or not your prediction is based on an “equilibrium” inside the model?</li> <li>How can the utility and welfare effects of a model imply anything about real people`s welfare?</li> </ul> <p>This is particularly odd since, as far as I can tell, rational optimizers can behave in all sorts of ways depending on their preferences and the choice problem they face. When assumptions don’t need to be supported by empirical evidence, this means that any observable behavior pattern can be modelled as rational behavior given some hypothetical choice problem. If you don't believe me, ask yourself whether you can describe any specific behavior pattern that could not be the result of rational choice.
Note that this has to be a pattern, that is to say that it has to be stated in terms of observables without reference to “underlying” but non-observable preferences. You can refer to prices, consumption goods, patterns across time and between goods, etc., and using such categories I don`t think it is possible to find any “non-rationalizable consumption pattern” that would be accepted as that by most economists.</p> <p>So what?</p> <p>Well - if anything can be rationalized by such a theory, and assumptions can be as unrealistic as you want - then any stable pattern can be “explained” by such a “theory.” In actuality, though, you would just be describing the pattern using a different format (the “rational choice model” format). Which raises the question of why it is so important to use that format.</p> <p>After all - if all you want to do is to predict, then it shouldn`t matter whether you assumed people to behave “as if” they were maximizers or not. <em>Any model would be just as good if it predicted equally well.</em></p> <p>Also - if the rational choice model is just a format - a way of describing behavior by identifying some “story” that would generate it - then <em>why should it have normative power?</em></p> <p>This is extra puzzling if you consider the old-school style Chicago-economics that sees all behavior as rational. If this is so, then there is no normative power beyond “do whatever you do cause that’s what’s optimal.” Taken at face value, this view of the world would also lead to apathy: There’s no point in criticizing politicians or engaging with the world, because everyone knows what they’re doing and are doing what’s best for themselves. Politicians – that’s public choice. Regulators - they`ve been captured by special interests. Economists? Well - I guess their doings could be made endogenous as well. </p> <p>I don’t have an answer to this puzzle - but I wonder if it may have something to do with politics.
By <em>both</em> claiming that everyone is rational <em>and</em> that this rationality represents the normative ideal for action, a world of unfettered markets comes to seem like a good idea: It would be a world of informed, self-interested people generating huge benefits to each other through their selfish doings. If so - then behavioral economics becomes the “interventionist” response: Yes - a neoclassical paradise would be great – however, unfortunately, we’re just evolved apes with lots of biases and flaws. With a little carefully designed policy, though, we can regulate and nudge people in the direction of the truly rational agent.</p> <p>Does anyone know of a survey that would make it possible to correlate policy views and politics with economists` attitudes towards behavioral and old-school rational choice theory?</p> Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-52636331295984490152011-05-20T00:29:00.001-07:002011-05-21T05:42:49.624-07:00Strauss-Kahn and rational assaultTyler Cowen generated a bit of discussion recently with his <a href="http://marginalrevolution.com/marginalrevolution/2011/05/what-makes-you-an-economist.html">blog-comment</a> on Dominique Strauss-Kahn<br />
<blockquote>Dominique Strauss-Kahn <a href="http://www.nytimes.com/2011/05/15/nyregion/imf-head-is-arrested-and-accused-of-sexual-attack.html?hp">has been arrested, taken off a plane to Paris, and accused of a shocking crime</a>. When I hear of this kind of story, I always wonder how the “true economist” should react. After all, DSK had a very strong <em>incentive</em> not to commit the crime, including his desire to run for further office in France, not to mention his high IMF salary and strong network of international connections. So much to lose.<br />
Should the “real economist” conclude that DSK is less likely to be guilty than others will think? </blockquote>Let`s try to answer the question:<br />
A bad economist would think: Strauss-Kahn clearly has more to lose and thus less of an incentive to commit assault – which makes it unlikely that he did. So he is probably innocent. <br />
A better economist would go one step further: Strauss-Kahn realizes that we would think this way, which makes crime relatively risk-free for him. This makes it likely that he did perform the crime. So he is probably guilty.<br />
The even better economist would go even further: Since we realize that Strauss-Kahn would realize this, and that he would want to exploit this mechanism, we would suspect him after all – which he, in turn, would anticipate, making the crime risky for him again. So he is probably innocent. <br />
The “real economist,” finally, would realize that this infinite loop would lead Strauss-Kahn to play his part in implementing a randomized, mixed-strategy equilibrium by throwing a die to decide whether or not to run naked down hallways assaulting hotel staff. The economist would then write up the model, derive suitably generalized solutions for various assumptions of payoffs and attitudes towards risk, and publish it in a high ranking journal, using the Strauss-Kahn story as a motivating example in the introduction.Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-25640573477496484682011-05-18T11:00:00.000-07:002011-05-18T11:00:00.899-07:00“As if behavioral economics” - flaw 3: Adding a parameter is not all behavioral economists have doneI´m writing through some issues raised by the paper <a href="http://www.google.no/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.utdallas.edu%2F%7Enberg%2FBerg_ARTICLES%2FBergGigerenzerBEisNeoclassicalInDisguiseHEI.pdf&rct=j&q=As-if%20behavioral%20economics%3A%20Neoclassical%20economics%20in%20disguise%3F&ei=mVbITeudIYueOtfCne0B&usg=AFQjCNHqr_3H0F8cEi03Q8AXKc0rgBbZ7g&sig2=D3rkIN6J82GWce5m-mlPcQ&cad=rja">As-if behavioral economics</a>. I have one more annoyance I want to raise with the paper before I move on to some of its strong points.<br />
<br />
The annoyance I want to note today is one that disappoints me. Berg and Gigerenzer write:<br />
<br />
<blockquote>Behavioral models frequently add new parameters to a neoclassical model, which necessarily increases R-squared. Then this increased R-squared is used as empirical support for the behavioral models without subjecting them to out-of-sample prediction tests.</blockquote><br />
This is silly. Yes, adding a parameter does increase R-squared (the share of the variation in the data that your statistical model captures), but this way of phrasing it makes it sound as though any variable added to a statistical model would increase R-squared by the same amount. That´s not the case: A randomly picked variable that is irrelevant would (if we ignore time trends and that sort of data) on average have zero explanatory power. The standard test is to check the significance level of the variable. This answers the following question: If the variable actually has no explanatory power for the data - how likely is it that it would “by chance” seem to explain whatever it seems to explain in the current dataset? The normal significance level to test at is 5%, and if you use that significance level the “irrelevant” variable will seem relevant in your data only 5% of the time. I´m pretty sure Berg and Gigerenzer know this.<br />
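To make the point concrete, here is a minimal simulation sketch (my own construction, not from the post or the paper; the sample size, seed and coefficients are arbitrary choices). Adding an irrelevant regressor mechanically nudges in-sample R-squared upward, but only by the sliver that a significance test or an out-of-sample check is designed to catch:

```python
import numpy as np

# Simulated data: y depends on x, but not on z.
rng = np.random.default_rng(seed=1)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)                  # the "irrelevant" regressor
y = 2.0 * x + rng.normal(size=n)

def r_squared(y, *columns):
    """In-sample R^2 of an OLS regression of y on the given columns plus an intercept."""
    X = np.column_stack([np.ones(len(y)), *columns])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - np.sum(resid ** 2) / tss

r2_base = r_squared(y, x)       # x alone
r2_plus = r_squared(y, x, z)    # x plus the irrelevant z

# Mechanically, in-sample R^2 cannot fall when a regressor is added...
assert r2_plus >= r2_base - 1e-12
# ...but the gain from the irrelevant variable is tiny - which is exactly
# what a t-test on z's coefficient (or out-of-sample prediction) would flag.
print(f"gain from irrelevant variable: {r2_plus - r2_base:.4f}")
```

The nested-model inequality is what Berg and Gigerenzer's phrasing leans on; the size of the gain, which is what significance testing evaluates, is the part their phrasing leaves out.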
<br />
A related flaw shows up in their discussion of Fehr and Schmidt´s model of inequality aversion (which assumes that some people dislike inequality, especially inequality in their own disfavor). Berg and Gigerenzer write:<br />
<br />
<blockquote>In addition, the content of the mathematical model is barely more than a circular explanation: When participants in the ultimatum game share equally or reject positive offers, this implies non-zero weights on the “social preferences” terms in the utility function, and the behavior is then attributed to “social preferences.”</blockquote><br />
This, too, is weak. What Fehr and Schmidt´s model assumes is that there is a specific structure to the inequity aversion: That your dislike of how much better (or worse) off someone else is than you is a linear function of <i>how much</i> better off than you they are. And, second, that it´s worse being behind someone than in front of someone, even if you´d prefer most of all that you were equal. It may be this model is "wrong," but it is more than circular and there is a variety of competing models that others have promoted as better ways of capturing typical patterns in experimental data on various economic games (off the top of my head, Charness and Rabin (2002), Bolton and Ockenfels (2000) and Engelmann and Strobel (2004)).<br />
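For reference, the two-player version of the Fehr and Schmidt utility function can be written in a couple of lines. This is a sketch with illustrative parameter values of my own choosing (not estimates from their paper), showing how the linear structure makes an inequity-averse responder reject a lopsided ultimatum offer:

```python
def fs_utility(own, other, alpha, beta):
    """Two-player Fehr-Schmidt utility: own payoff, minus alpha times
    disadvantageous inequality, minus beta times advantageous inequality.
    Typically alpha >= beta >= 0: being behind hurts more than being ahead."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# Ultimatum game over a pie of 10: the proposer keeps 9 and offers 1.
# Rejection leaves both players with 0. (alpha/beta values are illustrative.)
accept = fs_utility(own=1, other=9, alpha=0.6, beta=0.3)   # 1 - 0.6*8 = -3.8
reject = fs_utility(own=0, other=0, alpha=0.6, beta=0.3)   # 0.0
print(accept < reject)   # True: the inequity-averse responder rejects
```

The point is that the weights are not free to rationalize just anything: the same alpha and beta must fit behavior across all allocations, which is what distinguishes the model from a circular relabeling.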
<br />
Having said that, it might well be that Fehr and Schmidt is a crude model that fails to capture and process the relevant data in the best way. However, it does so well enough to be useful and interesting. If you found a model that did better and that could also predict well for new experiments, as well as in different settings - using less information that could more credibly be related to actual psychological processes - then I´m pretty sure you would be published quickly in a good journal. That´s not to say that “you shouldn´t criticize unless you can do better,” but it is to say that the current model captures <i>something</i> interesting in a simple way - even if it is clearly imperfect. Clarifying its weaknesses is fair game - but Berg and Gigerenzer should do better than brushing it off as though its fit with data was no better than any random model thrown up.Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-17230455596873788942011-05-17T11:00:00.000-07:002011-05-17T11:00:02.986-07:00"As if behavioral economics" - flaw 2: "Neglecting the process is always wrong"I´m writing through some issues raised by the paper <a href="http://www.google.no/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.utdallas.edu%2F%7Enberg%2FBerg_ARTICLES%2FBergGigerenzerBEisNeoclassicalInDisguiseHEI.pdf&rct=j&q=As-if%20behavioral%20economics%3A%20Neoclassical%20economics%20in%20disguise%3F&ei=mVbITeudIYueOtfCne0B&usg=AFQjCNHqr_3H0F8cEi03Q8AXKc0rgBbZ7g&sig2=D3rkIN6J82GWce5m-mlPcQ&cad=rja">As-if behavioral economics</a>. It´s a sprawling paper with some very good arguments and some… not so good ones. I´m hoping to get through both good and bad this week.<br />
<br />
The paper opens with a reasonable goal - evaluating whether behavioral economics has achieved its (sometimes) stated goal of improved empirical realism:<br />
<br />
<blockquote>Insofar as the goal of replacing these idealized assumptions with more realistic ones accurately summarizes the behavioral economics program, we can attempt to evaluate its success by assessing the extent to which empirical realism has been achieved.</blockquote><br />
This is an OK idea for a paper: Some tradition has aimed to achieve X, and we want to see how successful they´ve been in this. However, Berg and Gigerenzer also imply in much of the paper that this aim (empirical realism in the assumed decision-making process) is always an important aim, and that any economic theory that fails in this regard is wrong. They call behavioral economics a “repair program” for the flaws of neoclassical “rational choice” economics, and have a long section on how “empirical realism” was sold, bought and re-sold (i.e. they had it in mainstream economics, lost it due to Pareto and his friends, started getting it back with behavioral economics, but then lost it again as behavioral economists strayed from the path):<br />
<br />
<blockquote>perhaps after discovering that the easiest path toward broader acceptance into the mainstream was to put forward slightly modified neoclassical models based on constrained optimization, the behavioral economics program shed its ambition to empirically describe psychological process, adopting Friedman’s as-if doctrine.</blockquote><br />
So why is this empirically accurate process description so important in Berg and Gigerenzer´s view? The reason seems to be that they give different implications for how we can aid and improve human choice. After an (interesting) explanation of how ball-players catch balls through a simple heuristic (“run so that the ball up in the air is at a constant angle to you”) rather than through “intuitive” application of Newtonian mechanics, they write:<br />
<br />
<blockquote>Thus, process and as-if models make distinct predictions (e.g., running in a pattern that keeps the angle between the player and ball fixed versus running directly toward the ball and waiting for it under the spot where it will land; and being able to point to the landing spot) and lead to distinct policy implications about interventions, or designing new institutions, to aid and improve human performance.</blockquote><br />
This is a good and valid argument <i>in its relevant context</i> but it surely fails to apply to all types of economics. It seems particularly relevant as a criticism of welfare economics, which often involves nothing more substantial than the argument that “all choices are always welfare-maximizing, so any new choice option that is chosen improved welfare.” However, not all of economics is (or should be) dealing with this.<br />
<br />
To my mind, at least part of what economics is about is the study of interactions: What happens when many people interact in a given institutional context (market, negotiation or whatever) and there are mechanisms (prices, norms, whatever) that introduce various positive and negative feedback effects? To study this you need a method, and one such method is to create a “toy world” where “toy people” act in a way that captures relevant behavioral regularities in real people. If people tend to buy less of a good when the price rises, then you need a toy person who responds like this. If you think it may be important that people in some market want to buy the same thing as some other person or group (e.g. fashion), then you need a toy person who exhibits this response. However, you don´t need a psychologically realistic model of a person because all you want (in this context) is to see what the outcome of various interaction effects would be.<br />
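To make the “toy people” idea concrete, here is a minimal sketch (all numbers, names and the functional form are invented for illustration, not taken from any actual model): toy agents who buy less as the price rises, plus a crude “fashion” term that makes them more eager to buy when many others bought last period.

```python
import random

random.seed(1)

# Toy people: each buys one unit if the price is below a personal
# reservation value, nudged upward by how many others bought last
# period (a crude "fashion" feedback). Numbers are illustrative only.
N = 1000
reservations = [random.uniform(0, 10) for _ in range(N)]

def demand(price, last_share, conformity=2.0):
    """Number of toy people buying at this price, given last period's share of buyers."""
    return sum(1 for r in reservations if r + conformity * last_share > price)

# Without the fashion term, the toy people buy less when the price
# rises -- the behavioral regularity they were built to reproduce.
low, high = demand(3, 0, conformity=0), demand(7, 0, conformity=0)

# With the fashion term, iterate to see where the interaction settles.
share = 0.5
for _ in range(50):
    share = demand(5.0, share) / N
print(f"share buying at price 5 with fashion feedback: {share:.2f}")
```

The point of the sketch is exactly the one in the text: the individual rule is psychologically empty, but iterating it reveals where the positive feedback from conformity pushes the aggregate outcome.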
<br />
Sometimes (usually, I would guess), economists will do this in a closed, simple mathematical model with utility maximizing agents and profit maximizing firms. However, since utility maximizing agents can behave in almost any conceivable way (just change their preferences and introduce state variables as in Becker´s extended utility approach), this “rationality postulate” doesn´t really constrain the kinds of behavior you can study that much. You are likely more constrained by the expectations of other economists that the toy people and firms in the model should have “model consistent expectations” (i.e., they should expect the consequences of their actions that actually occur), and that it is important and interesting to study the subtle mechanisms that are created when these toy agents consequently marginally adjust their behavior for all sorts of reasons (Hotelling´s rule, the green paradox, smokers responding to expectations of future tax hikes by smoking less today, etc.).<br />
<br />
Another way of doing this is agent-based modelling, where you create small “ant people” in a computer program and let them interact based on simple rules. You do this again and again and see what “typically happens” and so on. This is related to evolutionary game theory, where the share of “agents” living by some simple strategy grows or shrinks depending on the average payoff it produces given the current mix of strategies in the population.<br />
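The evolutionary-game version of this is easy to sketch. Below is a minimal replicator dynamic for the textbook hawk-dove game (a standard example, not one discussed in the paper; the payoff values are arbitrary), in which the share playing each strategy grows or shrinks with its average payoff given the current population mix:

```python
# Replicator dynamics for the hawk-dove game: the share playing each
# strategy grows with its average payoff against the current mix.
V, C = 2.0, 4.0  # value of the contested resource, cost of a fight (C > V)

def payoffs(p_hawk):
    """Expected payoffs to a hawk and to a dove against the current mix."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    dove = (1 - p_hawk) * V / 2
    return hawk, dove

p = 0.1  # start with 10% hawks
for _ in range(2000):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += 0.01 * p * (hawk - mean)  # discrete-time replicator step

print(f"long-run share of hawks: {p:.3f}")  # settles at V/C = 0.5
```

Nothing here pretends that real animals (or people) compute payoffs; the population-level mix is the object of interest, which is precisely why psychological realism of the individual rule is beside the point.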
<br />
Anyway - though none of these ways of studying interaction are sufficient to credibly examine the social world around us, they don´t seem completely valueless. Granted - some (many?) economists do take the welfare of the toy people a bit too seriously as a proxy for real-world consumer welfare, and some seem to think that tweaking a toy person to act like a real person means that the real person “is similar” psychologically to the toy person. But these are errors in interpretation and use, not in the method as such.<br />
<br />
In short: If you want to show how simple behavioral patterns at the individual level could combine to create various higher-level patterns in groups and markets and other contexts, then what you want is the simplest, most tractable representation of those behavior patterns. Psychological realism is irrelevant - because your argument is “several people interacting in this specific way, each of whom exhibits these simple behavior patterns, would generate such-and-such aggregate patterns and would - in aggregate - respond in such-and-such a way to various external shocks in the environment”.Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-75565937331679908762011-05-16T11:00:00.000-07:002011-05-16T11:00:01.366-07:00"As if behavioral economics" - flaw 1: The “true tradition” argumentI recently read <a href="http://www.google.no/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.utdallas.edu%2F%7Enberg%2FBerg_ARTICLES%2FBergGigerenzerBEisNeoclassicalInDisguiseHEI.pdf&rct=j&q=As-if%20behavioral%20economics%3A%20Neoclassical%20economics%20in%20disguise%3F&ei=mVbITeudIYueOtfCne0B&usg=AFQjCNHqr_3H0F8cEi03Q8AXKc0rgBbZ7g&sig2=D3rkIN6J82GWce5m-mlPcQ&cad=rja">As-if behavioral economics</a>, a paper critical of behavioral economics written by Nathan Berg and Gerd Gigerenzer. Last week I presented the <a href="http://freakynomics.blogspot.com/2011/05/is-behavioral-economics-flawed-band-aid.html">underlying narrative</a> that they seem to imply. This week I hope to have time to discuss some of the more substantial good and bad points of their paper.<br />
<br />
However, before we move on to substance I have one annoyance that I want to get off my chest: What I call the “true tradition” argument. I´ve <a href="http://freakynomics.blogspot.com/2010/05/adam-smith-and-invisible-hand-who-gives.html">touched on this before</a> - regarding the “Holy Scripture” view that some people seem to have of Smith´s Wealth of Nations, but this paper does it again and I find it silly and annoying.<br />
<br />
The “structure” of the argument (if you can even call it an argument) is one of two: <br />
* <i>“Somebody I disagree with has fallen from the true and pure tradition”</i><br />
* <i>“I may seem to be an outsider, but I´m actually the true carrier of the true and pure tradition”</i><br />
<br />
You see this in religion and in alternative movements such as meditation or NLP - where people trace their guru or Kung-Fu teacher or whatever back to some original figure: “My teacher studied under X, who studied under Y, who studied under Z in a pure unbroken line back to (idolized figure or text)” - or the long “X begat Y who begat Z who begat…” sections of the Old Testament.<br />
<br />
You also see this in quasi-scientific practices such as Freudian psychoanalysis. It´s probably even more pronounced in some parts of Austrian economics, where the discussion of what Hayek or Mises or Böhm-Bawerk or Menger “truly” meant seems to be a huge thing. Followers of Ayn Rand are the same or worse. You see it in people who make much ado about how their claims are foreshadowed in Aristotle or some other ancient philosopher´s speculative musings, as if that should somehow count as relevant evidence for an empirical claim.<br />
<br />
Amongst people opposed to “standard economics” there seems to be a similar thing going on - to me, the <a href="http://www.othercanon.org/papers/tree.html">family tree</a> of the “other canon” project seems a clear example. <br />
<br />
And in Berg and Gigerenzer´s paper, the “wrong turn” of economics is identified as the<br />
<blockquote>fundamental shift in economics which took place from the beginning of the twentieth century: the ‘Paretian turn’. This shift, initiated by Vilfredo Pareto and completed in the 1930s and 1940s by John Hicks, Roy Allen and Paul Samuelson, eliminated psychological concepts from economics by basing economic theory on principles of rational choice.</blockquote>You could choose to ignore this kind of stuff - see it as narratives that help provide groups of people with a feeling of connection to a larger tradition and that place their work and struggles into a larger storyline of good and bad. But seriously… it´s just stupid.<br />
<br />
More than stupid, I see this as a real problem in that it raises as a significant and important issue something which is irrelevant to the evaluation of scientific claims. <b><i>Nobody has a hotline to truth!</i></b> I don´t care how smart you are or how often you´ve been right before - even the smartest people in the world can be misguided and confused and incorrect. Their claims must be evaluated and confronted with evidence, and if they´re wrong they´re wrong and we move on.Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0tag:blogger.com,1999:blog-8329357786401012777.post-85269054804319185932011-05-10T02:01:00.000-07:002011-05-10T02:01:01.117-07:00Is behavioral economics a flawed band-aid on the neoclassical enterprise?I finally got around to reading the paper “<a href="http://www.google.no/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.utdallas.edu%2F%7Enberg%2FBerg_ARTICLES%2FBergGigerenzerBEisNeoclassicalInDisguiseHEI.pdf&rct=j&q=As-if%20behavioral%20economics%3A%20Neoclassical%20economics%20in%20disguise%3F&ei=mVbITeudIYueOtfCne0B&usg=AFQjCNHqr_3H0F8cEi03Q8AXKc0rgBbZ7g&sig2=D3rkIN6J82GWce5m-mlPcQ&cad=rja">As-if behavioral economics: Neoclassical economics in disguise?</a>” by Nathan Berg and Gerd Gigerenzer this past Easter holiday. I found it enjoyable, often insightful, and somewhat confused. It contained a lot of stuff, so I´ll split this into several parts.<br />
Today I´ll merely go through the overall “story” they seem to be operating from. This isn´t the “storyline” of the paper, but rather the story as I can piece it together from the clues they scatter throughout the paper.<br />
Their story is that economics was a sensible science informed by psychological science until an Italian economist called Pareto turned it into the current, neoclassical “monster” we have today. <br />
<blockquote>a fundamental shift in economics which took place from the beginning of the twentieth century: the ‘Paretian turn’. This shift, initiated by Vilfredo Pareto and completed in the 1930s and 1940s by John Hicks, Roy Allen and Paul Samuelson, eliminated psychological concepts from economics by basing economic theory on principles of rational choice.</blockquote>This new framework assumed that people´s stable preferences can be described by a mathematical utility function such that any good (provided in sufficient quantities) can fully compensate for a reduction in any other good.<br />
<blockquote>If, for example, x represents a positive quantity of ice cream and y represents time spent with one’s grandmother, then as soon as we write down the utility function U(x, y) and endow it with the standard assumptions that imply commensurability, the unavoidable implication is that there exists a quantity of ice cream that can compensate for the loss of nearly all time with one’s grandmother.</blockquote>In addition, this framework built up an axiomatic, logical theory of normative rationality centered around internal consistency. That is to say, they argued that people should have transitive preferences, conform to expected utility axioms and have Bayesian beliefs. <br />
This was actually just an unsupported (and in Berg and Gigerenzer´s view, false) assumption, in that they never even attempted to establish that such rules would lead to better outcomes in the real world. <br />
<blockquote>Expected utility violators and time-inconsistent decision makers earn more money in experiments (Berg, Johnson, Eckel, 2009).</blockquote>Because this theory completely misspecified how people make choices and process beliefs, it became necessary to ignore the realism of the assumptions. For this reason, they turned to the “as-if” methodology that they saw Friedman as having preached: All models are only to be evaluated in terms of how well they predict - and the realism of the assumptions is irrelevant. They describe this as<br />
<blockquote>the Friedman as-if doctrine in neoclassical economics focusing solely on outcomes.</blockquote>This did not fully solve the underlying problem: Since people do not choose in this way, predictive ability was poor. Behavioral economists initially wanted to tackle the root of the problem by reintroducing realism (psychology) into the description of consumer behavior. After a while, though, they were instead reduced to adding bells and whistles of various kinds to patch up the existing formal framework so that it would better predict in an as-if sense. <br />
<blockquote>Instead of asking how real people—both successful and unsuccessful—choose among gambles, the repair program focused on transformations of payoffs (which produced expected utility theory) and, later, transformations of probabilities (which produced prospect theory) to fit, rather than predict, data. The goal of the repair program appeared, in some ways, to be more statistical than intellectual: adding parameters and transformations to ensure that a weighting-and-adding objective function, used incorrectly as a model of mind, could fit observed choice data.</blockquote>Their work, by introducing further complications into the choice models, actually made things worse - in that they made the resulting “theory” of human choice even less plausible.<br />
<blockquote>Leading models in the rise of behavioral economics rely on Friedman’s as-if doctrine by putting forward more unrealistic processes—that is, describing behavior as the process of solving a constrained optimization problem that is more complex—than the simpler neoclassical model they were meant to improve upon.</blockquote>On the normative side, most behavioral “epicycles” that were introduced came to be seen as biases and flaws that needed nudging and paternalistic regulation. <br />
<blockquote>To these writers (and many if not most others in behavioral economics), the neoclassical normative model is unquestioned, and empirical investigation consists primarily of documenting deviations from that normative model, which are automatically interpreted as pathological. In other words, the normative interpretation of deviations as mistakes does not follow from an empirical investigation linking deviations to negative outcomes. The empirical investigation is limited to testing whether behavior conforms to a neoclassical normative ideal.</blockquote>Finally, perhaps in an effort to avoid revealing how poor both the neoclassical and behavioral models actually are, the bar for predictive success was lowered even further by turning it into an exercise in fitting models to existing data rather than an exercise in making successful out-of-sample predictions.<br />
<blockquote>Behavioral models frequently add new parameters to a neoclassical model, which necessarily increases R-squared. Then this increased R-squared is used as empirical support for the behavioral models without subjecting them to out-of-sample prediction tests.</blockquote>That´s the story as I read it, and the authors continue to describe their view of what they think should be done. But that will have to wait for another time.Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com1tag:blogger.com,1999:blog-8329357786401012777.post-67378862282032311832011-05-09T04:58:00.001-07:002011-05-09T04:58:45.179-07:00How convinced should we be of an economic theory that is “consistent with empirical data”?<p>What follows is not rocket science, and probably not 100% correct, but: When we say that “empirical tests” support an economic theory, does this mean the theory is probably right? More specifically, what I want to explore is whether there is a simple way of stating the issue so that we don’t ignore the base-rate.</p> <p>An example of how much the way we state this issue matters comes from medical decision-making: There are a number of screening programs in place to identify people with medical conditions that can be harmful, and research on medical decision-making shows that doctors seriously misinterpret positive results from such tests. Simply put, test results are “misleading” when a test with even a low error rate is used to search for a rare condition in the general population: The small error rate multiplied by the huge number of healthy people gives you the lion’s share of those flagged as “positive” by the test. 
</p> <p>An example from a <a href="http://opinionator.blogs.nytimes.com/2010/04/25/chances-are/">nice write-up</a> of this issue shows how difficult the issue is to understand when stated in probabilities:</p> <blockquote> <p>In one study, Gigerenzer and his colleagues asked doctors in Germany and the United States to estimate the probability that a woman with a positive mammogram actually has breast cancer, even though she’s in a low-risk group […]:</p> <p><em>The probability that one of these women has breast cancer is 0.8 percent.  If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram.  If a woman does </em>not<em> have breast cancer, the probability is 7 percent that she will still have a positive mammogram.  Imagine a woman who has a positive mammogram.  What is the probability that she actually has breast cancer?</em></p> <p>Gigerenzer describes the reaction of the first doctor he tested, a department chief at a university teaching hospital with more than 30 years of professional experience:</p> <p>“[He] was visibly nervous while trying to figure out what he would tell the woman.  After mulling the numbers over, he finally estimated the woman’s probability of having breast cancer, given that she has a positive mammogram, to be 90 percent.  Nervously, he added, ‘Oh, what nonsense.  I can’t do this.  You should test my daughter; she is studying medicine.’  He knew that his estimate was wrong, but he did not know how to reason better.  Despite the fact that he had spent 10 minutes wringing his mind for an answer, he could not figure out how to draw a sound inference from the probabilities.”</p> <p>When Gigerenzer asked 24 other German doctors the same question, their estimates whipsawed from 1 percent to 90 percent.   Eight of them thought the chances were 10 percent or less, 8 more said 90 percent, and the remaining 8 guessed somewhere between 50 and 80 percent.  
Imagine how upsetting it would be as a patient to hear such divergent opinions.</p> <p>As for the American doctors, 95 out of 100 estimated the woman’s probability of having breast cancer to be somewhere around 75 percent.</p> <p>The right answer is 9 percent.</p> </blockquote> <p>The twist in the story comes from how easy this is to get right if you phrase the exact same question in a “natural frequencies” format:</p> <blockquote> <p><em>Eight out of every 1,000 women have breast cancer.  Of these 8 women with breast cancer, 7 will have a positive mammogram.  Of the remaining 992 women who don’t have breast cancer, some 70 will still have a positive mammogram.  Imagine a sample of women who have positive mammograms in screening.  How many of these women actually have breast cancer?</em></p> </blockquote> <p>My question is whether this format can be adapted to the case of empirical testing of a theory. We have three main terms that need to be “adapted”:</p> <ul> <li>Risk of false negatives – How likely is it that the theory will be rejected if it is actually true? Let us say this is quite unlikely (2%)</li> <li>Risk of false positives – How likely is it that the theory will be supported if it is actually false? This depends on how “observationally equivalent” it is to the true theory. Take rational addiction theory as an example: <a href="http://flyunderthebridge.blogspot.com/2005/06/got-milk-addiction.html">One article</a> argues that consumption with a trend often will test positive for rational addiction even though there is no rational, forward-looking planned change in tastes going on. I find trended consumption far more plausible, so let us put the likelihood of “trended consumption or some other non-rational addiction mechanism is actually present and testing positive by mistake” at 40%</li> <li>“Base-rate” – In medicine, this is the known prevalence of the disease in the population being tested. 
In our case it is not easily interpretable – but ask yourself, for instance, “how likely do I think it is that real junkies and cigarette smokers are gradually implementing a forward-looking plan for changing their own tastes, and that this is the reason their use of cigarettes, heroin or whatever is gradually increasing?” Let us say we put this at 2%. This does sound both speculative and “science-fiction”ish, but could we interpret this as saying “of all the possible universes that would have unfolded consistently with our current history and experiences – in how many of these do we think real junkies and cigarette smokers [….]”?</li> </ul> <p>If we think this sounds OK, we could try something along the lines of:</p> <p><em>My feeling/guess is that only 20 out of 1000 universes we might be living in would have rational addicts. In (essentially) all 20 of these universes rational addiction theory would do well in testing. Of the remaining 980 universes that do not contain rational addicts, some 392 will test positive. Imagine that our current test-results indicate that we live in one of the 412 universes that test positive for rational addiction. How likely is it that there really are rational addicts?</em></p> <p>This is (I think) quite basic Bayesian updating, so the whole “new” thing here is the attempt to rephrase it in a way that makes the base-rate point obvious: After positive test-results, the likelihood that we are living in the rational addiction world would be 4.8% – higher than 2% (our starting estimate) – but still very low.</p> <p>(Of course – you may quibble with the numbers I put on it – in fact, so would I – but they’re just there to have something to put into the format I was testing)</p> Anonymoushttp://www.blogger.com/profile/02388415117249020295noreply@blogger.com0
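The mammogram example and the rational-addiction thought experiment are the same natural-frequency calculation, which a few lines of code make explicit (the function name is mine, and the 2%/98%/40% figures just mirror the post's admittedly made-up guesses):

```python
def posterior(base_rate, true_pos, false_pos):
    """P(theory true | positive test), computed natural-frequency style."""
    true_hits = base_rate * true_pos          # cases that are true AND test positive
    false_hits = (1 - base_rate) * false_pos  # cases that are false but test positive
    return true_hits / (true_hits + false_hits)

# Mammogram: 0.8% prevalence, 90% sensitivity, 7% false-positive rate.
print(round(posterior(0.008, 0.90, 0.07), 2))  # -> 0.09, "the right answer is 9 percent"

# Rational addiction: 2% prior, 98% chance of a positive test if true
# (1 minus the 2% false-negative risk), 40% chance of a spurious positive.
print(round(posterior(0.02, 0.98, 0.40), 3))   # -> 0.048, i.e. roughly the 4.8% in the post
```

The second number differs trivially from the 20-out-of-412 universes count only because the worked example rounds all 20 "true" universes to testing positive.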