The Mentaculus has a nice list with good explanations/definitions of different types of biases, based on a recent article by David Chavalarias and John Ioannidis. I haven't read the underlying article, but the list provides a nice summary of biases of very different sorts that affect different parts of the scientific process and can lead a field to report misleading results. Below, I've taken the items from that list and merely shuffled them into a set of categories reflecting different parts of the research process:
- Deciding on design of study/statistical model
  - confounding bias = when you think you are measuring the effect of variable X on variable Y, but in reality there is another variable Z that correlates with X and also affects Y, which you haven't considered (a simulation sketch follows this list).
- Deciding on who to collect data from
  - selection bias = when you think all the various sub-groups of the population are proportionally represented in your sample, but in reality certain groups are over-represented relative to their share of the population because of the way you collect your data.
  - sampling bias = when you think your sample is representative of the population, but really it is not, because it is skewed in ethnicity, attractiveness, age, gender, or the like, casting doubt on your generalizations from the sample to the population. (This is actually a sub-category of selection bias, turning on the distinction between external and internal validity, which sounds cool but also troublesomely postmodern.)
- Evaluating data quality
  - response bias = when respondents answer your questions the way they think you want them answered, rather than according to their true beliefs; this could also happen in animal research if you reward animals for responding in a certain way outside of the main test.
  - recall bias = when respondents are more likely to remember (and report) the things you ask about if they hold a certain belief about them.
- Researcher’s beliefs
  - attention bias = when you focus only on data that supports your hypothesis and ignore data that would make your hypothesis less likely.
  - publication bias = when you are more likely to publish or tell others about your results if they 1) conform to what you expect, or 2) are what you think others would prefer to hear.
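To make the confounding entry concrete, here is a minimal simulation sketch in Python (numpy only; the variables x, y, z simply mirror the X, Y, Z of the definition, and the effect sizes are arbitrary assumptions of mine). X has no direct effect on Y at all, yet a regression of Y on X alone reports a sizeable slope because Z drives both; adding Z to the regression makes the spurious effect vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)            # the confounder (often unmeasured)
x = z + rng.normal(size=n)        # X correlates with Z
y = 2.0 * z + rng.normal(size=n)  # Y is driven by Z; X has no direct effect

# Naive simple regression of Y on X: a clearly nonzero slope, entirely via Z
naive_slope = np.polyfit(x, y, 1)[0]

# Multiple regression of Y on both X and Z: X's coefficient collapses to ~0
design = np.column_stack([x, z, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

print(f"slope of X, ignoring Z:      {naive_slope:.3f}")  # ~ 1.0 (spurious)
print(f"slope of X, adjusting for Z: {coef[0]:.3f}")      # ~ 0.0
```

Of course, "adjust for Z" only works if you thought to measure Z in the first place, which is exactly what the definition says goes wrong.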
It seems to me there would be others worth considering as well, such as (off the top of my head):
- Biased research questions – Say you are for or against some activity that (like most things) has both positive and negative aspects (e.g. tobacco smoking, climate regulation, free trade, cannabis use, the internet). You narrow down and specify your research problem and outcome measure so as to pick up only effects that go in your preferred direction, and then present it as though it were a comprehensive or broad outcome measure or representation of the problem.
- Biased analysis – when you (due to presumably common psychological mechanisms) try new model specifications whenever you are "not satisfied" (i.e., don't get the results you like), and keep running new regressions, new models, and new methods until you get significant effects in your desired direction (see the sketch after this list).
- Publication bias, type 2 – when editors and referees impose different burdens of proof depending on whether or not they agree with a piece of research. (This is particularly problematic when there is a lack of consensus in the discipline; if everyone gets results in one direction and a new submission doesn't, then it does make sense to ask for extraordinary evidence for extraordinary claims.)
- Biased data – if the data was collected for some purpose other than research and only later employed for research, this may have affected the incentives the original source had to report truthfully. E.g., tax reports may underestimate income from sources that are easy to shield from the IRS, and some parts of administrative forms are filled out only because "they have to be filled out," even though no one uses the information much, making the data of poor quality.
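The "biased analysis" entry is easy to demonstrate by simulation. Below is a hedged sketch in Python (numpy only; the sample size, the number of tries, and the simplification that each "new specification" is just a fresh noise predictor are my assumptions, not anything from the article). An analyst who keeps trying specifications on pure noise until something clears p < 0.05 will "succeed" most of the time.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50         # observations per "study"
tries = 20     # how many specifications the analyst is willing to try
t_crit = 2.01  # ~ two-sided 5% critical value for a t-distribution, 48 df

projects = 1_000
hits = 0
for _ in range(projects):
    y = rng.normal(size=n)  # the outcome is pure noise
    for _ in range(tries):
        x = rng.normal(size=n)  # each "new model" = a fresh noise predictor
        r = np.corrcoef(x, y)[0, 1]
        t = r * np.sqrt((n - 2) / (1 - r * r))  # t-statistic for a correlation
        if abs(t) > t_crit:
            hits += 1   # stop searching and report the "significant" result
            break

print(f"'significant' effect found in {hits / projects:.0%} of pure-noise projects")
# With 20 independent tries at alpha = 0.05, expect about 1 - 0.95**20 ≈ 64%,
# versus the nominal 5% a single pre-specified test would give.
```

The fix is the mirror image of the bias: decide on the specification before looking at the results, and count every test you ran when interpreting the p-values.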