Monday, November 30, 2009

Anchoring and Mechanical Turk

Over the last few days, I have been reading the book Predictably Irrational by Dan Ariely, which (predictably) describes many of the biases we exhibit when making decisions. These biases are not artifacts of random chance; they are expected and predictable. Such biases, and the "irrationality" of human agents more generally, are a central focus of behavioral economics; they have also been studied extensively in cognitive psychology, which examines how humans process information.

One of the classic biases is "anchoring". In his book, Dan Ariely shows how he got students to bid higher or lower for a particular bottle of wine: he asked them to write down the last digit of their Social Security number before placing the bid. As anchoring theory predicts, students who wrote down a lower digit ended up bidding lower than students with a higher last digit in their SSN.

Why? Definitely not because the last digit revealed anything about their character. It was because the students got "anchored" to the value of the digit they wrote down. I am fairly certain that the experiment could be repeated using, say, the middle two digits as the anchor, and the results would be similar.

Interestingly enough, at the same time that I was reading the book, I was contacted by Gabriele Paolacci, a PhD student in Italy. On his blog, Experimental Turk, Gabriele has been replicating classic cognitive psychology experiments that illustrate such biases. As you might have guessed already, he has been running these experiments on Amazon Mechanical Turk. Gabriele tested the theory of anchoring on Mechanical Turk by replicating a study from a classic paper. In his own words:

We submitted the “african countries problem” from Tversky and Kahneman (1974) to 152 workers (61.2% women, mean age = 35.4). Participants were paid $0.05 for a HIT that comprised other unrelated brief tasks. Approximately half of the participants were asked the following question:

  • Do you think there are more or less than 65 African countries in the United Nations?

The other half was asked the following question:

  • Do you think there are more or less than 12 African countries in the United Nations?

Both groups were then asked to estimate the number of African countries in the United Nations.

As expected, participants exposed to the large anchor (65) provided higher estimates than participants exposed to the small anchor (12), F(1,150) = 55.99, p < .001. Therefore, we were able to replicate a classic anchoring effect: our participants’ judgments were biased toward the implicitly suggested reference points. It should be noted that the means in our data (42.6 and 18.5, respectively) are very similar to those recently published by Stanovich and West (2008; 42.6 and 14.9, respectively).

References

Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94, 672-695.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
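As a side note, for readers curious about what this kind of analysis looks like in practice, here is a minimal sketch in Python. The data below are made up (I simply seeded the simulation with the reported group means and an arbitrary spread); only the design, two anchor conditions compared with a one-way ANOVA, follows the study quoted above.

```python
# A minimal sketch of the between-subjects anchoring analysis.
# All numbers here are hypothetical; only the design (two anchor
# conditions, one-way ANOVA) mirrors the study described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical estimates of the number of African countries in the UN,
# from participants exposed to the high anchor (65) and low anchor (12).
high_anchor = rng.normal(loc=42.6, scale=15, size=76).clip(min=0)
low_anchor = rng.normal(loc=18.5, scale=15, size=76).clip(min=0)

# One-way ANOVA across the two anchor conditions; with only two groups
# this is equivalent to an independent-samples t-test (F = t^2).
f_stat, p_value = stats.f_oneway(high_anchor, low_anchor)

print(f"high-anchor mean = {high_anchor.mean():.1f}")
print(f"low-anchor mean  = {low_anchor.mean():.1f}")
df_within = len(high_anchor) + len(low_anchor) - 2
print(f"F(1, {df_within}) = {f_stat:.2f}, p = {p_value:.3g}")
```

With two groups, the numerator of the F statistic has a single degree of freedom, which is why the result is reported as F(1, 150) for 152 participants.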

Gabriele has more experiments posted on his blog, and I am looking forward to the ones that follow.

So, here is a question: we should certainly take such biases into consideration when collecting data from humans and when conducting user studies. But in a more general setting, can we use these biases productively, to get users to complete tasks that are useful?