Brendan O'Connor from Dolores Labs posted two excellent visualizations (1, 2) of durations of tasks on Mechanical Turk.
By looking at these images we can easily see how different workers behave on Mechanical Turk: some work fast and systematically, others are slower. These visualizations allow us to group workers into different categories.
If we have the "correct" response for each HIT (see the excellent post on the topic by Bob Carpenter), then we can augment the visualization by coloring each HIT accordingly, and we can easily see which workers are fast, systematic, and give correct answers.
Given such data, we can also examine the dual problem: How easy is it to work on particular HITs? Some HITs are going to be harder than others, will take longer to complete, and will have higher error rates.
Notice, though, that while for annotators we would like to avoid the ones with higher error rates, for HITs it may be beneficial to get the correct answers for the difficult cases. In fact, it may make sense to allow only high-quality workers to work on the hard HITs, and let the "low quality" annotators work on the easy HITs. Devising an optimal HIT-worker allocation strategy seems to be an interesting problem for future research.
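As a rough illustration of this idea, here is a minimal sketch (with hypothetical worker accuracies and HIT difficulties, which in practice would be estimated from gold-standard answers) of a greedy allocation that pairs the most accurate workers with the hardest HITs:

```python
# Hypothetical estimates: worker -> accuracy, HIT -> difficulty.
workers = {"w1": 0.95, "w2": 0.80, "w3": 0.60}
hits = {"h1": 0.9, "h2": 0.5, "h3": 0.2}

def allocate(workers, hits):
    """Greedy allocation: rank workers by accuracy and HITs by difficulty,
    then pair them so the most accurate worker gets the hardest HIT."""
    ranked_workers = sorted(workers, key=workers.get, reverse=True)
    ranked_hits = sorted(hits, key=hits.get, reverse=True)
    return dict(zip(ranked_hits, ranked_workers))

assignment = allocate(workers, hits)
print(assignment)  # hardest HIT paired with most accurate worker
```

This is only a sketch; a real allocation strategy would also need to handle redundancy (multiple workers per HIT), worker availability, and the uncertainty in the accuracy estimates themselves.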