I just finished reading the paper "Inefficient Hiring in Entry-Level Labor Markets" by Amanda Pallais, an assistant professor of Economics at Harvard University.
This is the first paper I have read that provides experimental evidence that labor markets are "not efficient" in the following way: when a worker is new, or has no known work history, employers do not know what the worker can and cannot do. Most employers will not hire this worker because of that lack of information. And since the worker is never hired, nobody can leave feedback about the worker's performance. This creates a vicious cycle for new entrants, who cannot break into the market because they have no feedback, and cannot get feedback because they cannot get into the market.
While this phenomenon is well known, it was not obvious that the lack of feedback is what causes the inefficiency. The alternative explanation was that good workers eventually find work, and bad workers simply do not get jobs because they do not even know how to apply and enter the market effectively.
What Amanda did was pretty interesting: she ran a randomized experiment. She used oDesk and opened a position for data entry, a job that required pretty much no special skills. She received approximately 3,000 applications and randomly hired 1,000 of the applicants. The 2,000 non-hired workers formed the "control" group. She then split the 1,000 hired workers into two groups: one received detailed public feedback and evaluation, the other received generic, uninformative feedback (e.g., "Good work"). Given the randomized assignment, any differences in how the workers fared afterwards can be attributed to the treatments in this controlled field experiment.
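To make the design concrete, here is a minimal Python sketch of the assignment logic. The names are hypothetical, and the 50/50 split between the two feedback arms is an assumption; the post only fixes the group totals.

```python
import random

applicants = [f"worker_{i}" for i in range(3000)]  # ~3,000 applications (per the post)
random.seed(42)                                    # fixed seed so the assignment is reproducible
random.shuffle(applicants)

hired = applicants[:1000]        # treatment: the ~1,000 randomly hired workers
control = applicants[1000:]      # control: applied but never hired

# Split the hired workers into the two feedback arms; the 50/50 split
# here is an assumption, since the post only gives the group totals.
detailed_feedback = hired[:500]  # arm 1: detailed public evaluation
generic_feedback = hired[500:]   # arm 2: uninformative feedback ("Good work")

# Because assignment is random, later differences in employment and
# earnings across the groups can be attributed to the treatments.
```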
The results were revealing:
- Workers randomly selected to receive jobs were more likely to be employed, requested higher wages, and had higher earnings than control group workers.
- In the two months after the experiment, inexperienced workers' earnings approximately tripled as a result of obtaining a job.
- Providing workers with more detailed evaluations substantially increased their earnings and the wages they requested.
- The benefits of detailed evaluations were not universal: detailed performance evaluations helped those who performed well and hurt those who performed poorly.
Even more notably, the benefit to the workers who received the "you get a job" treatment did not come at the expense of other workers. Overall employment increased, and the money that was "wasted" to conduct the experiment (the tasks were not useful to anyone) generated enough return to cover its cost.
In principle, oDesk may want to engage in such "wasteful" hiring just to bootstrap new workers with some meaningful feedback in their profiles: when you create an account at oDesk, you get a random job (whose output nobody cares about), and the quality of the submitted work is evaluated to generate meaningful feedback for the worker (e.g., "great at setting up a MapReduce task on Amazon Web Services").
Or, perhaps, they can skip the wasteful part and use crowdsourcing as a perfectly valid mechanism for generating this valuable public feedback, by letting people do actual work.
Crowdsourcing as a solution to the cold-start problem
Note how this need for early feedback, which workers require in order to enter the market, naturally leads to crowdsourcing as a solution to the entry problem.
If getting a job is the blocker for starting your career, then crowdsourcing allows new entrants to pick jobs without having to worry about the interview process. Just pick an available task and do it.
The findings of the study also suggest that crowdsourcing by itself is not enough. Any crowdsourcing application that provides jobs should be accompanied by a detailed feedback/scoring system. For example, if the crowdsourcing platform is about, say, translation, then there should be public feedback that lists the tasks the person completed (which language pairs, etc.) along with the corresponding performance statistics (e.g., time taken to complete the task, quality of the outcome, etc.).
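As a rough sketch of what such a public record could look like for a translation platform, here is one possible data structure; all the field and type names are hypothetical, not from any actual platform:

```python
from dataclasses import dataclass, field

@dataclass
class TaskFeedback:
    """Public, per-task feedback for a completed translation task."""
    task_id: str
    language_pair: tuple[str, str]   # e.g., ("en", "fr")
    minutes_to_complete: float       # time taken to finish the task
    quality_score: float             # e.g., a reviewer rating in [0, 1]

@dataclass
class WorkerProfile:
    """The worker's public track record, built one task at a time."""
    worker_id: str
    completed_tasks: list[TaskFeedback] = field(default_factory=list)

    def average_quality(self) -> float:
        # An aggregate statistic a future employer could inspect.
        if not self.completed_tasks:
            return 0.0
        return sum(t.quality_score for t in self.completed_tasks) / len(self.completed_tasks)

# Each completed crowdsourcing task appends an entry, building exactly
# the public history that new entrants otherwise lack.
profile = WorkerProfile("worker_42")
profile.completed_tasks.append(
    TaskFeedback("t1", ("en", "fr"), minutes_to_complete=35.0, quality_score=0.9)
)
print(profile.average_quality())  # 0.9
```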
In a setting like this, crowdsourcing becomes not a novelty item but an integral part of any labor platform, facilitating the entry of new workers. It is not a place where jobs get done on the cheap. It is the place that generates information about the quality of the workers, which in turn makes the workers more valuable to firms.
Should crowdsourcing firms receive favorable treatment from the government?
So, if crowdsourcing tasks that generate *public* feedback about the performance of the participating workers benefit the workers, future employers, and society overall (by decreasing unemployment), the question is: why not encourage companies to make more of their work available in this format? While a service like Mechanical Turk would not qualify (workers are anonymous and have no reputations), other services that generate useful public information could be the focus of favorable legislation and/or tax treatment.
Perhaps it is time to give to crowdsourcing the attention and stature it deserves.