The New York Times recently published a sensationalist piece bemoaning the low wages one writer managed to earn on Amazon Mechanical Turk ($0.97 per hour).
This is not the first time Amazon Mechanical Turk (MTurk) has come under fire. The participant recruitment platform Prolific has called on researchers to stop using MTurk (and to use its platform instead), citing a number of factors, including a perceived lack of ethics surrounding low wages.
This debate about wages suffers from a number of problems, inaccuracies, and misappropriated blame, as eloquently argued in this article by TurkerView, whose ultimate conclusion is that MTurk is the most ethical way to recruit crowd workers.
In the article below, I'll introduce some further nuances that reveal complexities beyond simple hourly rates of pay.
Whilst Prolific sets a floor price that enforces a minimum wage, the fact that MTurk does not set a floor price can't be used to blame MTurk for underpaying workers. It is the responsibility of the employer (i.e., the researcher) to ensure fair remuneration: it's not MTurk that sets the rate, it's the researcher. In sum, the ethical responsibility for offering fair incentives for participation lies with the researcher who conducts studies on MTurk.
Author's note: Positly uses MTurk (amongst other sources) for recruiting participants, and also sets participant remuneration; however, this is done on a case-by-case basis.
Good question! That all depends on where you live. The federal minimum wage in the US is US$7.25 per hour. Currently, the consensus seems to be that somewhere between US$6.50 (Prolific) and US$7.25 (New York Times, others) per hour is a fair minimum wage. However, this is a very US-centric view.
For example, if an employer in Australia tried to pay US$7.25 per hour, such an action would be labelled wage theft and the employer would be prosecuted (the minimum wage in Australia is AU$19.49 per hour, currently slightly above US$13.00). Conversely, in many countries, the minimum wage is much lower than in the US.
If an hourly rate is set, participants will be paid based on an estimate of how long a task takes. Who estimates how long the task should take? The researcher. It's simply human nature to be biased and underestimate the time it takes to do a task, particularly when paying for that time.
Who pays for the time it takes to move on from one task, to the start of another? No one. Does the minimum wage set by platforms or researchers assume zero latency between tasks? Yes. Most tasks last less than an hour; in fact, many are on the scale of a few minutes, whilst the latency between tasks is regularly on the scale of minutes. Participants are not being paid for this latency. With a latency of minutes between tasks, and tasks that only take minutes to complete, there is a lot of unpaid "between" time with the cost borne by the participants.
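The effect of this unpaid "between" time on real earnings is easy to quantify. The sketch below uses hypothetical numbers (a 6-minute task, 4 minutes of unpaid latency) purely for illustration:

```python
# Illustrative sketch with assumed, hypothetical numbers: how unpaid
# latency between tasks erodes an advertised hourly rate.

def effective_hourly_rate(pay_per_task, task_minutes, latency_minutes):
    """Effective wage once the unpaid gap between tasks is counted."""
    total_minutes = task_minutes + latency_minutes
    return pay_per_task * 60 / total_minutes

# A 6-minute task paid at a nominal $7.25/hour ($0.725 per task)...
nominal = effective_hourly_rate(0.725, 6, 0)   # 7.25 — the advertised rate
# ...with 4 unpaid minutes between tasks drops to:
actual = effective_hourly_rate(0.725, 6, 4)    # 4.35 — well below minimum wage
```

In other words, a few minutes of unpaid latency on a few-minute task can cut the effective wage by 40% or more, even when the advertised per-task rate looks fair.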
Volatility in the flow of tasks available to participants (particularly on the smaller platforms) means that even when all other issues are taken care of, the chance of earning any mandated minimum wage figure is remote.
Outside of research participation (as in non-research tasks on MTurk), volatility still exists unless participants have a highly sought-after skill or where their skills align with business needs at crunch times (e.g., end-of-financial-year business reporting, last-minute data entry and analysis).
Another risk is that of nonnaivete. Professional survey takers may well be nonnaive to the very strategies, questions and behavioral techniques researchers are trying to assess, resulting in a practice effect bias.
Non-wage incentives have been shown to be successful in avoiding the risks mentioned above. These incentives include entry into a prize draw, social/ethical/moral reasons, scientific curiosity (where results/findings are supplied), self-insight, and donations to a charity of choice.
Obviously, most of these alternative incentives are not viable on recruitment platforms. However, they are worth considering, especially if you have access to a convenience sample (e.g., university students).
Compensation for participants recruited via recruitment services is currently the subject of heated ethical debate. However, there are many hard problems, logistical difficulties, misappropriations of blame, and pieces of misinformation surrounding the issue. In this article, I've outlined the factors that I think make the issue far more complex than the current debate might suggest.
Is it time to rethink incentives for research participation? I think so.
What do you think? Let me know on Twitter.