Detect and prevent fraudulent participants from ruining your research by employing these simple, actionable techniques backed up by science.
By using the methods and practices presented throughout this article, you can significantly increase the data quality of your research surveys and behavioral experiments.
This article explains what prescreen lying is and why screening fraud occurs, and provides practical, concrete methods for detecting and preventing this type of fraud.
Prescreen lying is a particular type of fraud in which participants knowingly enter false information either to a) meet the eligibility criteria of a study, or b) maximize their chances of meeting the eligibility criteria.
Even low rates of fraudulent participation can significantly increase sampling error, and data quality can be damaged spectacularly when prescreen fraudsters are truthful in the study itself (e.g. a male who lies about being female yet answers questions truthfully from a male perspective).
Typically, the motive for lying during prescreening is financial gain. Fraud of this type is to be expected (Jones, House, & Gao, 2015), and it occurs more often when the financial reward is higher and when the population being recruited is rarer, because competition between participants for access to the study is lower (Chandler & Paolacci, 2017).
Preventing fraudulent participants from entering studies
Preventing participants who lie about their eligibility from entering our studies is the most effective and least onerous way of ensuring we achieve the best data quality where fraud is concerned. Prevention is better than cure, and all that.
Warnings and commitments
Warnings and direct requests not to lie or cheat can help to reduce the incidence of fraud in surveys and experiments. Warnings can be placed in a consent form or on a dedicated screen for more impact. Teitcher et al. (2015) advise warning participants that they will not be compensated if fraudulent behavior is detected. For example:
If fraudulent behavior is detected you will NOT be compensated.
The researchers also suggest warnings that fraud may be reported to authorities could be effective (e.g. to the FBI Internet Crime Complaint Center in the United States).
Clifford and Jerit (2016) tested methods for reducing the rate of cheating in online surveys and found that warnings and direct requests were much less effective than asking participants for a commitment (requiring a yes or no response). The most effective approach, therefore, may be to ask participants to commit to telling the truth rather than to demand that they do.
Figure 1. Commitment question example (Psychstudio, 2019)
Timing the disclosure of information
Disclosing incentives, compensation and financial rewards up-front, before a participant enters a study, provides most of the information a fraudulent actor needs to decide whether a study is worth gaming. Fraudsters are much more likely to bypass a study where compensation information is not immediately available (Teitcher et al., 2015). Three methods that can be used to delay disclosure of financial incentives are:
- Give compensation information only at the end of a study.
- Give compensation information only after participants have answered the eligibility criteria.
- Require participants to seek documentation and/or permission from the researcher to enter the study.
An additional method that can be used alongside, or in place of the methods mentioned, is to delay disclosing eligibility criteria or to omit them completely. This can be achieved by informing participants of their eligibility for the study only after completion of a prescreening questionnaire (without revealing eligibility criteria).
Questions that require research for fraudulent answers
Because the typical motivation for lying in prescreening is financial gain, questions that require research in order to provide a fraudulent answer are likely to help.
For example, let's suppose that one criterion for eligibility in our study is that the participant uses prescription eye-wear. We could ask follow-up questions such as:
What is the brand of your eye-glasses frame?
What is your prescription strength?
Now let's suppose a fraudulent actor entered our study by lying about their use of prescription eye-wear. Our follow-up questions will likely result in one of three possibilities:
- The participant responds by guessing.
- The participant drops out of the study.
- The participant does some research in order to supply a believable, yet fraudulent answer.
In most cases, those committing fraud for financial gain are likely to guess (thus producing a detectable fraud marker) or to drop out of the study completely because of the unacceptable opportunity cost introduced by the time required for research (Chandler, Mueller, & Paolacci, 2013; Clifford & Jerit, 2016). To maximize the effectiveness of this technique, it may be appropriate to introduce a number of these questions throughout the study.
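Guessed answers to verification questions like the prescription-strength example above can often be flagged automatically in post-processing. The sketch below checks whether a free-text answer even looks like a real prescription value; the plausibility rules (quarter-diopter steps, a typical sphere range) are illustrative assumptions, not clinical standards.

```python
def plausible_prescription(answer: str) -> bool:
    """Return True if the stated prescription strength looks like a real value.

    Assumed heuristic: real prescriptions come in 0.25-diopter steps within
    a typical range of +/-0.25 to +/-20.0 diopters.
    """
    try:
        diopters = float(answer.replace("+", ""))
    except ValueError:
        return False  # non-numeric guesses (e.g. "20/20", "strong") are flagged
    in_range = 0.25 <= abs(diopters) <= 20.0
    on_quarter_step = abs(diopters * 4 - round(diopters * 4)) < 1e-9
    return in_range and on_quarter_step

# Answers failing the check are fraud markers, not proof of fraud:
suspect = [a for a in ["-2.75", "20/20", "3.1"] if not plausible_prescription(a)]
# suspect == ["20/20", "3.1"]
```

A failed check is best treated as one marker to weigh alongside others (dropout, consistency failures) rather than grounds for exclusion on its own.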
There are many incentives other than financial compensation researchers can offer in return for participation in their study. Options include:
- Prize draws. Enter the participant into a randomly drawn prize or lottery.
- Charitable donations. Give a set amount of money to an approved charity on behalf of the participant.
- Self-insight. Provide the participant with personalised analysis of their responses.
- Scientific curiosity. Provide the participant with the data and/or findings of your research.
- Social, moral and ethical reasons. Participation, particularly in health- and education-related studies, sometimes stems from a participant's willingness to help, which is incentive enough.
Obviously, some of the methods mentioned above will work well in some circumstances and studies, and not at all in others. However, it helps to consider alternatives to simply paying participants if you want to deter fraudulent participation.
Author's note: Compensation for participants recruited through recruitment services is currently the subject of ethical debate. The debate centers on whether participants should be classified as workers and paid at least minimum wage for participation in research surveys and experiments. Participant payment has always been a tricky area to navigate because of its effect on data quality through issues such as prescreen lying, self-selection and multiple submissions.
One of the most effective methods for preventing fraudulent eligibility claims is to run an extensive, stand-alone eligibility criteria study to collect detailed demographics on every participant in a pool. This is more likely to produce truthful answers as it is not connected to any particular study where specific eligibility criteria are required. This technique may be somewhat impractical for individual researchers (unless they have some way of managing a participant pool over many studies), however, it is commonly implemented by market research panels and research participant recruitment services who use such data to direct participants to relevant studies.
Detecting prescreen lies
Despite our best efforts in preventing fraudulent participants from entering our studies, some will still make it past all prevention attempts. Luckily there are some methods we can use to try to detect their presence.
Consistency checks (Jones et al., 2015) are an effective method for detecting fraudulent answers to eligibility criteria. These checks are made up of two or more questions that can reveal inconsistencies in responses. For example:
What is your age (in years)?
What is your date of birth?
Post-processing of the responses to the consistency checks can then be used to flag suspected fraud. To maximize the effectiveness of consistency checks, place them relatively far apart in the survey.
Low-probability screening questions
A low-probability screening question contains response options that are most likely to be false. A range or scale of fraud probabilities can be constructed using a multiple choice question where multiple responses are allowed. With these questions, the probability of fraud rises with the number of options selected. In a study measuring the efficacy of low-probability screening questions, Jones et al. (2015) posed a question about fresh fruit purchases in the past year. Of the options given, it was extremely unlikely that any participant could have purchased more than one of the options presented. The probabilities applied to the options chosen were:
- Medium probability of fraud if at least two of the three options were selected.
- High probability of fraud if all three were selected.
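Scoring such a question is a simple mapping from the number of low-probability options selected to a suspected-fraud level, following the scheme described above. The option labels below are invented for illustration; only the thresholds come from the described scheme.

```python
# Hypothetical low-probability options: it is very unlikely any one
# participant genuinely qualifies for more than one of these.
LOW_PROB_OPTIONS = {"starfruit", "quince", "gooseberries"}

def fraud_level(selected: set) -> str:
    """Map the count of low-probability options chosen to a fraud rating."""
    n = len(selected & LOW_PROB_OPTIONS)
    if n >= 3:
        return "high"
    if n == 2:
        return "medium"
    return "low"

fraud_level({"quince", "gooseberries"})  # "medium"
```

Responses rated medium or high can then be cross-referenced with other fraud markers (consistency failures, engagement measures) before any exclusion decision.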
Figure 2. Low-probability question example suggesting medium probability of fraud (Psychstudio, 2019)
Jones et al. (2015) showed that participants recruited from Amazon's Mechanical Turk who selected the most options performed more poorly on engagement and rationality measures than participants who selected fewer options. They concluded that some experienced participants, motivated by financial gain, may have been trying to increase their chances of entering the study by selecting more options.
Unfortunately, where there is money to be made there will inevitably be fraudulent characters looking to take advantage. This article has provided a number of strategies that can be readily applied to almost any study to both detect and prevent fraudulent participants who lie about their eligibility.
With the rise of online participant recruitment platforms, researchers have more opportunity than ever to target specific demographics and sub-populations via online experiments. However, professional survey takers who enter studies fraudulently have their own online communities where they discuss methods for defeating prescreening tests, so researchers need to be creative and avoid common, guessable screening patterns.
- Chandler, J., Mueller, P., & Paolacci, G. (2013). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46(1), 112–130. doi: 10.3758/s13428-013-0365-7
- Chandler, J., & Paolacci, G. (2017). Lie for a dime: When most prescreening responses are honest but most study participants are impostors. Social Psychological and Personality Science, 8(5), 500–508. doi: 10.1177/1948550617698203
- Clifford, S., & Jerit, J. (2016). Cheating on political knowledge questions in online surveys. Public Opinion Quarterly, 80(4), 858–887. doi: 10.1093/poq/nfw030
- Jones, M., House, L., & Gao, Z. (2015). Respondent screening and revealed preference axioms: Testing quarantining methods for enhanced data quality in Web panel surveys. Public Opinion Quarterly, 79(3), 687–709. doi: 10.1093/poq/nfv015
- Psychstudio (2019). Psychstudio (Version 2019) [Computer software]. Australia. Retrieved from https://www.psychstudio.com
- Teitcher, J., Bockting, W., Bauermeister, J., Hoefer, C., Miner, M., & Klitzman, R. (2015). Detecting, preventing, and responding to “fraudsters” in Internet research: Ethics and tradeoffs. The Journal of Law, Medicine & Ethics, 43(1), 116–133. doi: 10.1111/jlme.12200