Why do participants drop out of online surveys and experiments?

Article by Ben Howell


Knowing why participants drop out of online questionnaires and behavioral experiments can help you implement strategies to reduce dropout rates and increase the power of your studies.

Employing measures to reduce dropout rates leads to better study design and more accurate predictions of the sample sizes required to achieve adequate statistical power.

This article discusses dropout and why it occurs, and presents strategies to reduce dropout rates, along with the pros and cons of each.


What is dropout?

Dropout is the non-completion of a study by a participant, and the rate is much higher for online studies compared to studies done in the lab. Factors such as anonymity, split attention, and lack of situational demand all contribute to your dropout rate, and studies with higher levels of difficulty and/or significant time commitments are particularly prone to increased dropout rates (Dandurand, Shultz, & Onishi, 2008).

To determine the dropout rate (a.k.a. attrition rate) of a study, divide the number of incomplete responses by the total number of participants who started the study. For example, in a study where 100 participants started and 40 of those participants failed to complete, the dropout rate would be 40/100, or 40%. Similarly, completion rate is calculated as the number of complete responses divided by the total number of participants who started the study.
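These rates are trivial to compute. Here is a minimal sketch in Python (the function and variable names are illustrative, not from any particular library):

    def dropout_rate(started, completed):
        """Proportion of participants who began the study but did not finish."""
        return (started - completed) / started

    def completion_rate(started, completed):
        """Proportion of participants who began the study and finished it."""
        return completed / started

    # Example from above: 100 started, 40 failed to complete (60 finished).
    print(dropout_rate(100, 60))     # 0.4 -> 40%
    print(completion_rate(100, 60))  # 0.6 -> 60%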

The results of the third study in Figure 1 show a dropout rate of 112/316, or approximately 35%.

Figure 1. Study stats (Psychstudio, 2019)

These simple calculations don't provide much insight as to when and where participants drop out, or any correlates of dropout behavior between participants. In a future article, we'll take a look at how to conduct a full dropout analysis.


Causes and countermeasures

Situational demand

Reducing situational demand can reduce dropout rates in both in-lab and online studies. Participants are far less likely to drop out of lab-based studies once they've read the consent form in the presence of a researcher, for reasons such as social pressure and feelings of obligation. However, situational demand tends to be far less pronounced in online settings because of anonymity and privacy. Participants are therefore more likely to exercise their free will in online studies, resulting in higher dropout rates (Dandurand et al., 2008).

Dropout due to lack of situational demand is a good thing, as it allows more genuinely voluntary participation. All else being equal, the dropout rate in online studies may be a decent proxy for the involuntary participation rate in similar lab-based studies (Hoerger, 2010). However, Crump, McDonnell, and Gureckis (2013) warn that dropout can affect the accuracy of dependent measures because low-performing individuals may be more likely to drop out. It is therefore recommended that experiments conducted online report their dropout rates.

Feedback

Michalak and Szabo (1998) suggest that offering feedback on individual performance and overall research findings as an incentive may reduce dropout rates. They also suggest that piloting and refining instructions, as well as providing contact information for questions, may lead to further reductions. However, offering feedback can also reduce the accuracy of results in studies where self-insight is a factor, because of participants' sensitivity to their own responses (Clifford & Jerit, 2015).

Incentive

Offering financial incentives can reduce dropout rates (Crump et al., 2013), with meta-analysis showing that such incentives can improve online survey completion rates by up to 27% (Göritz, 2006). Dropout among students can be almost entirely eliminated by offering course credit as an incentive (Dandurand et al., 2008).

Using financial incentives is not without risk, however. Konstan, Simon Rosser, Ross, Stanton, and Edwards (2005) found that offering such incentives increases the chance of multiple submissions, and Chandler and Paolacci (2017) warn that financial incentives increase the risk of random responses and prescreen fraud.

"Of 1,150 completed surveys, we rejected 124 (11%), including 119 (10%) that were repeat surveys, 65 (6%) of which came from the same individual participant – the person we call Subject Naught."

(Konstan, Simon Rosser, Ross, Stanton, & Edwards, 2005)

Engagement

Participants are more likely to abandon studies that are tedious and boring, so making experiments a little more fun and engaging may help reduce dropout rates (Crump et al., 2013).

Demographic questions

Asking for personal information (e.g., age, gender, address) at the beginning of a study, rather than at the end, has been shown to reduce the rate of participant dropout. For example, Frick, Bächtiger, and Reips (2001) found that asking for personal information early in an experiment resulted in a significant decrease in dropout rate, from 17.5% to 10.3%.

Frick et al. (2001) also found that the number of unanswered demographic questions dropped from 11.8% to 4.2% when asking for demographic information at the start of the study as opposed to the end. The placement of demographic questions had no effect on answering behavior in the study itself. A breakdown of the unanswered demographic question data is shown in the table below.

Rate of unanswered demographic questions at start and end of study (Frick et al., 2001)

Question    Demographics at start    Demographics at end
Email       9.5%                     20.5%
Gender      2.1%                     7.7%
Age         2.1%                     5.9%
Nation      4.9%                     13.0%

Study length

In a study examining dropout among 1,963 participants completing one of six online research surveys, Hoerger (2010) found that 10% of participants dropped out almost immediately, with a further linear increase of 2% dropout per 100 survey items presented. However, these results may generalize only to studies with similar designs and content.
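Read as a rough planning heuristic, Hoerger's figures describe a simple linear model of expected dropout as a function of survey length. A minimal sketch (the function is my own framing of those numbers, and Hoerger's caveat about generalizability applies):

    def predicted_dropout(num_items):
        # Hoerger (2010): ~10% immediate dropout, plus ~2% per 100 survey items.
        # A heuristic only; it may not extrapolate to other designs or content.
        return 0.10 + 0.02 * (num_items / 100)

    print(predicted_dropout(200))  # 0.14 -> expect roughly 14% dropout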

Although these results are widely cited and used to predict the sample sizes needed for adequate statistical power, there appear to be no other relevant scientific studies of dropout rate in the literature. Thus, Hoerger's (2010) warning about generalizability still applies.
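If you do use such an estimate for planning, the recruitment adjustment is straightforward: divide the number of completers your power analysis calls for by the expected completion rate. A minimal sketch with illustrative numbers:

    import math

    def required_recruits(target_completers, expected_dropout):
        # Inflate a power-analysis sample size to offset anticipated dropout.
        return math.ceil(target_completers / (1 - expected_dropout))

    # Hypothetical: power analysis calls for 120 completers; ~14% dropout expected.
    print(required_recruits(120, 0.14))  # 140 participants to recruit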


Conclusion

The higher rate of dropout in online studies compared to lab-based studies can be viewed as a proxy measurement for involuntary participation in lab-based studies caused by situational demand.

A number of countermeasures can be employed to reduce dropout rates in online studies, including:

  • Offering feedback on performance and research findings.
  • Offering incentives in the form of financial reward and course credit.
  • Making experiments more enjoyable.
  • Asking for personal information at the start of a study, rather than at the end.
  • Restricting the length of the study.

There are trade-offs with each of these methods, so careful consideration must be given to how critical dropout rate is for the study in question and which countermeasures to employ. Much further research is still needed on the prevalence of participant dropout, the rates at which it occurs in online settings, and its effects on generalizability and statistical power.


References

  1. Chandler, J., & Paolacci, G. (2017). Lie for a dime: When most prescreening responses are honest but most study participants are impostors. Social Psychological and Personality Science, 8(5), 500–508. doi: 10.1177/1948550617698203
  2. Clifford, S., & Jerit, J. (2015). Do attempts to improve respondent attention increase social desirability bias? Public Opinion Quarterly, 79(3), 790–802. doi: 10.1093/poq/nfv027
  3. Crump, M., McDonnell, J., & Gureckis, T. (2013). Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8(3), e57410. doi: 10.1371/journal.pone.0057410
  4. Dandurand, F., Shultz, T., & Onishi, K. (2008). Comparing online and lab methods in a problem-solving experiment. Behavior Research Methods, 40(2), 428–434. doi: 10.3758/brm.40.2.428
  5. Frick, A., Bächtiger, M., & Reips, U. (2001). Financial incentives, personal information and drop-out in online studies. In U.-D. Reips & M. Bosnjak (Eds.), Dimensions of internet science (pp. 209–219). Lengerich, Germany: Pabst.
  6. Göritz, A. (2006). Incentives in Web studies: Methodological issues and a review. International Journal of Internet Science, 1, 58–70.
  7. Hoerger, M. (2010). Participant dropout as a function of survey length in Internet-mediated university studies: Implications for study design and voluntary participation in psychological research. Cyberpsychology, Behavior, and Social Networking, 13(6), 697–700. doi: 10.1089/cyber.2009.0445
  8. Konstan, J., Simon Rosser, B., Ross, M., Stanton, J., & Edwards, W. (2005). The story of subject naught: A cautionary but optimistic tale of Internet survey research. Journal of Computer-Mediated Communication, 10. doi: 10.1111/j.1083-6101.2005.tb00248.x
  9. Michalak, E., & Szabo, A. (1998). Guidelines for Internet research. European Psychologist, 3(1), 70–75. doi: 10.1027/1016-9040.3.1.70

Ben Howell
Founder, Psychstudio