The Effect of Incentives on Internet Surveys: Response Rate Changes After the Introduction of Incentives
 Title & Authors
Kennedy, John M.; Ouimet, Judith A.
 Abstract
Incentives are often included in survey designs because they are known to improve response rates, at least moderately. This paper describes changes in response rates after incentives were introduced into a longitudinal survey. The National Survey of Student Engagement was conducted annually at Indiana University Bloomington from 2000 through 2012. In 2010, incentives were introduced in an attempt to reverse declining response rates. The incentives performed as expected, raising the AAPOR Response Rate 3 from 24% in 2009 to 36% in 2010. From 2010 through 2012, different types of incentives were tried, but response rates did not change substantially. These results can help survey practitioners decide the number and types of incentives that might be used effectively to increase response rates.
 Keywords
Internet survey; Incentive; Response rate
 Language
English
 References
1. Bosnjak, M., & Tuten, T. L. (2003). Prepaid and promised incentives in web surveys: An experiment. Social Science Computer Review, 21(2), 208-217. doi:10.1177/0894439303021002006

2. Deutskens, E., Ruyter, K. D., & Wetzels, M. (2004). Response rate and response quality of Internet-based surveys: An experimental study. Marketing Letters, 15(1), 21-36.

3. Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken: John Wiley and Sons.

4. Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167-189. doi:10.1093/poq/nfn011

5. Groves, R. M., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly, 64(3), 299-308. Retrieved from http://poq.oxfordjournals.org/content/64/3/299

6. Heerwegh, D. (2006). An investigation of the effect of lotteries on web survey response rates. Field Methods, 18(2), 205-220. doi:10.1177/1525822X05285781

7. Merkle, D. M., & Edelman, M. (2009). An experiment on improving response rates and its unintended impact on survey error. Survey Practice. Retrieved from http://surveypractice.wordpress.com/2009/03/24/improving-response-rates/

8. Martin, G. L., & Loes, V. N. (2010). What incentives can teach us about missing data in longitudinal assessment. New Directions for Institutional Research. doi:10.1002/ir.369

9. Millar, M. M., & Dillman, D. A. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269. doi:10.1093/poq/nfr003

10. Sánchez-Fernández, J., Muñoz-Leiva, F., Montoro-Ríos, F. J., & Ibáñez-Zapata, J. A. (2010). An analysis of the effect of pre-incentives and post-incentives based on draws on response to web surveys. Quality & Quantity, 44(2), 357-373.

11. Sauermann, H., & Roach, M. (2013). Increasing web survey response rates in innovation research: An experimental study of static and dynamic contact design features. Research Policy, 42(1), 273-286.

12. Singer, E., & Ye, C. (2013). The use and effects of incentives in surveys. The Annals of the American Academy of Political and Social Science, 645(1), 112-141. doi:10.1177/0002716212458082