Experimental Comparison of Two Data Collection Protocols for Previous Wave Nonrespondents
Jul 27, 11:00
Longitudinal designs offer unique opportunities for using prior data on sample members to tailor and optimize the design of subsequent waves. Targeting specific sample members with survey design features that are informed by their previous behaviors has the potential not only to increase participation, but also to minimize the threat of nonresponse bias and, ultimately, to improve overall data quality.
We use the 2016/17 Baccalaureate and Beyond Longitudinal Study (B&B:16/17) field test to assess a design aimed at reducing nonresponse error by targeting nonrespondents to the previous data collection. B&B:16/17 is the first follow-up of sample members from the 2015-16 National Postsecondary Student Aid Study (NPSAS:16). The field test sample consists of NPSAS:16 field test sample members who received baccalaureate degrees during the 2014-15 academic year, 40 percent of whom are NPSAS:16 nonrespondents. Motivated by the idea of double sampling for nonresponse to reduce cost (Hansen & Hurwitz, 1946; Deming, 1953), we apply a more expensive and aggressive protocol to a subsample of these nonrespondents. Specifically, we offer sample members a $10 prepaid and a $20 promised incentive, begin CATI interviewing in the early weeks of data collection, and offer an abbreviated survey after 4 weeks of the 19-week data collection period. In contrast, the remaining base-year nonrespondents receive the default protocol, in which CATI interviewing does not start until after the 5th week of data collection, an abbreviated survey is offered only as a nonresponse conversion strategy, and a $30 incentive is promised upon completion.
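For context, the double-sampling idea cited above can be sketched with the classical two-phase estimator; this is the textbook Hansen & Hurwitz (1946) formulation, not necessarily the exact B&B:16/17 weighting scheme:

```latex
% Phase 1: initial sample of n units; n_1 respond, n_2 = n - n_1 do not.
% Phase 2: a subsample of r = n_2 / k nonrespondents receives the more
% intensive follow-up protocol, yielding subsample mean \bar{y}_{2r}.
\bar{y} \;=\; \frac{n_1\,\bar{y}_1 \;+\; n_2\,\bar{y}_{2r}}{n}
% \bar{y}_1 is the mean among phase-1 respondents. The estimator \bar{y}
% is unbiased for the population mean when the phase-2 follow-up obtains
% responses from all subsampled nonrespondents.
```

The cost saving comes from applying the expensive protocol only to the phase-2 subsample rather than to every nonrespondent.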
Our goal is to evaluate the effect of the aggressive protocol on the nonresponse rate and nonresponse bias. Motivated by the possibility that the most reluctant respondents are also poor reporters (Cannell & Fowler, 1965), we also examine the effect of bringing a different set of sample members into the respondent pool on measurement error.
Pairwise comparison of the aggressive and default protocols for nonrespondents shows a statistically significant difference in response rates (37% for the aggressive protocol vs. 25% for the default). An investigation of relative nonresponse bias in 23 key survey estimates shows slightly lower relative nonresponse bias under the aggressive protocol. Finally, we investigate measurement error by comparing item nonresponse, response distributions, and interview length between respondents in the two protocol conditions. Preliminary results suggest no adverse impact on data quality.
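One common operationalization of relative nonresponse bias for a survey estimate is the following; the abstract does not specify the exact estimator used, so this is an illustrative assumption:

```latex
% Relative nonresponse bias of a respondent-based estimate \bar{y}_r,
% benchmarked against the corresponding full-sample estimate \bar{y}_n:
\mathrm{relbias}(\bar{y}_r) \;=\; \frac{\bar{y}_r - \bar{y}_n}{\bar{y}_n}
```

Averaging the absolute value of this quantity over the 23 key estimates gives a single summary measure for comparing the two protocol conditions.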