Innovations in survey management: incorporating experimental designs in the 2014 Survey of Income and Program Participation
Jul 26, 13:45
The 2014 Survey of Income and Program Participation (SIPP) Panel debuted extensive changes in the design of the survey. These changes, coupled with increasingly restrictive budgets and more difficult interviewing conditions, make this an opportune time to focus on innovations in survey management that leverage developments in paradata and adaptive-design methodology.
We conducted two overlapping experiments during the 2014 SIPP Panel. First, we used adaptive-design models to test how field representatives' (FRs') workloads were distributed at the beginning of the four-month interviewing period. In the 2014 SIPP, FRs received all of their cases at once, rather than receiving a quarter of their cases each month to interview in a staggered fashion. Managing work over a single four-month period allowed adaptive and other data-driven procedures to play a more critical role.
Using Contact History Instrument (CHI) information from Waves 1 and 2 (2014 and 2015), we developed a model for prioritizing work during Wave 3 (2016) data collection. In addition to CHI input, we used respondent characteristics, current-wave interview appointments, and case geography to refine models prioritizing FRs' workloads. The models balanced priorities to redirect effort toward cases that could improve sample balance, relative to Wave 1, on key characteristics. Based on the adaptive workload models developed during the Wave 3 experiments, we implemented refined adaptive workload experiments for Wave 4 (2017) data collection.
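The sample-balancing logic described above can be sketched in miniature. This is a hypothetical illustration, not the actual SIPP prioritization model: the subgroup labels, benchmark shares, and case IDs are invented, and the real models combine many more inputs (CHI data, appointments, geography).

```python
# Hypothetical sketch: rank open cases by how underrepresented their
# subgroup is among current respondents relative to a Wave 1 benchmark.
wave1_benchmark = {"renter": 0.35, "owner": 0.65}   # assumed Wave 1 shares
current_respondents = {"renter": 20, "owner": 80}   # interviews completed so far

total = sum(current_respondents.values())
current_share = {g: n / total for g, n in current_respondents.items()}

# Shortfall: how far each group lags its benchmark (positive = underrepresented).
shortfall = {g: wave1_benchmark[g] - current_share[g] for g in wave1_benchmark}

def priority(case_group):
    """Higher score -> work this case sooner to improve sample balance."""
    return max(shortfall[case_group], 0.0)

open_cases = [("case_101", "renter"), ("case_102", "owner")]
ranked = sorted(open_cases, key=lambda c: priority(c[1]), reverse=True)
```

Here renters are 20% of respondents against a 35% benchmark, so renter cases rank first; groups already at or above their benchmark score zero and receive no extra push.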
Second, an incentive experiment was included in the 2014 SIPP Panel. Incentives are a widely used tool for reducing nonresponse bias, improving respondent cooperation, boosting response rates, and improving data quality. However, they are increasingly under scrutiny as budgets constrict and response rates continue to decline. In the 2014 SIPP panel experiment, we evaluated conditional post-paid incentives and the development of model-based incentives. Based on Wave 1 characteristics and responses in Waves 1 and 2, we used a logistic regression model to predict a household's likelihood of responding in Wave 2 with and without incentives. The experimental design in Wave 2 facilitated evaluation of different model specifications, leading to model-based incentive assignment tests that focused on households where incentives were a critical factor in retention for Waves 3 and 4. Implementing, and eventually integrating, adaptive design and model-based incentives in SIPP allows limited resources to be focused, through data-driven decisions, where they are most impactful.
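The model-based incentive assignment described above can be sketched as follows. This is a hypothetical illustration under assumed inputs: the predictor names and coefficient values are invented for the example, not taken from the fitted SIPP model. The idea is to score each household's predicted response probability with and without an incentive, and offer incentives only where the predicted uplift is large enough to be decisive.

```python
import math

# Hypothetical logistic-regression coefficients (illustrative only).
coef = {
    "intercept": -0.2,
    "wave1_poverty": 0.4,    # Wave 1 household characteristic
    "prior_refusal": -1.1,   # refused in an earlier wave
    "prior_waves": 1.5,      # number of prior waves completed
    "incentive": 0.9,        # effect of offering an incentive
}

def p_response(hh, incentive):
    """Predicted probability of responding, with or without an incentive."""
    z = (coef["intercept"]
         + coef["wave1_poverty"] * hh["wave1_poverty"]
         + coef["prior_refusal"] * hh["prior_refusal"]
         + coef["prior_waves"] * hh["prior_waves"]
         + coef["incentive"] * (1 if incentive else 0))
    return 1 / (1 + math.exp(-z))

def assign_incentive(hh, min_uplift=0.10):
    """Offer an incentive only where it is predicted to be decisive."""
    uplift = p_response(hh, True) - p_response(hh, False)
    return uplift >= min_uplift

at_risk = {"wave1_poverty": 1, "prior_refusal": 1, "prior_waves": 0}
reliable = {"wave1_poverty": 0, "prior_refusal": 0, "prior_waves": 2}
```

Under these assumed coefficients, the at-risk household's predicted probability rises from roughly 0.29 to 0.50 with an incentive, so it is targeted; the reliable household is already near certain to respond, so the incentive adds little and is withheld, concentrating spending where it matters for retention.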