BigSurv20 program




A paradata paradise: Exploring the collection, use and analysis of paradata

Moderator: Ana Lucia Cordovar (alcordova@usfq.edu.ec)

Friday 27th November, 10:00 - 11:30 (ET, GMT-5)
7:00 - 8:30 (PT, GMT-8)
16:00 - 17:30 (CET, GMT+1)

Putting paradata and visualization to work for field interviewers

Mr Brad Edwards (Westat) - Presenting Author
Ms Victoria Vignare (Westat)
Ms Susan Genoversa (Westat)

Face-to-face data collection has been a preferred mode for implementing the most complex survey designs and achieving the highest response rates. However, managing production, costs, and quality is challenging in field settings, with a dispersed workforce that is often not connected to the internet. Recent advances in technology and communication have enabled the development of more techniques for controlling field activities, using paradata in ways that marry operations research with survey methods. CARI, supervisor dashboards, smartphones for interviewers, and GPS tracking have been implemented by a number of survey organizations in the past decade. We report on the development of a field interviewer dashboard, extending these techniques to the workforce that has the most direct contact with respondents. Dashboard features include the ability to develop a daily work plan, informed by case location, status, priority or propensity to complete, and lag time since last contact. As the interviewer selects cases to work in a given day, the cases are displayed in a list or on a map. The interviewer can drill down to review case contact history, review future appointments, sort cases by location, and make notes. Visualization includes graphs showing contact times of day and days of week since the start of the assignment. Paradata available to the supervisor (e.g., alerts about data anomalies such as too-short interviews, or out-of-range quality scores from CARI coding) and management metrics (such as average time per completed interview) that the supervisor may review with the interviewer can be displayed on both supervisor and interviewer dashboards to facilitate the conversation. The interviewer can create PDFs from dashboard elements and save them for later access in disconnected mode on a laptop, tablet, or smartphone while in the field. Past attempts to control field interviewer routing decisions and the order of working cases based on paradata models have been plagued by interviewer noncompliance. We believe these efforts have foundered because of communication problems between project managers and field interviewers. Providing the interviewer with the reasons why managers believe a particular case should be worked first (based on propensity modeling using many paradata sources), along with the preferred order for working the remaining cases, is expected to help the interviewer decide whether to accept the direction or choose a different case to work first. We report on formative research with interviewers in the field, prototyping and testing experiences, and a research plan for piloting the approach in field conditions.
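
As a purely illustrative sketch (not Westat's implementation), the kind of daily work plan described above could be produced by ranking open cases on a few of the listed inputs; all column names, values, and the ordering rule below are hypothetical, and the sketch uses R.

    # Purely illustrative sketch (not Westat's dashboard code): ranking an
    # interviewer's open cases into a suggested daily work plan.
    # All column names, values, and the ordering rule are hypothetical.
    cases <- data.frame(
      case_id     = c("A101", "A102", "A103", "A104"),
      appointment = c(FALSE, FALSE, TRUE, FALSE),  # confirmed future appointment
      propensity  = c(0.72, 0.35, 0.58, 0.12),     # modeled propensity to complete
      days_since  = c(1, 6, 3, 10)                 # lag since last contact attempt
    )

    # Simple rule: appointments first, then higher propensity, then longer lag
    plan <- cases[order(-cases$appointment, -cases$propensity, -cases$days_since), ]
    plan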



Increasing survey response rates and decreasing costs by combining numeric and text mining strategies on survey paradata

Professor Sudip Bhattacharjee (University of Connecticut, US Census Bureau) - Presenting Author
Dr Ugochukwu Etudo (University of Connecticut)
Mr Nevada Basdeo (US Census Bureau)

Response rates are dropping and data collection costs are rising in federal surveys. Additionally, field representatives cannot contact a respondent if the response burden exceeds a threshold. As a result, practitioners use response propensity models to mitigate cost and respondent burden while maintaining response rates. We evaluate key determinants of survey completion for the American Community Survey (ACS) using Contact History Information paradata from the years 2017 and 2018. To our knowledge, unstructured field representatives’ (FRs’) notes are omitted from paradata models within the Census Bureau. We believe that incorporating these notes would improve the performance of paradata models. From the notes, we identify themes and terms that are useful for estimating response propensity. In this research, we present the first steps in solving this multi-dimensional optimization problem. We show two findings: (1) combining Contact History Information and FRs’ notes can significantly improve response propensity estimates at the household level, and (2) FRs’ notes can be incorporated into the calculation of burden scores for respondents. Our text mining of the FRs’ notes may also be useful in training field representatives, and it can reveal different patterns of survey refusal by geography and time. Results from our study can be generalized to other surveys that capture both numeric and textual paradata from survey operations.
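
A hedged sketch of the general idea (not Census Bureau code, and not the ACS paradata schema): numeric contact-history paradata and keyword indicators mined from invented FR notes are combined in a household-level response-propensity model. Every variable below is hypothetical.

    # Toy paradata: one row per household; all values invented.
    paradata <- data.frame(
      n_contacts  = c(1, 4, 2, 6, 3, 5),
      any_refusal = c(0, 1, 0, 1, 0, 1),
      notes = c("not home, left card", "hostile, refused at door",
                "asked to come back on weekend", "gated community, no access",
                "spoke briefly, seemed willing", "said survey is too long"),
      responded   = c(1, 0, 1, 0, 1, 1),
      stringsAsFactors = FALSE
    )

    # Crude keyword indicators standing in for themes/terms mined from the notes
    # (a document-term matrix or topic model could be used instead)
    paradata$note_refusal <- as.integer(grepl("refus|hostile", paradata$notes))
    paradata$note_access  <- as.integer(grepl("gated|no access", paradata$notes))

    # Response-propensity model combining numeric and note-derived predictors
    fit <- glm(responded ~ n_contacts + any_refusal + note_refusal + note_access,
               data = paradata, family = binomial())
    round(predict(fit, type = "response"), 2)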



Exploring paradata pathways through web surveys to improve survey design

Dr Renee Ellis (U.S. Census Bureau) - Presenting Author


In this study, we used paradata to analyze common user paths through web surveys. Understanding how users navigate online survey instruments may be useful for many reasons. For example, knowing more about these behaviors may alert us to problems with instrument usability and help identify problematic questions and common behaviors of survey respondents. One of the challenges of this type of analysis is that the web paradata used for analysis are unstructured and often voluminous; they have many of the qualities of big data. To examine how users navigate online survey instruments, we wrangle the paradata into a form that allows us to visualize user paths. From this we categorize common paths and discuss how they might be used to make survey design decisions.
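
As a minimal sketch of this kind of wrangling (assuming screen-level paradata with one row per page view; the respondent IDs, screen names, and column names are all hypothetical), each respondent's page views can be collapsed into an ordered path and common paths tabulated, for example in R:

    # Toy page-view paradata: one row per screen visit, in visit order
    logs <- data.frame(
      resp_id = c(1, 1, 1,  2, 2, 2, 2,  3, 3, 3),
      screen  = c("intro", "q1", "q2",
                  "intro", "q1", "q2", "q1",   # respondent 2 backtracked to q1
                  "intro", "q2", "q1"),        # respondent 3 visited screens out of order
      stringsAsFactors = FALSE
    )

    # Collapse each respondent's visits into an ordered path string
    paths <- aggregate(screen ~ resp_id, data = logs,
                       FUN = function(s) paste(s, collapse = " > "))

    # Tabulate paths; revisits and unusual orderings surface immediately
    sort(table(paths$screen), decreasing = TRUE)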

Detecting difficulty in computer-assisted surveys through mouse movement trajectories: A new model for functional data classification

Dr Amanda Fernández-Fontelo (Chair of Statistics, School of Business and Economics, Humboldt-Universität zu Berlin, Germany) - Presenting Author
Mr Felix Henninger (Mannheim Centre for European Social Research, University of Mannheim, Germany)
Mr Pascal J. Kieslich (Mannheim Centre for European Social Research, University of Mannheim, Germany; Experimental)
Professor Frauke Kreuter (Mannheim Centre for European Social Research, University of Mannheim, Germany; University of Maryland, College Park, Maryland, USA; Institute for Employment Research, Mannheim, Germany)
Professor Sonja Greven (Chair of Statistics, School of Business and Economics, Humboldt-Universität zu Berlin, Germany)


One of the main goals of survey research is to collect robust and reliable data from respondents and, correspondingly, to reduce sources of measurement error. One source of error stems from respondents’ difficulty in understanding and responding to survey questions in the way the researchers intended. Thus, detecting and mitigating these difficulties promises to improve both the user experience and data quality. In the presence of a human interviewer, difficulty can be assessed by identifying and quantifying paralinguistic cues, and by directly addressing these issues with the respondent. These cues are not available if surveys are conducted online via the browser, which has become one of the predominant modes of data collection. However, by collecting additional paradata while respondents answer a questionnaire, web surveys provide researchers and practitioners with a novel data source that may indicate potential difficulties and confusion the respondent experienced. The current contribution focuses on a particular type of paradata, respondents’ mouse cursor movements, and how these rich data may be processed and analyzed to detect instances when respondents experienced difficulty.

To determine the value of mouse-tracking data for predicting participant difficulty, we conducted an online survey assessing participants’ personal and economic background. Throughout the survey, we experimentally manipulated the difficulty of several questions, for example, by using either concise and understandable or complex and verbose language, or by ordering response options in an intuitive or a random order. Using a custom client-side paradata collection framework, we recorded participants’ mouse movements during the survey. From the collected data, we extracted a large set of mouse movement features using the mousetrap R package we developed.
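
A minimal sketch of this kind of feature extraction with the mousetrap R package, using the package's bundled example data; the specific features analyzed in the study are not reproduced here.

    library(mousetrap)

    mt <- mt_example                 # in practice: mt_import_mousetrap(raw_data)
    mt <- mt_time_normalize(mt)      # make trajectories comparable across trials
    mt <- mt_measures(mt)            # per-trial summaries (e.g., MAD, AUC, flips)

    head(mt$measures)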

Using features derived from the cursor movements, we predicted whether respondents answered the easy (i.e., the understandable or intuitively ordered) or difficult (i.e., the complex or randomly ordered) version of a question. To do so, we propose a custom machine-learning model that takes into account the time series of participants’ interactions with the survey page. To build this model, we first adapted a range of common distance metrics to the case of multivariate trajectory data. Then, we used these distances to create base classifiers based on the KNN and kernel-based approaches introduced by Fuchs et al. (2015) and Ferraty and Vieu (2003). Finally, we combined the base classifiers into an ensemble using different techniques (linear combination and stacking methods) and evaluated their predictive accuracy. Going beyond these methods, we propose a personalization method to control for the baseline mouse behavior of the survey participants.
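
A hedged sketch of the basic idea only: a distance-based nearest-neighbour classifier applied to whole (simulated) trajectories. The adapted functional-data distances, ensemble combinations, and personalization described above are not reproduced here.

    set.seed(1)
    n <- 40; len <- 50
    # one simulated trajectory per row: "difficult" trials are noisier
    easy      <- t(replicate(n, cumsum(rnorm(len, mean = 0.5, sd = 0.2))))
    difficult <- t(replicate(n, cumsum(rnorm(len, mean = 0.5, sd = 0.8))))
    X <- rbind(easy, difficult)
    y <- factor(rep(c("easy", "difficult"), each = n))

    # pairwise distances between whole trajectories (a simple stand-in for the
    # adapted multivariate trajectory distances)
    D <- as.matrix(dist(X))

    # leave-one-out 1-NN classification from the precomputed distances
    pred <- sapply(seq_len(nrow(D)), function(i) {
      nn <- which.min(D[i, -i])           # nearest other trajectory
      as.character(y[-i])[nn]
    })
    mean(pred == as.character(y))         # leave-one-out accuracy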

We discuss how the methods of the current project can be applied to other online surveys, and provide an R package that implements the presented classification method. The package can be applied to mouse-tracking data as well as, more generally, to multivariate functional data and trajectories in any number of dimensions.