BigSurv18 program


Smartphone Sensor Measurement and Other Tasks in Mobile Web Surveys II

Chair: Professor Florian Keusch (University of Mannheim)
Time: Saturday 27th October, 14:00 - 15:30
Room: 40.002

Smartphones allow researchers to collect data through sensors such as GPS and accelerometers to study movement, and to passively collect data such as browsing history and smartphone and app usage in addition to self-reports. Passive mobile data collection can potentially decrease measurement error and reduce respondent burden. However, respondents have to be willing to provide access to sensor data or to perform additional tasks (e.g., download apps, take pictures). If willing respondents differ from unwilling respondents, results might be biased. This session brings together empirical evidence on the state-of-the-art use of sensor measurement and other additional tasks on smartphones. It combines results from (large-scale) studies with diverse sensors and tasks, drawn from multiple countries and research settings. The presentations discuss current practice in collecting these new types of data, focusing on respondents’ willingness to allow sensor measurement and to perform additional tasks, and on the implications of that willingness for nonparticipation bias.

Framing Consent Questions in Mobile Surveys: Experiments on Question Wording

Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Bernd Weiss (GESIS - Leibniz Institute for the Social Sciences)
Professor Florian Keusch (University of Mannheim)
Mr Christoph Beuthner (GESIS - Leibniz Institute for the Social Sciences)
Dr Jette Schroeder (GESIS - Leibniz Institute for the Social Sciences)

Mobile surveys are on the rise in many countries, partly because of their ability to collect complementary data from built-in sensors. However, to collect sensor data, respondents have to consent to sharing these data. Our study implements split-ballot experiments in which we test different ways of framing the consent questions by varying the question wording. Among other variations, we explore whether respondents are more willing to share sensor data if doing so reduces their overall response burden. Our study also goes beyond many previous studies on consent by asking respondents about their actual willingness to share sensor data rather than their hypothetical willingness. The data are collected in a German online access panel (N = 3,000). Half of the respondents were randomly selected to answer the questionnaire on their smartphone. Our design also allows us to compare respondents’ sensor data consent to other consent questions and to respondents’ views on privacy issues.


What Do Researchers Have to Invest to Collect Smartphone Data?

Final candidate for the monograph

Dr Georg-Christoph Haas (University of Mannheim, Institute for Employment Research) - Presenting Author
Professor Florian Keusch (University of Mannheim)
Professor Frauke Kreuter (University of Mannheim, Institute for Employment Research, University of Maryland)
Dr Mark Trappmann (Institute for Employment Research, University of Bamberg)
Dr Sebastian Bähr (Institute for Employment Research)

Smartphone sensor data enable researchers to tackle research questions that cannot be answered with survey data alone. However, smartphone data may include very sensitive information, such as GPS location and app usage data, which many people may perceive as too private to share with researchers. Participants may therefore place a higher value on sensor data than on survey data, and it may be harder to recruit participants for an app study than for a survey. In surveys, monetary incentives are known to increase response rates, but we do not know whether incentives work the same way in studies involving research apps. Our research contributes to answering the question of how high incentives should be to increase participation rates cost-effectively. Our results will help researchers plan incentives for app projects.

We developed the IAB-SMART app, which passively collects smartphone data and launches surveys during a six-month field period. Upon installing the app, participants had to decide which of its five data collection functions they wanted to activate: (a) network quality and location information, (b) interaction history, (c) characteristics of the social network, (d) activity data, and (e) smartphone usage. We provided incentives for three tasks: (1) installing the app, (2) activating functions for 30 consecutive days, and (3) answering survey questions. We conducted a 2x2 experiment on the installation and function incentives. One random group of participants was promised 10 Euro for installing the app; the other group was promised 20 Euro. Independent of the installation incentive, one random group was promised one Euro for each function activated for 30 days, and the other group was promised one Euro for each function activated for 30 days plus five additional Euro if all five functions were activated for 30 days. Thus, the first group would receive 5 Euro and the second group 10 Euro per month for activating all five functions. Additionally, participants received up to 20 Euro for answering survey questions in the app over the field period. The overall possible incentive therefore varied between 60 and 100 Euro, depending on the assigned group. Respondents could cash out their incentives directly in the app as amazon.de vouchers. We sent invitation letters to 4,300 respondents of a yearly panel survey who own an Android smartphone.
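To make the payout arithmetic concrete, the following minimal TypeScript sketch reproduces the 60 to 100 Euro range from the amounts above. The interface, constants, and group values are our own illustrative shorthand derived from the abstract, not part of the study's materials.

    // Sketch of the 2x2 incentive scheme (amounts in Euro); the interface
    // and field names are illustrative shorthand, not the study's terms.
    interface IncentiveGroup {
      installation: number;              // one-off payment for installing the app
      perFunctionPerMonth: number;       // per function activated for 30 days
      allFunctionsBonusPerMonth: number; // bonus if all five functions stay active
    }

    const FIELD_MONTHS = 6;   // six-month field period
    const NUM_FUNCTIONS = 5;  // functions (a) through (e)
    const SURVEY_MAX = 20;    // maximum payout for in-app survey questions

    // Maximum possible payout if all functions stay active every month
    // and all survey questions are answered.
    function maxTotalIncentive(g: IncentiveGroup): number {
      const monthly =
        NUM_FUNCTIONS * g.perFunctionPerMonth + g.allFunctionsBonusPerMonth;
      return g.installation + FIELD_MONTHS * monthly + SURVEY_MAX;
    }

    // The two extreme cells of the 2x2 design:
    console.log(maxTotalIncentive({
      installation: 10, perFunctionPerMonth: 1, allFunctionsBonusPerMonth: 0,
    })); // 10 + 6 * 5 + 20 = 60 Euro
    console.log(maxTotalIncentive({
      installation: 20, perFunctionPerMonth: 1, allFunctionsBonusPerMonth: 5,
    })); // 20 + 6 * 10 + 20 = 100 Euro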
At the time of submission, participant recruitment is still ongoing. However, our research design enables us to answer the following questions: Do higher installation and overall incentives increase the take-up rate? Do higher installation and overall incentives shorten the period between study invitation and app installation? Do higher incentives attract utility maximizers, who download the app and remove it after cashing out the incentive? Do participants activate more functions when offered an additional incentive? Do we see different (de)activation behavior over the field period between our experimental groups?


Willingness to Participate in a Metered Online Panel

Dr Melanie Revilla (RECSM-Universitat Pompeu Fabra, Spain) - Presenting Author
Professor Mick P. Couper (University of Michigan)
Mr Ezequiel Paura (Netquest)
Mr Carlos Ochoa (Netquest)


With the development of new technologies, there is a growing interest in combining different sources of data to get a more complete and/or accurate picture of behavior. In particular, the use of passive data is attractive since it does not require any effort from the participants (except, sometimes, to set things up).

Passive data collection can take different forms, such as tracking credit card use or GPS information from mobile devices. In this presentation, we will focus on one particular form: passive data from a tracking application (also called a ‘‘meter’’) installed on participants’ devices (PCs, tablets, and/or smartphones) to register their online behavior. The meter makes it possible to obtain all the URLs of the web pages participants visit, as well as the time and length of each visit, ad exposure, and page content. For mobile devices, it also provides information about app use. This enables mass collection of data on users’ online activity. Moreover, once a panelist has installed the meter, his/her data can be used for many different projects/clients without any further effort from the panelist, and even without his/her knowledge.
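To illustrate the kind of data such a meter produces, here is a hypothetical sketch of a single log record, based only on the fields named above (URL, visit time and length, ad exposure, and app use). The TypeScript interface and field names are illustrative assumptions, not Netquest's actual schema.

    // Hypothetical shape of one metered browsing event. The interface and
    // field names are illustrative only, not Netquest's actual data model.
    interface MeterRecord {
      panelistId: string;                     // pseudonymous panelist ID
      device: "pc" | "tablet" | "smartphone"; // where the meter is installed
      url?: string;                           // visited web page (browsing)
      app?: string;                           // app used (mobile devices only)
      visitStart: Date;                       // time the visit began
      visitSeconds: number;                   // length of the visit
      adsSeen: string[];                      // ad exposure during the visit
    }

    const example: MeterRecord = {
      panelistId: "p-0001",
      device: "smartphone",
      url: "https://example.org/news",
      visitStart: new Date(),
      visitSeconds: 42,
      adsSeen: [],
    };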
However, mainly because of privacy concerns and a lack of trust, we expect that obtaining people’s cooperation in installing such a meter can be difficult and can lead to substantial selection bias. Nevertheless, there is little research confirming this. Therefore, the goal of our research is to investigate to what extent this is indeed the case.
In order to do so, we will consider data on both the stated and the observed willingness of people to install a meter that passively tracks their browsing behavior, drawing on data from the online fieldwork company Netquest. Since Netquest has created metered panels in different countries from 2014 onward, we will be able to study questions like the following across time and countries: Of the panelists invited, how many agreed to install the meter, and how many actually completed the installation? On which devices, and for how long, did they continue sending information? In addition, we will compare the panelists who installed the meter with those who were invited but did not install it, in terms of both socio-demographic characteristics and panel loyalty.


The Impact of Motion Instructions on the Acceleration of Smartphones and Completion Times in Web Surveys

Mr Jan Karem Höhne (University of Göttingen) - Presenting Author
Dr Melanie Revilla (RECSM-Universitat Pompeu Fabra, Spain)
Mr Stephan Schlosser (University of Göttingen)


The tremendous increase in smartphone respondents in web surveys over the last years has not only raised completely new research questions but also opened completely new ways to study completion and response behavior. For instance, smartphones contain a large number of so-called “wearable sensors”, such as accelerometers and gyroscopes, that can passively register respondents’ physical states, such as movement and speed. Although collecting sensor data via smartphones is a common and established method for researching human actions and behaviors in many scientific fields, in web survey research it is still in its infancy.

The goal of this study is to explore the usability and usefulness of measuring sensor data (i.e., acceleration) and to gain new insights into completion conditions for future mobile web surveys. To this end, we investigate how motion instructions in mobile web surveys, such as walking around, affect the total acceleration of smartphones and respondents’ completion times. For this purpose, we use the JavaScript-based tool “SurveyMotion (SM)” and collect client-side response times. We incorporate an experiment in a web survey conducted by the opt-in access panel Netquest and randomly assign respondents to one of two groups. The first group is instructed to stand at a fixed point while holding the smartphone and completing the experimental questions (standing condition). The second group is instructed to walk around while holding the smartphone and completing the experimental questions (walking condition).
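For readers unfamiliar with browser-based acceleration measurement, the following TypeScript sketch shows how total acceleration can be read via the standard devicemotion event and summarized per question. It is a minimal illustration of the general technique under our own assumptions, not SurveyMotion's actual implementation.

    // Minimal sketch of measuring total smartphone acceleration in the
    // browser via the standard devicemotion event. This is an illustration
    // of the general technique, NOT SurveyMotion's actual code.
    const samples: number[] = [];

    window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
      const a = e.accelerationIncludingGravity;
      if (a && a.x !== null && a.y !== null && a.z !== null) {
        // Total acceleration as the Euclidean norm of the three axes (m/s^2);
        // walking should yield systematically higher values than standing.
        samples.push(Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z));
      }
    });

    // A per-question summary such as the mean can then be compared
    // between the standing and walking conditions.
    function meanTotalAcceleration(): number {
      return samples.length
        ? samples.reduce((sum, v) => sum + v, 0) / samples.length
        : 0;
    }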

We expect to observe higher total acceleration for respondents in the walking condition than for respondents in the standing condition, which would indicate that SM measures total acceleration properly. Furthermore, we expect longer completion times in the walking condition than in the standing condition: performing more than one task simultaneously (i.e., completing a survey while walking around) requires a continual reallocation of mental and physical resources between the activities, which might increase the duration of completion.