BigSurv18 program







The Bigger the Better? Exploring Opportunities and Challenges of Using Big Data for Rapid Ethnography

Chair: Dr Frances Barlas (GfK Custom Research)
Time: Saturday 27th October, 09:00 - 10:30
Room: 40.012

A Sample Survey on the Current Level of Awareness Regarding Big Data Among Academics and Practitioners of Statistics in Pakistan

Professor Saleha Naghmi Habibullah (Kinnaird College For Women, Lahore, Pakistan) - Presenting Author

Download presentation


In developing countries such as Pakistan, there is a fairly strong tradition of theoretical development as well as practical application of survey sampling. However, in these countries, a large number of academics and practitioners of Statistics are unfamiliar with the true meaning of terms such as Big Data, exabyte, petabyte, brontobyte, artificial intelligence, machine learning, data mining, data warehousing, distributed processing, grid computing, cloud computing and the like. In this paper, we report the results of a survey carried out to ascertain the current level of awareness regarding Big Data among academics and practitioners of Statistics in Pakistan. Respondents to a questionnaire formulated for this purpose include lecturers, assistant professors, associate professors and professors of Statistics working in various universities and colleges of Pakistan, as well as statistical officers working at the Pakistan Bureau of Statistics, the provincial bureaus of Statistics and/or other data-collecting organizations of the country. Results of the survey suggest a need for multi-faceted efforts aimed at creating awareness of Big Data and its related technologies, challenges and future prospects among members of the statistical community of Pakistan.


Run Silent, Run Deep: Passive Online Monitoring and Survey Data Fusion

Dr Frances Barlas (GfK Custom Research) - Presenting Author
Dr Mansour Fahimi (GfK Custom Research)
Mr Randall Thomas (GfK Custom Research)

Gathering massive amounts of data on individuals' online activity has become possible through software or apps that participants can install on their internet-enabled devices. GfK has developed such software for passive monitoring and has introduced it to a subset of our probability-based KnowledgePanel. This software enables us to track websites visited, ads viewed, topic searches on search engines, and overall online activity by website. Many companies have become quite interested in these data to improve their understanding of the factors affecting their customers' behaviors and how those behaviors change over time. Over the past three years, we have developed a sub-panel of our probability-based KnowledgePanel whose members agreed to passive monitoring of their online activity. We will review the processes we developed for our KnowledgePanel Digital and the factors affecting its success, including the communication and incentive protocols and the recruitment and panel maintenance practices that we found to enhance data quality. We then look at differences between the types of people willing to be monitored and those who declined to participate. As we had found in panel recruitment generally, recruitment success was often affected by demographics, with joiners tending to be somewhat older and white; those who were more technologically oriented were also more likely to join and participate as panel members. Given the differences between those who volunteered to be monitored and the larger population, we will summarize the sampling and weighting methods we have developed to make these data more representative. Finally, we focus on summarizing the factors we found to affect panel retention and attrition.
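The reweighting step alluded to above is commonly implemented as raking (iterative proportional fitting), which adjusts the opt-in sub-panel's weights until its weighted demographic shares match known population margins. The sketch below is a minimal, hypothetical illustration of that general idea; the variables, margins and tolerance are invented and do not describe GfK's actual KnowledgePanel Digital weighting procedure.

```python
import pandas as pd

def rake(df, margins, weight_col="weight", max_iter=50, tol=1e-6):
    """Iteratively adjust df[weight_col] so weighted shares match each target margin."""
    w = df[weight_col].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for var, targets in margins.items():
            total_w = w.sum()
            level_totals = w.groupby(df[var]).sum()
            for level, target_share in targets.items():
                current = level_totals.get(level, 0.0)
                if current > 0:
                    factor = (target_share * total_w) / current
                    w[df[var] == level] *= factor
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:   # stop once all adjustment factors are near 1
            break
    return w

# Hypothetical sub-panel flagged by age group and race, starting with equal weights.
panel = pd.DataFrame({
    "age_group": ["18-34", "35-54", "55+", "55+", "35-54"],
    "race":      ["white", "white", "white", "nonwhite", "nonwhite"],
    "weight":    [1.0] * 5,
})
# Illustrative population margins (shares sum to 1 within each variable).
population_margins = {
    "age_group": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    "race":      {"white": 0.60, "nonwhite": 0.40},
}
panel["weight"] = rake(panel, population_margins)
print(panel)
```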


Marketing Research in the Digital Era: A Comparison Between Adaptive Conjoint Analysis Methods

Mrs Catarina Reis da Fonseca (University of Porto - Faculty of Economics)
Dr Manuela Maia (Católica Porto Business School) - Presenting Author
Dr Pedro Campos (University of Porto - Faculty of Economics)

Marketing research methods are evolving fast, and the literature in this area is still dispersed. This work addresses that gap by systematizing the state of the art in digital research tools, describing the existing methods as well as their advantages and limitations, which could be useful for both academics and professionals in this area. The present research introduces several digital research methods, such as marketing online communities (MROCs), online focus groups, online chat, research games and web-based surveys. This last method is widely used today, but in an era when the quantity of information that individuals receive through several devices is increasingly viewed as a burden, the difficulty of keeping respondents engaged in studies is already recognized as a problem. Time is considered precious, and the need to design and implement effective surveys is increasing. In this context, we focused this work on a specific survey-based multivariate statistical technique that has already proven to be an important tool for marketers: Conjoint Analysis. The main objective of this method is to estimate the relative importance that consumers give to product attributes and the utility they associate with the different levels of each attribute. More specifically, this work explores the adaptive methods within Conjoint Analysis, which require computer administration. By comparing Adaptive Conjoint Analysis (ACA) and Adaptive Choice-Based Conjoint Analysis (ACBC) through two surveys that consider the same product attributes and were tested on the same sample, we hope to give marketing managers a better understanding of this tool, so that it can be considered more often as a potential research method in future market studies. Our conclusions show that (1) both methods produce the same estimated utilities when considering a small number of attributes, (2) the share of attribute preferences is similar in both cases, with ACBC appearing to be more sensitive, detecting even small shares of preference for some attributes, and (3) response time is practically the same for both techniques.
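As background on the technique the paper builds on: in a traditional (non-adaptive) ratings-based conjoint task, respondents rate product profiles, the attribute levels are dummy-coded, and a linear model recovers a part-worth utility for each level; relative attribute importance then follows from the range of utilities within each attribute. The Python sketch below illustrates that general idea with invented attributes, levels and ratings; it is not the ACA or ACBC estimation compared in the paper, which rely on adaptive, interactive designs.

```python
import numpy as np
import pandas as pd

# Hypothetical rated profiles of a product with two attributes (brand, price).
profiles = pd.DataFrame({
    "brand":  ["A", "A", "B", "B", "A", "B"],
    "price":  ["low", "high", "low", "high", "high", "low"],
    "rating": [9, 6, 7, 3, 5, 8],
})

# Dummy-code attribute levels; drop_first makes the omitted level of each
# attribute the baseline (here brand A and price "high").
X = pd.get_dummies(profiles[["brand", "price"]], drop_first=True).astype(float)
X.insert(0, "intercept", 1.0)
y = profiles["rating"].to_numpy(dtype=float)

# Ordinary least squares: each coefficient is a part-worth utility relative to
# the baseline level of its attribute.
coefs, *_ = np.linalg.lstsq(X.to_numpy(), y, rcond=None)
part_worths = dict(zip(X.columns, coefs))
print(part_worths)  # {'intercept': ..., 'brand_B': ..., 'price_low': ...}

# Relative attribute importance: the utility range within each attribute,
# normalized across attributes.
ranges = {"brand": abs(part_worths["brand_B"]), "price": abs(part_worths["price_low"])}
importance = {attr: rng / sum(ranges.values()) for attr, rng in ranges.items()}
print(importance)
```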