Comparing Health Survey Data Cost and Quality Between Amazon’s Mechanical Turk and Ipsos’ KnowledgePanel: Observational Study

Authors: Patricia M Herman, Mary E Slaughter, Nabeel Qureshi, Tarek Azzam, David Cella, Ian D Coulter, Graham DiGuiseppi, Maria Orlando Edelen, Arie Kapteyn, Anthony Rodriguez, Max Rubinstein, Ron D Hays
Language: English
Publication year: 2024
Subject:
Source: Journal of Medical Internet Research, Vol 26, p e63032 (2024)
Document type: article
ISSN: 1438-8871
DOI: 10.2196/63032
Description:
Background: Researchers have many options for web-based survey data collection, ranging from access to curated probability-based panels, where individuals are selectively invited to join based on their membership in a representative population, to convenience panels, which are open for anyone to join. The mix of respondents available also varies greatly in both representation of a population of interest and motivation to provide thoughtful and accurate responses. Even with the additional dataset-building labor required of the researcher, convenience panels are much less expensive than probability-based panels. However, it is important to understand what may be given up in data quality for those cost savings.
Objective: This study examined the relative costs and data quality of fielding equivalent surveys on Amazon’s Mechanical Turk (MTurk), a convenience panel, and KnowledgePanel, a nationally representative probability-based panel.
Methods: We administered the same survey measures to MTurk (in 2021) and KnowledgePanel (in 2022) members. We applied several recommended quality assurance steps to enhance the data quality achieved using MTurk; Ipsos, the owner of KnowledgePanel, followed its usual (industry standard) protocols. The survey was designed to support psychometric analyses and included >60 items from the Patient-Reported Outcomes Measurement Information System (PROMIS), demographics, and a list of health conditions. We used 2 fake conditions (“syndomitis” and “chekalism”) to identify respondents who were more likely to be answering honestly. We examined the quality of each platform’s data using several recommended metrics (eg, consistency, reliability, representativeness, missing data, and correlations), both including and excluding respondents who had endorsed a fake condition, and examined the impact of weighting on representativeness.
Results: We found that prescreening the MTurk sample (removing those who endorsed a fake health condition) improved data quality, but KnowledgePanel data quality generally remained superior. While MTurk’s unweighted point estimates for demographics exhibited the usual mismatch with national averages (younger, better educated, and lower income), weighted MTurk data matched national estimates. KnowledgePanel’s point estimates better matched national benchmarks even before poststratification weighting. Correlations between PROMIS measures and age and income were similar in MTurk and KnowledgePanel; the mean absolute value of the difference between each platform’s 137 correlations was 0.06, and 92% were
Database: Directory of Open Access Journals
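
As an illustrative aside (not part of the source record or the article), the following minimal Python sketch shows how two of the computations described in the abstract might be carried out: dropping respondents who endorse a fake health condition, and summarizing agreement between the two platforms' correlation estimates as a mean absolute difference. All column names, variable names, and toy values here are assumptions for illustration, not the authors' code or data.

import pandas as pd

def prescreen(df: pd.DataFrame, fake_cols=("syndomitis", "chekalism")) -> pd.DataFrame:
    """Drop respondents who endorsed either fake health condition (columns coded 0/1)."""
    endorsed_fake = df[list(fake_cols)].any(axis=1)
    return df.loc[~endorsed_fake].copy()

def mean_abs_corr_difference(corr_a: pd.Series, corr_b: pd.Series) -> float:
    """Mean absolute difference between two aligned sets of correlation estimates."""
    aligned = pd.concat([corr_a, corr_b], axis=1, join="inner")
    return float((aligned.iloc[:, 0] - aligned.iloc[:, 1]).abs().mean())

# Toy responses: two fake-condition flags plus a PROMIS-like score (hypothetical values).
responses = pd.DataFrame({
    "syndomitis": [0, 1, 0, 0],
    "chekalism": [0, 0, 0, 1],
    "promis_fatigue": [52.1, 60.3, 48.7, 55.0],
})
print(prescreen(responses))  # keeps only the 2 respondents who endorsed neither fake condition

# Toy correlation estimates (eg, a PROMIS score vs age or income) from each platform.
corr_mturk = pd.Series({"fatigue_vs_age": -0.12, "fatigue_vs_income": -0.20})
corr_kp = pd.Series({"fatigue_vs_age": -0.08, "fatigue_vs_income": -0.25})
print(mean_abs_corr_difference(corr_mturk, corr_kp))  # ~0.045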