Assessing scale reliability in citizen science motivational research: lessons learned from two case studies in Uganda
Date: 2024
Authors:
Ashepe, Mercy Gloria
Vranken, Liesbet
Michellier, Caroline
Dewitte, Olivier
Mutyebere, Rodgers
Kabaseke, Clovis
Twongyirwe, Ronald
Kanyiginya, Violet
Kagoro-Rugunda, Grace
Huyse, Tine
Jacobs, Liesbet
Abstract
Citizen science (CS) is gaining global recognition for its potential to democratize and boost scientific research. As such, understanding why people contribute their time, energy, and skills to CS, and why they (dis)continue their involvement, is crucial. While several CS studies draw on existing theoretical frameworks from the psychology and volunteering fields to understand motivations, the adaptation of these frameworks to CS research is still lagging, and applications in the Global South remain limited. Here we investigated the reliability of two commonly applied psychometric tests, the Volunteer Functions Inventory (VFI) and the Theory of Planned Behaviour (TPB), for understanding participant motivations and behaviour in two CS networks in southwest Uganda: one addressing snail-borne diseases and the other focused on natural hazards. Data were collected using a semi-structured questionnaire administered to the CS participants and to a control group of candidate citizen scientists, in both group and individual interview settings. Cronbach's alpha, used as an a priori measure of reliability, indicated moderate to low reliability for the VFI and TPB factors per CS network and interview setting. With evidence of highly skewed distributions, non-unidimensional data, correlated errors, and a lack of tau-equivalence, alpha's underlying assumptions were often violated. More robust measures, McDonald's omega and the greatest lower bound (GLB), generally indicated higher reliability but confirmed the overall patterns, with VFI factors systematically scoring higher and some TPB factors (perceived behavioural control, intention, self-identity, and moral obligation) scoring lower. Metadata analysis revealed that the most problematic items often had weak item-total correlations. We propose that alpha should not be reported blindly, without paying heed to the nature of the test, its assumptions, and the items comprising it. Additionally, we recommend caution when adopting existing theoretical frameworks in CS research, and we propose the development and validation of context-specific psychometric tests tailored to the unique CS landscape, especially in the Global South.
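To make the reliability statistic at the centre of the abstract concrete, the following is a minimal sketch of Cronbach's alpha computed from an item-score matrix. It is not the authors' analysis code; the simulated Likert data and variable names are hypothetical, chosen only to illustrate the formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    Note: this estimate assumes unidimensionality, tau-equivalence, and
    uncorrelated errors -- the assumptions the study found often violated.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: rows = respondents, columns = items,
# all items driven by one latent trait plus noise (for illustration only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=1.0, size=(200, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

When alpha's assumptions fail, as reported here, robust alternatives such as McDonald's omega or the GLB (available in R packages like psych or semTools) are generally preferred.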