The Internet has opened new ways for behavioral researchers to conduct
human subjects experiments. Yet remote, Internet-based experimentation is not
yet part of the standard research toolkit because of concerns about data
validity, the feasibility of recruiting representative participant samples, and
perceived barriers due to inconsistencies between contemporary research
practice and the assumptions underlying the design of existing online
experimentation tools (Law et al., 2017). We are developing and validating tools and methods to enable
novel ways of conducting large-scale empirical work with human subjects. Our
core goals are to enable a much faster theory-to-experiment cycle, to
facilitate access to larger and more diverse participant populations, and to
enable larger scale experimentation (in terms of numbers of conditions and
experiments) than what is feasible with lab-based methods.
Our LabintheWild
platform is one such project designed to overcome these barriers.
Through LabintheWild, we recruit unpaid online volunteers from all over the
world to participate in behavioral studies in exchange for personalized
feedback. Over the past three years, LabintheWild has attracted over 3.5M
distinct visitors from over 200 countries, resulting in over 1.5M completed
experimental sessions. We operationalized core theories of curiosity (Law et al., 2016) and social
comparison (Huber, Reinecke and Gajos, 2017) to attract intrinsically motivated participants. Our
validation studies demonstrated that results obtained on LabintheWild match
those obtained in traditional laboratory settings (Reinecke and Gajos, 2015).
LabintheWild has made it possible for us to conduct research that would not
have been feasible with traditional methods (e.g., a cross-sectional study
with 500,000 participants showing how several aspects of motor and cognitive
performance change over the life span, a study with 40,000 participants that
characterized the impact of demographic variables on the perception of
aesthetic appeal (Reinecke and Gajos, 2015), and a study with 16,000 participants that characterized the
influence of personality traits on the use of adaptive interfaces (Gajos and Chauncey, 2017)).
In our other work, we demonstrated how to use paid crowdsourcing to
perform valid performance evaluations of user interfaces (Komarov, Reinecke and Gajos,
2013) and how to perform accurate measurements of pointing performance
from in situ observations (Gajos,
Reinecke and Herrmann, 2012). Our DERBI system makes it easier to report
personalized exposure results back to the participants of large-scale
biomonitoring studies (Boronow et al., 2017).
Projects