
Evaluating Approaches to Crowdsourced Visual Analytics
Jessica Hullman, Erin Krupka, and Eytan Adar

We consider an alternative workflow that generates many datasets through bootstrapping and distributes visualizations of these resamples. Taken as a set, the resample datasets reflect variance in the input dataset, such as that caused by statistical error. In the integrated resampling workflow, we present a crowd member with a unique sequence of visualizations that depict different resamples drawn from the data. We show that this method, where the crowd member integrates multiple views to form a final assessment, results in order effects that systematically bias final assessments. In the distributed resampling workflow, a crowd member is presented with a single unique visualization generated from the bootstrapped data. We show that aggregating the distributed assessments does not suffer from order effects and yields additional distributional information that the other methods do not.
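To make the distributed resampling workflow concrete, here is a minimal sketch in Python. It assumes a simple numeric dataset and stands in each crowd member's visual assessment with the mean of their assigned resample; the function name, dataset values, and resample count are all hypothetical illustrations, not the authors' implementation.

```python
import random
import statistics

def bootstrap_resamples(data, n_resamples, seed=None):
    """Draw n_resamples bootstrap resamples (sampling with replacement)."""
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in data] for _ in range(n_resamples)]

# Hypothetical input dataset; in the paper each resample would be rendered
# as a visualization and shown to a different crowd member.
data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4]
resamples = bootstrap_resamples(data, n_resamples=200, seed=1)

# Stand-in for each crowd member's assessment of their single visualization:
# here, simply the mean of the resample they were shown.
assessments = [statistics.mean(r) for r in resamples]

# Aggregating the distributed assessments gives a central estimate plus
# distributional information (spread across resamples).
print(statistics.mean(assessments), statistics.stdev(assessments))
```

Because each assessment comes from an independent resample viewed in isolation, aggregation avoids the order effects that arise when a single person integrates a sequence of views.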


PDF (430KB), to appear in Collective Intelligence, 2015