Apologies to and lessons from rcklein and minkiemink

Stardust@home project news.

Moderator: Stardust@home Team

ajwestphal
Stardust@home Team
Posts: 2
Joined: Wed May 17, 2006 7:01 am


Post by ajwestphal » Fri Aug 18, 2006 12:58 pm

On August 16, we concluded that eight of the top-ranked 200 volunteers
were focusing on the calibration movies at the expense of attention
to real movies ("cheating"). This conclusion was based on a significantly longer time spent
on calibration movies as compared with real movies. After we posted a notice and zeroed the
scores of these eight, two of them contacted
us in confusion, protesting their innocence. Because their communications were polite
and credible, we had a dialog with each to try to understand what it was about
their scanning that could have led to the observed time differences.
Both volunteers described their scanning strategy in great detail. One is very thorough and
careful in examining each movie that is in focus, spending about four times longer
than the average volunteer on each movie, but is quickly dismissive of movies that
are not in good focus. We verified this by looking at the data. This volunteer took a very
conservative approach in deciding whether a movie was in good focus, rejecting
about 50% of them. Since there are no bad-focus movies among the calibrations, this
volunteer took much less time on average with real movies. Further, this volunteer took the time
on clicked tracks to check MyEvents -- this systematically increased the time for calibration movies.
We calculated that, given the fraction of movies this volunteer labeled as bad focus, we should observe just the
time difference that we did observe. The second volunteer had a hard time at first finding tracks, and
spent a lot of time early in scanning looking carefully at the calibration
movies to learn how to recognize tracks after finding them. Later, after becoming proficient,
this volunteer did the calibrations much more quickly. We reanalyzed this person's data
using a snapshot taken yesterday, just before we zeroed the scores,
and found that the time difference had dropped significantly -- in fact, it would no longer have been
flagged as "cheating", which is consistent with this person's strategy.
In both cases, these strategies, though different from each other, are reasonable and valid.
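The arithmetic behind the first volunteer's case can be sketched as follows. The specific numbers below are hypothetical placeholders (the post does not give the project's actual per-movie averages); they only illustrate how a conservative bad-focus policy, by itself, produces a large calibration-to-real time ratio without any "cheating":

```python
# Illustrative sketch only -- all times here are assumed, not measured values
# from the project. The structure mirrors the argument in the post:
# calibration movies are all in focus, real movies are roughly half bad focus.

avg_time = 10.0               # assumed site-wide average seconds per movie
careful_time = 4 * avg_time   # this volunteer spends ~4x the average on an
                              # in-focus movie (per the post)
quick_reject = 2.0            # assumed seconds to quickly dismiss a
                              # bad-focus movie
bad_focus_fraction = 0.5      # ~50% of real movies rejected as bad focus

# Calibration movies are all in focus, so each gets the careful treatment.
calibration_avg = careful_time

# Real movies are a mix of careful examinations and quick rejections.
real_avg = ((1 - bad_focus_fraction) * careful_time
            + bad_focus_fraction * quick_reject)

ratio = calibration_avg / real_avg
print(f"calibration avg: {calibration_avg:.1f}s, "
      f"real avg: {real_avg:.1f}s, ratio: {ratio:.2f}")
```

With these placeholder numbers the calibration average comes out nearly twice the real-movie average, which is exactly the kind of time difference that a threshold-based flag would catch, even though the strategy is entirely legitimate.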

We therefore apologize to rcklein and minkiemink, and thank them for their hard work and their
polite, thoughtful, and patient response. We are reinstating their statistics and scores, and will
include their data in the analyses. A good thing has come from this, at least from our point of view.
We have learned something new about differences in scanning strategies and how they can manifest
themselves in non-obvious ways in the observed distributions of evaluation times.
And we've gained important feedback about our training and volunteers' learning curves.
With new colleagues signing up every day we are always looking for ways to make the process clearer.

While we acknowledge that some flagged as "cheaters" may have acted out of innocent misunderstanding, we stand by our
response for the other six. Their time differences are so large, and the time spent on real movies is so
short, that they cannot be consistent with any valid scanning strategy that provides
useful information in searching for interstellar dust tracks.
Andrew Westphal
Senior Fellow and Associate Director
Space Sciences Laboratory, U. C. Berkeley
