I've looked up and down the FAQs, updates, and other forums, so apologies if this has been posted somewhere else.
Is the S@H team posting overall statistics for the project? Users registered? Pass rate of training program? Daily/weekly number of logins? Total hours spent logged in? Number of real movies viewed?
Averages, means, rates, etc. People with time on their hands to think about these things want to know.
My stats right now are 16-0-23 (correct-incorrect-real movies). I am currently ranked 6307 out of 10635. I don't really care about my rank in terms of my score, but about how it fits into the puzzle of how many people are churning out quality work and the rate at which the project will reach completion.
Happily borging for the dusting team,
6307 of 10635
MG it's full of *s
==================
Resistance is.... ah forget it.
How about ranking with Bayesian posterior probabilities for both sensitivity and specificity? That is, the probability that a person will correctly mark the next Tracked Calibration movie, given his or her past record of marking Tracked Calibration movies and the record of all participants combined? And the same for the Trackless Calibration movies?
I would like to see both top 100s and bottom 100s for both statistics, as being always wrong is just as good information as being always right!
Currently, is the ranking based simply on the number right minus the number wrong? If so, then unless the number of Tracked calibration movies equals the number of Trackless ones in the calibration pool, you're really ranking volume.
And such statistics would give you a probability that your search is done!
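To make the idea concrete, here's a minimal sketch in Python of the Tracked side (assuming a uniform Beta(1,1) prior; the prior could instead be fit from all participants' combined records, and the project's actual scoring formula may be entirely different):

```python
# Hypothetical sketch, NOT the official S@H scoring: Beta-Binomial posterior
# estimate of the chance a volunteer correctly marks the next Tracked
# Calibration movie. A uniform Beta(1,1) prior is assumed; an empirical-Bayes
# prior fit from everyone's records would fold in the combined participant data.

def posterior_accuracy(n_correct, n_total, prior_a=1.0, prior_b=1.0):
    """Posterior mean of P(next Tracked Calibration movie marked correctly)."""
    return (prior_a + n_correct) / (prior_a + prior_b + n_total)

# E.g., the 16-correct, 0-wrong record from the first post:
print(posterior_accuracy(16, 16))  # ~0.944: a short perfect streak stays below 1.0
```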
--Brian Schick.
Accuracy too!
To follow up on the previous idea (with which I agree), an easy (?) augmentation of the aggregate stats would be to add columns to the top-100 page showing each volunteer's specificity and sensitivity alongside their scores; there's space on the page to do that. While it's perhaps less informative than adding pages for the best and worst accuracy, it lets us compare accuracy among the most prolific volunteers and, more importantly, supplies a currently missing element of subtle incentive for all of us to be careful about accuracy. Right now the scores, ranks, and aggregate stats are mainly about volume and attach very low weight to accuracy.
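For reference, here's a rough sketch of how the two figures could be computed from a volunteer's calibration record (the tracked-as-positive, trackless-as-negative mapping is my assumption, not the official scoring):

```python
# Sketch of the two accuracy figures from calibration results. Assumed mapping
# (not official): Tracked calibration movies are the "positives", Trackless
# ones the "negatives".

def sensitivity(tracked_correct, tracked_total):
    """Fraction of Tracked calibration movies correctly marked as having a track."""
    return tracked_correct / tracked_total

def specificity(trackless_correct, trackless_total):
    """Fraction of Trackless calibration movies correctly reported track-free."""
    return trackless_correct / trackless_total

# Made-up example numbers:
print(sensitivity(45, 50))    # 0.90
print(specificity(190, 200))  # 0.95
```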
tiburd
I agree about the aggregate statistics. A psych major could do all kinds of studies. The following might make neat graphics without compromising any individual's data (a toy sketch of one follows the list):
- Accuracy as a function of average viewing time for each volunteer.
- Accuracy as a function of local time of day for each volunteer.
- Aggregate density of clicks as a function of position on the image.
- Overall views as a function of time of day (GMT).
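As promised above, here's a toy sketch of the third graphic, click density over image position (all data here is random stand-in data, and the 640x480 px movie size is an assumption; the real inputs would be the project's aggregated click logs):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.uniform(0, 640, 10_000)  # click x-positions (assumed 640 px wide)
y = rng.uniform(0, 480, 10_000)  # click y-positions (assumed 480 px tall)

plt.hist2d(x, y, bins=(64, 48))  # 2-D histogram = click-density map
plt.colorbar(label="clicks per bin")
plt.xlabel("x position (px)")
plt.ylabel("y position (px)")
plt.title("Aggregate click density (toy data)")
plt.show()
```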
It might be neat to make a new account and see what rank it computes after one, ten, and 100 movies viewed. Actually, you might need to use the calibration-movie-based scores. A few intermediate people such as myself (score 946, total movies 3569, rank 783 out of 13,983) would fill in the middle of the graph without wasting too much time.
As an ex-SETI@home participant, I would like to see the ranking be more of a "weighted" ranking that takes into account all the factors of participation: total number of movies viewed AND accuracy.
Perhaps a weighted rank like this:
WT. RANK = (Total REAL movies viewed)*(Specificity)*(Sensitivity)
This would allow for volume AND accuracy. Someone who just flies through the movies without regard to accuracy would rank lower overall than someone who views fewer movies but has high accuracy.
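A quick sketch of how that plays out (all numbers made up):

```python
# The weighted rank proposed above, with hypothetical volunteers.
def weighted_rank_score(real_movies_viewed, specificity, sensitivity):
    return real_movies_viewed * specificity * sensitivity

careful = weighted_rank_score(500, 0.98, 0.95)   # 465.5
speedy  = weighted_rank_score(1000, 0.60, 0.55)  # 330.0
print(careful > speedy)  # True: half the volume but much better accuracy wins
```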
Just a thought...
Also, I'd like to see the rankings tables extend beyond the top 100; I'm sure many of us would like to see our rank relative to other people...