Discuss your experiences with and ideas about Stardust@home here.

Moderators: Stardust@home Team, DustMods

Posts: 2
Joined: Mon Jul 31, 2006 6:34 pm
Location: Cleveland Hts., OH


Post by optuser »

I've looked all up and down the FAQs, updates, and the other forums, so apologies if this has been posted somewhere else.

Is the S@H team posting overall statistics for the project? Users registered? Pass rate of training program? Daily/weekly number of logins? Total hours spent logged in? Number of real movies viewed?

Averages, means, rates, etc. People with time on their hands to think about these things want to know.

My stats right now are 16-0-23 (correct - incorrect - real movies). I am currently 6307 out of 10635. I don't really care about my rank in terms of my score, but about how it fits into the puzzle of how many people are churning out quality work and the rate at which the project will approach completion.

Happily borging for the dusting team,
6307 of 10635
MG it's full of *s
Resistance is.... ah forget it.
Posts: 994
Joined: Wed May 17, 2006 8:33 pm
Location: Indiana, USA

Post by Nikita »

You know, that would be interesting. However, I don't think they have the time right now. But I'm sure they will need it later!

By the way, I like your sig! :lol:
From dust we come
Posts: 2
Joined: Mon Aug 07, 2006 5:46 pm
Location: Davis, CA, USA

Post by brianschick »

How about ranking with Bayesian posterior probabilities for both sensitivity and, what was the other one, specificity? I.e., the probability that a person will correctly mark the next Tracked Calibration movie, given his or her past record of marking Tracked Calibration movies and given the combined record of all participants? And the same for the Trackless Calibration movies?
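One way to sketch that idea is a Beta-Binomial update, where the pooled record of all participants supplies the prior and each volunteer's own calibration record updates it. (This is just one possible formalization, and every number below is made up for illustration, not real S@H data.)

```python
def posterior_mean(hits, misses, pool_hits, pool_misses, prior_weight=10):
    """Estimated probability the volunteer marks the next calibration
    movie correctly, shrunk toward the pooled rate of all participants.

    The pooled rate acts as a Beta prior of strength `prior_weight`
    (a made-up tuning knob); the volunteer's own record updates it.
    """
    pool_rate = pool_hits / (pool_hits + pool_misses)
    alpha = prior_weight * pool_rate + hits
    beta = prior_weight * (1 - pool_rate) + misses
    return alpha / (alpha + beta)

# A newcomer with 2-for-2 is pulled toward the pooled rate (0.8 here),
# while a veteran's long record dominates the prior:
newcomer = posterior_mean(2, 0, 8000, 2000)    # ~0.83, not 1.0
veteran = posterior_mean(190, 10, 8000, 2000)  # ~0.94
```

The shrinkage is what makes ranking by such a posterior fairer than raw right-minus-wrong: a perfect record over two movies doesn't outrank a near-perfect record over two hundred.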

I would like to see both top 100s and bottom 100s for both statistics, as being always wrong is just as good information as being always right!

Currently, is the ranking based simply on number right minus number wrong? If so, then unless the number of tracked calibration movies in the calibration pool equals the number of untracked ones, you're really just ranking volume.

And such statistics would give you a probability that your search is done!

--Brian Schick.
Posts: 24
Joined: Thu Aug 03, 2006 7:47 pm
Location: Wyoming

accuracy too !

Post by tiburd »

To follow up on the previous idea (with which I agree), an easy (?) augmentation of aggregate stats could include at least adding columns for the top 100 to display their specificity and sensitivity, along with their scores. There's space to do that on the page. While it's perhaps less informative than adding pages for the best & worst accuracy, it allows us to compare the accuracy among the most prolific volunteers, and--MORE IMPORTANTLY--provides a currently missing element of subtle incentive for all of us to be careful about accuracy. Currently, the scores / ranks / aggregate stats are mainly about volume and attach a very low weight to accuracy.
Posts: 19
Joined: Mon May 22, 2006 10:45 am
Location: Knoxville, TN

Post by Martino »

I agree about the aggregate statistics. A psych major could do all kinds of studies. The following might make neat graphics without compromising any individual's data:

Accuracy as a function of average viewing time for each volunteer.
Accuracy as a function of local time of day for each volunteer.
Aggregate density of clicks as a function of position on the image.
Overall views as a function of time of day (GMT).
Posts: 74
Joined: Thu Aug 03, 2006 7:55 pm
Location: Topanga, California

Post by minkiemink »

A combined total of 13,972 users were registered as of today. Any idea how many of those are actually searching, and how many are not? Just a little curious.
“The true harvest of my life is intangible - a little star dust caught, a portion of the rainbow I have clutched”
-Henry David Thoreau
Posts: 19
Joined: Mon May 22, 2006 10:45 am
Location: Knoxville, TN

Post by Martino »

It might be neat to make a new account and see what rank it computes after one, ten, and 100 movies viewed. Actually, you might need to use the calibration-movie-based scores instead. A few intermediate people such as myself (score 946, total movies 3569, rank 783 out of 13,983) would fill in the middle of the graph without wasting too much time.
Posts: 5
Joined: Sat Oct 14, 2006 11:42 am
Location: Carlsbad, CA

Post by LarryG »

As an ex-SETI@home participant, I would like to see the ranking be more of a "weighted" ranking that takes into account all the factors of participation: total number of movies viewed AND accuracy.

Perhaps a weighted rank like this:

WT. RANK = (Total REAL movies viewed)*(Specificity)*(Sensitivity)

This would reward both volume AND accuracy. Someone who just flies through the movies without regard to accuracy would be ranked lower overall than someone who views fewer movies but has high accuracy.
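To see how the formula plays out (all numbers below are made up, just to illustrate the trade-off):

```python
def weighted_score(real_movies_viewed, sensitivity, specificity):
    """LarryG's proposed weighted rank: volume times both accuracy rates."""
    return real_movies_viewed * sensitivity * specificity

# A careful viewer with a quarter of the volume still outranks
# a fast but sloppy one:
careful = weighted_score(500, 0.95, 0.90)   # 427.5
sloppy = weighted_score(2000, 0.40, 0.50)   # 400.0
```

Because the two accuracy terms multiply the volume, guessing at random (accuracy near 0.5 on each) cuts your effective score to a quarter of your movie count.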

Just a thought...

Also - I'd like to see the ranking tables extend beyond the top 100 - I'm sure many of us would like to see our rank relative to other people...