scoring, ranking and science

This forum is for discussing space science topics related to Stardust@home.

Moderator: DustMods

cello
Posts: 6
Joined: Wed Jun 14, 2006 1:01 am
Location: Latvia

scoring, ranking and science

Post by cello »

The Stardust team developed the calibration-movie system, which works, in its way, by producing a sensitivity score and a ranking. Sensitivity is usually in the high nineties, and the ranking is based on that plus the number of movies viewed. Not bad.

However, in reality we see a different picture. Is there a single real movie marked by every second viewer? I doubt it. Most are marked by one viewer in ten, which would point to an average sensitivity of 10%, not 99%. I feel that many volunteers just run through the movies, hitting the calibration movies without problems (they really are much simpler than the real ones) and not paying enough attention to the real science, not looking through the movies carefully. Maybe a different or parallel scoring/ranking system could change things. For example, the score could be the number of real movies you have marked, counting only those marked by 3 or more viewers (or 5 or more, whatever). People would then be interested in finding more real candidates, not just in looking through more movies.
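
To make the idea concrete, here is a rough Python sketch of what I mean; the field names and the threshold are just examples, not anything from the Stardust team:

    MIN_AGREE = 3  # or 5, or whatever threshold the team prefers

    def proposed_score(user_marked_movies, marks_per_movie):
        # user_marked_movies: ids of real (non-calibration) movies this volunteer marked
        # marks_per_movie: movie id -> number of distinct viewers who marked it
        return sum(1 for movie in user_marked_movies
                   if marks_per_movie.get(movie, 0) >= MIN_AGREE)

    # Example: a volunteer marked movies A, B and C, but only A and C were
    # also marked by 3 or more viewers in total, so the score is 2.
    print(proposed_score({"A", "B", "C"}, {"A": 4, "B": 1, "C": 7}))  # -> 2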

Such is my philosophy.

P.S. This doesn't apply to everybody. Look through the forum and you will see there are also plenty of people who don't care about scoring, only about finding their own real interstellar grain :)

gavin42g
Posts: 31
Joined: Tue Aug 01, 2006 4:08 pm
Location: Kelowna (-ish), BC, Canada

Post by gavin42g »

I think as long as a few people mark a potential movie, even out of several dozen, it'll get looked at eventually. It all scales rather nicely: the more promising finds get lots of confirmations, the fainter ones fewer, regardless of the ratio of confirmations to viewings. So even if the most promising movie in the entire project only gets, say, 20 out of 100 confirmations, it's still the most promising, and will be given a high priority, right?
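
A toy illustration of what I mean (movie names and counts invented): if the review queue is simply sorted by raw confirmation count, that 20-out-of-100 movie still comes out on top:

    # Toy example: order candidate movies by raw confirmation count,
    # ignoring the confirmation-to-viewing ratio. Names and counts invented.
    confirmations = {
        "movie_0042": 20,   # 20 confirmations out of 100 viewings
        "movie_1337": 5,
        "movie_0007": 11,
    }

    review_queue = sorted(confirmations, key=confirmations.get, reverse=True)
    print(review_queue)  # ['movie_0042', 'movie_0007', 'movie_1337']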

gamalmfalyii
Posts: 30
Joined: Sun Jul 09, 2006 7:46 am
Location: New York City
Contact:

Post by gamalmfalyii »

Or so common sense would suggest. However, we've seen that many aspects of the system completely lack common sense. They're allegedly marking all the movies that have even one confirmation, and they will all be seen, but as they said, the ones with fewer agreements will be seen last. It's all about prioritizing, so we'll get word on the ones with fewer clicks by the year 2095... I mean, in a couple of months, maybe.
"She said a good day
ain't got no rain
She said a bad day's when I lie in bed
and think of things that might have been"

icebike

Post by icebike »

gamalmfalyii wrote: They're allegedly marking all the movies that have even one confirmation, and they will all be seen, but as they said, the ones with fewer agreements will be seen last.
I don't understand the "allegedly marking all the movies" bit. Who is "They" in this sentence?

The computers that run this system take any movie that gets even one mark and send that same movie to many other viewers (originally planned to be 100 additional viewers).

(Later, they will sift all the movies for the ones with the highest percentage of confirmations. These will be reviewed by the DustTeam scientists and compared to the high-resolution photos, and if those reviewers agree, NASA will start slicing and dicing aerogel to get at those specific particles.)

So the minimum number of clicks for a movie to be considered for human review is 2: one original and one confirmation. Realistically, it will probably take a dozen confirmations to even attract any interest.
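
Put as a rough Python sketch (this is just my reading of the process; the thresholds and field names are guesses, not anything official from the DustTeam):

    EXTRA_VIEWERS = 100   # each flagged movie gets re-queued to roughly this many more viewers
    MIN_MARKS = 2         # one original mark plus at least one confirmation

    def review_candidates(movies):
        # movies: list of dicts with "id", "marks" and "viewings" keys
        eligible = [m for m in movies if m["marks"] >= MIN_MARKS]
        # Sift: highest percentage of confirmations first.
        return sorted(eligible, key=lambda m: m["marks"] / m["viewings"], reverse=True)

    movies = [
        {"id": "a", "marks": 1,  "viewings": 101},   # never confirmed -> not reviewed
        {"id": "b", "marks": 12, "viewings": 101},
        {"id": "c", "marks": 60, "viewings": 101},
    ]
    for m in review_candidates(movies):
        print(m["id"], round(m["marks"] / m["viewings"], 2))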

Secondly, they have the option to weight each person's clicks according to whether that person has lousy quality scores or good ones.

(There are three of these scores: the two that appear on the screen, and a third that can be computed later, which they haven't mentioned yet. If you fail to click movies that a large percentage of other viewers clicked, or if you click seemingly at random and no one ever confirms your clicks, an attentiveness score can be computed.)
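
A weighting scheme could look something like the sketch below; the formula, names, and numbers are entirely my own invention, just to show the shape of the idea, not anything the DustTeam has published:

    def attentiveness(confirmed_clicks, total_clicks, missed_consensus, consensus_movies):
        # Low if you click seemingly at random (few of your clicks are ever
        # confirmed) or if you miss movies that most other viewers clicked.
        if total_clicks == 0 or consensus_movies == 0:
            return 0.0
        agreement = confirmed_clicks / total_clicks
        coverage = 1 - missed_consensus / consensus_movies
        return agreement * coverage

    def click_weight(screen_score_1, screen_score_2, att):
        # Combine the two on-screen scores with the later-computed attentiveness.
        return (screen_score_1 + screen_score_2) / 2 * att

    careful = attentiveness(40, 50, 2, 30)          # ~0.75
    random_clicker = attentiveness(1, 80, 25, 30)   # ~0.002
    print(click_weight(0.98, 0.95, careful))        # ~0.72
    print(click_weight(0.99, 0.97, random_clicker)) # ~0.002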

The system is utterly ingenious. Long after this project has accomplished its goals, the Social Sciences and psychology crowd is going to have a field day with this data. Dissertations have been written on less.

ToSeek
Posts: 48
Joined: Thu Aug 03, 2006 8:40 am
Contact:

Post by ToSeek »

If I understand the setup correctly, if we really knew what we were doing, not a single person would have marked a single movie yet - the odds are that slim of getting an actual event. However, not unreasonably, we're marking anything that's even suggestive of what we've been told to look for. The result is a massive number of false positives to sift through, which I hope is what the project was expecting. In any case, it's more important to endure having a lot of false positives than to miss something real just because it doesn't fit expectations.
If you're going to be just like everyone else, what's the point in existing?

icebike

Post by icebike »

ToSeek wrote: The result is a massive number of false positives to sift through.
Read what I wrote above.

The system itself does the sifting.
The highest-ranked movies will be looked at first, and those ranked low might never be looked at, because the methodology already assures that any suspect movie has been seen by at least 100 people.
