Re: Skill Score
Posted: Thu Jul 12, 2012 2:22 pm
Sorry to cross-post, but having flagged it up here, I think the issue belongs in this topic:
[... ...]
1: My original skill-score proposal was that skill be shown on a percentage scale and, although I didn’t specify it, I envisaged at least one or two decimal places (e.g. 0.00 - 100.00).
The implemented 0.00 – 1.00 scale reduces that sensitivity 100-fold, meaning one can seemingly remain forever within a certain narrow band with no feedback on gradual performance improvement (or decline).
[I did privately ask the team to let me see their precise implementation of the formula before it went live, but heard nothing until it suddenly appeared in its current form.
Could that, perhaps via a hidden rounding-up step, and/or by clicking refresh rather than ‘no track’, explain how both enigmatic leaders of the pack (id & Noname Yet) spookily maintain their perfect and static 1.00 skill levels? I’m mostly jealous, but certainly curious.]
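To make the rounding suspicion concrete, here is a minimal sketch. It assumes (and this is purely my assumption, since the team's implementation hasn't been shared) that the site simply rounds the raw 0–1 score to two decimal places for display:

```python
# Hypothetical illustration: if the displayed skill is just the raw
# 0-1 score rounded to two decimal places, quite different
# performances become indistinguishable at the top of the table.
def displayed(raw_score):
    """Round a raw 0-1 skill score to the two decimals shown on site."""
    return round(raw_score, 2)

# A user with 996/1000 correct and one with 999/1000 would both
# appear as a "perfect" and static 1.00:
print(displayed(996 / 1000))  # 1.0
print(displayed(999 / 1000))  # 1.0

# On the 0.00 - 100.00 percentage scale I proposed, they'd differ:
print(round(100 * 996 / 1000, 2))  # 99.6
print(round(100 * 999 / 1000, 2))  # 99.9
```

So a "perfect" 1.00 need not mean a perfect record at all; anything above roughly 0.995 would display identically.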
2: Because of the cumulative nature of the formula, my own experience and that of others such as ERSTRS suggest that the minimum of 50 PMs viewed (during which one is implicitly expected to learn all the subtleties of PM-track, and hopefully new real-track, detection before being fairly scored) is probably too few. I’d now recommend several hundred, if not 1000!
Essentially, too little credit is given to gaining experience over time; make too many errors early on and you’re faced with a slow uphill struggle.
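The uphill struggle can be sketched numerically. Assuming (again, my assumption, not the published formula) that skill is the running fraction of correct answers over one's entire history:

```python
# Hypothetical sketch: if skill is the running fraction of correct
# answers over ALL classifications ever made, early mistakes weigh
# on the score long after performance has improved.
def cumulative_skill(results):
    """Running mean of a list of 1 (correct) / 0 (incorrect) results."""
    total = 0
    scores = []
    for i, r in enumerate(results, start=1):
        total += r
        scores.append(total / i)
    return scores

# 25 rough early answers (40% correct) followed by 75 near-perfect ones:
history = [1, 0] * 10 + [0] * 5 + [1] * 75
scores = cumulative_skill(history)
print(round(scores[24], 2))  # 0.4  - after the shaky learning phase
print(round(scores[-1], 2))  # 0.85 - still well short of recent form
```

Even after 75 consecutive correct answers, the lifetime score sits at 0.85, nowhere near the user's actual current ability.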
3: Can I therefore now suggest that ‘skill’ should in fact be reassessed and updated at regular intervals; it’s something that hopefully grows, after all. How many Olympic gold medallists would have won their medal if all their failures over the previous years were also taken into account?
I stand by the formula (provided the results are presented in more detail – point 1), but would like to propose it be recalculated on, say, a monthly or 3-monthly basis – either for everyone simultaneously, or based on each individual’s start date.
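One simple way a periodic reassessment could work (a sketch only; the interval and window size here are illustrative, not a worked-out proposal) is to score only the most recent batch of classifications rather than the whole history:

```python
# Hypothetical sketch of periodic reassessment: score only the most
# recent window of classifications instead of the lifetime record.
def windowed_skill(results, window=50):
    """Fraction correct over the last `window` results (1=correct, 0=not)."""
    recent = results[-window:]
    return sum(recent) / len(recent)

history = [0] * 25 + [1] * 75   # poor start, strong recent form
print(round(sum(history) / len(history), 2))  # 0.75 lifetime score
print(round(windowed_skill(history, 50), 2))  # 1.0 over the last 50
```

A recalculation like this, run monthly or quarterly, would let the displayed skill track a user's current ability rather than permanently penalising their learning phase.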
Thoughts, anyone?
John