I recently read Sonas's article on ratings and was impressed by the effort he has put into his work.
But the one thing I did not see addressed was the same issue that now haunts the USCF rating system - reconciling the rating system with off-the-board, real-world realities.
The USCF has destroyed its rating system with an influx of young players - some as young as 4 or 5 - by trying to apply the same formula to them as to Grandmasters (well, with the one proviso that K is reduced at Master level). When a high school senior who once played in kindergarten and received a rating of 100, and has since studied religiously with Fritz and ChessBase, enters a tournament, he sucks the rating system dry. To offset this we now have rating floors and bonus points - all political fixes that have nothing to do with playing strength or predictive mathematical formulas.
But similar realities occur at the FIDE Grandmaster level too!
The K factor is indeed the critical issue, but the analysis stops short of what is needed. If the goal is to be predictive - and not just another bragging crown - then time limits alone are not sufficient. The K factor must vary with the weight of the game, and the same K factor should not always be used for both players.
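To make the per-player K idea concrete, here is a minimal sketch of the standard Elo update step, R' = R + K * (S - E), applied with a different K for each side. The function name and the specific K values (40 for a fast-improving junior, 10 for an established master) are illustrative assumptions, not any federation's actual rule:

```python
def update(rating: float, expected: float, score: float, k: float) -> float:
    """One Elo update step: R' = R + K * (S - E)."""
    return rating + k * (score - expected)

# A fast-improving junior (large K) draws an established master (small K).
# Expected scores here come from a ~200-point rating gap (about 0.24 / 0.76).
junior = update(1500, expected=0.24, score=0.5, k=40)  # gains 10.4 points
master = update(1700, expected=0.76, score=0.5, k=10)  # loses only 2.6 points
```

Note that once the two players carry different K factors, the points exchanged no longer cancel: the junior gains 10.4 while the master loses only 2.6, so the rating pool itself inflates or deflates over time.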
Elo's basic fault was his assumption of a zero-sum game. But it is not. If two friendly masters meet in the final round, one needing a draw to secure first place in the tournament and the other needing a draw to assure the next title level - what will the result of the game be? All the analysis of K factors and normal distributions won't help. In the real world, we KNOW the result is going to be a draw. In the rating world of calculations and formulas the result may be 64% to 36% in favor of the champion.
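That 64%-to-36% figure falls straight out of Elo's expected-score formula, E = 1 / (1 + 10^((Rb - Ra) / 400)), for roughly a 100-point rating gap. A quick sketch (the ratings 2700 and 2600 are hypothetical, chosen only to produce that gap):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Elo's expected score for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# A champion rated ~100 points above the opponent:
e = expected_score(2700, 2600)
print(round(e, 2))  # prints 0.64 - the formula expects roughly 64% to 36%
```

The formula predicts a 64/36 split, while the final-round scenario above makes the actual outcome a near-certain draw (0.5 points each) - precisely the gap between the rating world and the real world.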
Only by modeling the real world can ratings provide predictive value. Until we learn to model the rating on real-life scenarios, we won't have another breakthrough worthy of shelving Dr. Elo's work.