Allow me to answer some of the points moodles has made:

It seems that this system cannot provide an accurate and/or precise evaluation of skill if a person can be at the purple star level one week and at the red dot level the next week.

There is no way to measure "skill" per se ... so we do the next best thing: we measure "results of play".

To the extent that everyone plays all their games to their highest level of ability, these correlate.

If someone plays a rated game and "fools around", or is not feeling well, or for some other reason plays below their "level of skill", it will not.

In fact, I think that the largest number of points I saw awarded/reduced for a single game was 48 points, which notably, is almost half of the points allotted to a C2 ranking level.

Your memory or your observation was inaccurate. I ran the algorithm over a wide range of possible inputs for player points and game results, and the largest swing was 16 points. Such large swings occur only in games with wide margins (at least 20 points, and usually more).

... if a person can be at the purple star level one week and at the red dot level the next week ...

To drop two color levels (200+ points) would require at least 12 "bad games" in a row - or more, if they are mixed in with some wins that did not gain much.

And if a purple star level player is losing lots of games in succession by 20 points or more, then he probably got lucky to reach that level in the first place.

It is my experience that players at that level only rarely lose games by big margins, never mind 12 or more in a row.

And if these really are "abnormally bad games" ... maybe he should take a break before piling up 12 of them.
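As a quick sanity check on the arithmetic, assuming the 16-point maximum per-game swing mentioned above (the constant names here are just for illustration):

```python
# Hypothetical sanity check: how many worst-case losses does it take
# to fall two color levels, given a 16-point maximum single-game swing?
MAX_SWING = 16          # largest observed single-game point change
TWO_COLOR_LEVELS = 200  # points spanned by two color levels

games_needed = TWO_COLOR_LEVELS / MAX_SWING
print(games_needed)  # 12.5 -> at least 12-13 consecutive worst-case losses
```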

One game does not determine what level of player you are - I think it would be nice if the ranking system reflected this.

One approach in addressing these concerns would be, before altering the points, to multiply the points to be added/subtracted by some predetermined factor (i.e., percentage). This way the methodology would be the same, and the points won=points lost, but the total points awarded/lost each game would not be so large (i.e., relative to the ranking scale). It would take you longer to climb up the rankings, but you also would not fall as quickly.

I agree with this point ... and in fact there already is such a "damping factor" built into the algorithm.

Your proposal is to reduce it - which is a possibility, but one that must be given some consideration.

I recall that during the development of the algorithm we experimented with different values before settling on what we used.
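To make the proposal concrete, here is a minimal sketch of how such a damping factor might scale the raw adjustment. The function name and the 0.5 value are illustrative assumptions, not the actual factor built into the algorithm:

```python
def damped_adjustment(raw_points, damping=0.5):
    """Scale the raw point swing by a damping factor in (0, 1].

    raw_points: the adjustment the undamped algorithm would apply.
    damping:    illustrative value only; reducing it slows both
                climbing and falling, as moodles suggests.
    """
    return raw_points * damping

# The winner gains exactly what the loser gives up, so the
# "points won = points lost" property is preserved:
swing = damped_adjustment(32)            # 16.0 with the 0.5 factor
winner_delta, loser_delta = +swing, -swing
```

Note that because both players' changes come from the same damped swing, the zero-sum property moodles wants to keep holds for any choice of factor.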

I agree that you should not lose points after winning a game. In fact, I think this not only discourages mid- to high-ranked players from playing lower-ranked players, but if they do play, encourages them to win with the largest margin possible, thus also discouraging the lower-ranked players.

It is hard to disagree with your points about encouragement/discouragement, but to some extent it is necessary to take into account the winning margin.

Unfortunately (in this context), it is the game itself which encourages this behavior, since unlike chess, there is a score at the end of the game, not just a win and a loss.

The object of the game is "play as many pieces as you can", rather than "play more than your opponent and then stop".

A system which merely took into account who won and who lost would not differentiate players as well.

The point adjustments are essentially trying to correct any errors in the relative rankings of the two players, given the additional input of the just-played game. Based on the rankings of the players and who plays first, there is an "expected result".

(E.g. a 500 level player vs a 300 level player "should win by at least X points".)

If the expectation is not met, the players' points are adjusted to bring them closer together (or possibly even switch places if they were close to begin with).
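The mechanism described above can be sketched roughly as follows. The expected-margin formula, the damping constant, the 16-point cap, and the function names are all assumptions for illustration - this is not the actual algorithm, only the shape of it:

```python
def expected_margin(rating_a, rating_b, first_player_bonus=0):
    """Hypothetical expected winning margin of A over B,
    proportional to the rating gap, adjusted for who plays first."""
    return (rating_a - rating_b) * 0.1 + first_player_bonus

def adjust(rating_a, rating_b, actual_margin, damping=0.1, cap=16):
    """Move the two ratings toward each other when the result misses
    expectation; the winner's gain equals the loser's loss."""
    surprise = actual_margin - expected_margin(rating_a, rating_b)
    delta = max(-cap, min(cap, damping * surprise))
    return rating_a + delta, rating_b - delta

# A 500-level player beats a 300-level player by only 5 points,
# well under the expected ~20, so points flow to the lower player:
a, b = adjust(500, 300, actual_margin=5)
```

Because `delta` is added to one rating and subtracted from the other, the total points in the system stay constant, and a sufficiently surprising result can make two closely ranked players switch places.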