Monday, March 19, 2007

The Cleveland Indians and Pythagoras

In the post-mortems that followed the 2006 season, chief among the questions asked was how the Cleveland Indians, given that they scored 88 more runs than they gave up, still finished 6 games under .500 (78-84). According to their Pythagorean projection, such a team should have won 89 games, for a whopping discrepancy of 11 games. Were they unlucky? Poorly managed? Or is there a flaw in the Pythagorean system?

One of the most reliable predictions about humans is that they draw all kinds of generalizations they shouldn't from a small sample of observations. Eleven games seems like a lot (especially to a town that already has a "We're cursed" mentality... and now speaking as an Indians fan, why did it have to be 11 games below what was expected?), but how big is it really?

For those unfamiliar with the Pythagorean method of estimating wins, the formula takes its name from its resemblance to the Pythagorean theorem so near and dear to the hearts of high school geometry students. The original formula was developed by Bill James (why do I always feel the need to bow whenever I write the man's name?) and looks like this:

Winning % = RS^2 / (RS^2 + RA^2)

Here, RS is runs scored and RA is runs allowed. At the time, it wasn't clear why exactly it worked, but the formula had an uncanny knack for accurately predicting a team's success. Further tinkering led to adjusting the exponent downward to 1.82 (some say 1.81).
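
For the curious, here's a quick Python sketch of the formula with the exponent left adjustable (the function name and the use of the Indians' 2006 totals, 870 runs scored and 782 allowed, are my own illustration):

def pythag_win_pct(rs, ra, exponent=2.0):
    """Pythagorean expected winning percentage for a given exponent."""
    return rs ** exponent / (rs ** exponent + ra ** exponent)

# 2006 Indians: 870 runs scored, 782 runs allowed
print(round(pythag_win_pct(870, 782), 5))        # classic exponent of 2: ~.553
print(round(pythag_win_pct(870, 782, 1.82), 5))  # adjusted exponent of 1.82: ~.548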

Later, Clay Davenport over at Baseball Prospectus suggested that the exponent should itself vary with the run environment and devised the equation 1.5 * log((RS + RA)/games) + 0.45 for the exponent, which is then plugged back into the original formula. David Smyth, in a similar vein, set the equation for the exponent at ((RS + RA)/games)^0.287. Davenport, by the way, has since endorsed Smyth's formula.
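
The two variable-exponent versions differ only in how the exponent is figured from the run environment. A minimal sketch, assuming the commonly published forms of the two formulas (the log in Davenport's version is a base-10 log):

import math

def davenport_exponent(rs, ra, games):
    """Davenport exponent: 1.5 * log10(total runs per game) + 0.45."""
    return 1.5 * math.log10((rs + ra) / games) + 0.45

def smyth_exponent(rs, ra, games):
    """Smyth exponent: (total runs per game) ** 0.287."""
    return ((rs + ra) / games) ** 0.287

def pythag_win_pct(rs, ra, exponent):
    return rs ** exponent / (rs ** exponent + ra ** exponent)

# 2006 Indians again: 870 RS, 782 RA, 162 games
x_dav = davenport_exponent(870, 782, 162)  # about 1.96
x_smy = smyth_exponent(870, 782, 162)      # about 1.95
print(round(pythag_win_pct(870, 782, x_dav), 5))
print(round(pythag_win_pct(870, 782, x_smy), 5))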

With the right database, it's a simple matter to put each formula to the test. I took all available team-seasons with more than 100 games played (2,370 seasons) and calculated each team's actual winning percentage along with its projected winning percentage under all four models (Pythagorean, Exp 1.82, Davenport, and Smyth). I then took the difference between the actual result and each projection to get the residuals and looked at their properties.
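
Roughly, the calculation looks like this in Python (the single record shown is just a placeholder, not the actual database):

import math

def exponent(rs, ra, games, model):
    """Exponent used by each of the four models."""
    rpg = (rs + ra) / games  # total runs per game
    if model == "Pythagorean":
        return 2.0
    if model == "Exp 1.82":
        return 1.82
    if model == "Davenport":
        return 1.5 * math.log10(rpg) + 0.45
    if model == "Smyth":
        return rpg ** 0.287
    raise ValueError(model)

def residuals(seasons, model):
    """Actual winning percentage minus the model's projection, one value per team-season."""
    out = []
    for rs, ra, games, wins in seasons:
        x = exponent(rs, ra, games, model)
        projected = rs ** x / (rs ** x + ra ** x)
        out.append(wins / games - projected)
    return out

# (runs scored, runs allowed, games, wins); the real sample was 2,370 team-seasons
seasons = [(870, 782, 162, 78)]
resid = {m: residuals(seasons, m) for m in ("Pythagorean", "Exp 1.82", "Davenport", "Smyth")}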

Mean residual values:
The first measure of a good predictor is whether it has some sort of bias in its estimation. Ideally, residuals should be centered at zero. Exp 1.82, Davenport, and Smyth all check in between .00028 and .00029 (roughly .05 games per 162), with Smyth the winner by a hair. The Pythagorean had a mean residual of .00038. All four had a slight tendency to over-estimate the actual values. Given the small values, though, these biases are negligible.

Skew in residuals:
Residuals, ideally, should be normally distributed. For each of the four formulae, the skew statistics showed an excellent fit to normality. Pythagorean (+.014) and Exp 1.82 (+.068) were both slightly positively skewed, as were Davenport (+.044) and Smyth (+.045). The standard criterion for a violation of normality is a skew of 3.0. Also, the standard error for the skew statistic was .050, meaning that even the most skewed (Exp 1.82) was not significantly different from zero.
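
That standard error, by the way, is just the usual large-sample approximation, the square root of 6/n, which with 2,370 seasons works out to about .050. A quick check using the skew values above:

import math

n = 2370                    # team-seasons in the sample
se_skew = math.sqrt(6 / n)  # about .050

for name, skew in [("Pythagorean", 0.014), ("Exp 1.82", 0.068),
                   ("Davenport", 0.044), ("Smyth", 0.045)]:
    print(f"{name}: skew {skew:+.3f}, skew/SE = {skew / se_skew:.2f}")  # all comfortably under 2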

Standard deviation in residuals:
I often tell my students, "If mean, then standard deviation." Clearly, none of these formulae are perfect in their estimations, but is one more given to error than the others?

The results:
Pythagorean: .026866 (4.35 games per 162)
Exp 1.82: .026440
Davenport: .026095
Smyth: .026080 (4.22 games per 162)

No really clear winner here either, although once again the Smyth formula comes out ahead by a bit. It looks like the best of the four, though the differences among them are small.
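
If you want to run the same sort of diagnostics yourself, something like this will do it (a bare-bones sketch with an arbitrary function name; feed it the residual lists from the earlier snippet):

import statistics

def diagnostics(residual_list, games_per_season=162):
    """Mean, skew, and standard deviation of a list of residuals."""
    m = statistics.fmean(residual_list)
    sd = statistics.pstdev(residual_list)
    skew = statistics.fmean([(r - m) ** 3 for r in residual_list]) / sd ** 3
    return {"mean": m, "skew": skew, "sd": sd, "sd in games": sd * games_per_season}

# e.g.: for name, r in resid.items(): print(name, diagnostics(r))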

Now, the question of whether Mark Shapiro and Cleveland are snakebit: the Exp 1.82 formula projected the Indians to have a winning percentage of .55311, while their actual winning percentage was .48148. The difference is .07163 (11.6 games), or 2.709 standard deviations. According to the z-distribution, a difference of that magnitude (in either direction, 11-plus games above the projection or 11-plus games below) would be expected about .67% of the time (roughly 1 in 150 cases), and a difference of that magnitude in that direction about .34% of the time (roughly 1 in 300 cases).

Given 30 MLB teams per year, we would expect such a discrepancy to occur about once every 5 years, and to happen in the Indians' direction (winning less than expected) about once every 10 years. To put it another way, over a ten-year period, one team gets as unlucky as the Indians and one team gets as lucky as the Indians were unlucky. It's tempting to think that karma would allow the Indians to follow up last year's bad luck with a run of good luck, but karma has no basis in statistical theory.
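
For anyone who wants to check the arithmetic behind those last two paragraphs, a quick sketch using the numbers reported above:

import math

projected, actual = 0.55311, 0.48148  # the Indians' 2006 projection vs. reality
sd = 0.026440                         # standard deviation of the Exp 1.82 residuals

z = (projected - actual) / sd                 # about 2.71 standard deviations
one_tail = 0.5 * math.erfc(z / math.sqrt(2))  # this far below the projection: ~.0034
two_tail = 2 * one_tail                       # this far off in either direction: ~.0067

teams = 30
print(two_tail * teams)  # about 0.2 such teams per season, or one every 5 years
print(one_tail * teams)  # about 0.1 teams this unlucky per season, or one every 10 years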

Are the Indians (and the rest of Cleveland) cursed? If a curse is a series of low-probability events happening in sequence, then yes, the ghost of Rocky Colavito is still haunting Jacobs Field.


While I'm here:

The week in quotes from Baseball Prospectus.
