Baseball HQ – FREE PREVIEW
For years Ron Shandler ran a story at Baseball HQ called the Forecast 40, which compared the projections of a group of touts at the start of the season on a selection of players about whom there were decided differences of opinion. Then after the season he did a fun little thing measuring how the touts did.
But two years ago someone posted that story at r.s.b.f. and the usual spew erupted about Ron’s evaluation methods and the rather unscientific sampling blah blah blah. So last year Ron did something a little different and polled members of his site about the players who were confounding them. And then he found projections for them from 13 touts and posted them.
You can read the story at the link above.
I comment not to point out my brilliant Preston Wilson projection, but to point out that Ron ran projections from 11 sources that were put together in March, and from two sources that were put together in early December. My gripe is that the projections in the Fantasy Baseball Guide (and the Fantasy Baseball Index) were done under much different conditions than the others Ron looks at in the story.
I don’t in fact mind being included (I think we win the Preston Wilson and finish high in a few others, which in a 13-team field isn’t bad at all, especially since we skipped the Hideki Matsui and Jose Contreras projections in December but win both of them in our February projections published at mlb.com), but I think it is material to note when the predictions were prepared. So I note it here.
The whole exercise also raises the question about what type of projection is most useful to a fantasy player. Ron seems a little obsessed with projections that deviate from the norm, and certainly there is glory to be gained by concocting a gutsy risk-taking story of emergence for some player. For Ron that player recently has been Brad Fullmer.
I think the real story about projections is that there is an awful lot of random information that gets transmitted with a player’s stats, which means that the player’s real ability and potential are shrouded beneath a veil of statistical noise. All of us who project player performance for a living (or seriously, I should say) know about this noise, and we know that it makes it impossible to judge a player’s season reliably. Or rather, we can judge the skill level just fine, but how he actually performs will be distorted by the noise.
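To put a number on that veil (this is my illustration, not anything from Ron's study), here is a toy simulation: give a hitter a fixed "true" ability of a .280 average, simulate thousands of 500-at-bat seasons, and look at how widely the observed averages scatter even though the skill never changes. The hitter and all the figures are invented for the sketch.

```python
# Toy illustration of statistical noise: a hitter with a fixed .280 "true"
# ability, observed over many simulated 500-at-bat seasons. The spread of
# outcomes is noise, not skill change.
import random

random.seed(1)

TRUE_AVG = 0.280   # the player's actual, unchanging skill level
AT_BATS = 500      # a full season's worth of at-bats
SEASONS = 10000    # number of simulated seasons

averages = []
for _ in range(SEASONS):
    # Each at-bat is an independent chance to get a hit at the true rate.
    hits = sum(1 for _ in range(AT_BATS) if random.random() < TRUE_AVG)
    averages.append(hits / AT_BATS)

averages.sort()
low = averages[int(SEASONS * 0.05)]   # 5th percentile season
high = averages[int(SEASONS * 0.95)]  # 95th percentile season
print(f"90% of this .280 hitter's seasons fall between {low:.3f} and {high:.3f}")
```

The interval comes out dozens of points wide, which is the point: the same player, with the same skill, can look like a breakout or a bust purely by chance.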
So instead we take bearings and triangulate and hope that between some amalgam of averages, similarity scores, and (I dare say) gut hunches, we get more right than the next guy. Ron seems to think there are some of us who triangulate off of his work, which may be true, though ultimately really sad.
My point is that we are constantly learning more about what goes into player performance, and so we’re constantly stripping away a little more of the veil. But we’re a long way from knowing all that’s knowable, and I’m quite certain there’s a large chunk that is purely random, which is why there will be a very good set of projections the day a bunch of monkeys with typewriters bangs out the works of Bill Shakespeare.
Someone who tries to sell you projections that are “much better” than any others is bullshitting you. The important thing for you as a consumer is to understand what system your prognosticator is using, what biases that introduces, and how to make the necessary adjustments to incorporate risk evaluation into the process and get the players who fit your fantasy league’s rules best.
Those projections that are outside the comfort zone, as Ron calls it, are flashy, but they’re of little statistical use. What you want (I think, but tell me if I’m wrong) is to follow the predictor who gets the general flow (guys who improve, guys who fall off) more right than anyone else.
If someone does that, they’ll make you money in almost any league.
(So, who’s going to study this?)
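If anyone does want to study it, the "general flow" test could be scored very simply: credit a tout whenever his projection and the player's actual season land on the same side of the prior year's number. This is my own hypothetical sketch of that scoring, with invented players and home run totals, not any tout's real projections.

```python
# Hypothetical "general flow" scorer: count how often a tout correctly
# calls the DIRECTION of a player's change (improve vs. fall off),
# ignoring how close the raw number is. All data below is invented.

def directional_hits(prior, actual, projections):
    """Count projections that call improvement vs. decline correctly.

    prior, actual, projections: dicts mapping player -> a stat (say, HR).
    A hit is when the projected change and the actual change from the
    prior year point the same way.
    """
    hits = 0
    for player, proj in projections.items():
        base = prior[player]
        # Positive product means both changes point the same direction.
        if (proj - base) * (actual[player] - base) > 0:
            hits += 1
    return hits

# Invented example: prior-year HR, actual HR, and one tout's projections.
prior = {"A": 20, "B": 35, "C": 12}
actual = {"A": 28, "B": 25, "C": 10}
tout = {"A": 25, "B": 30, "C": 15}  # calls A up, B down, C up

print(directional_hits(prior, actual, tout))  # prints 2: right on A and B, wrong on C
```

Run that across every tout's full player list for a few seasons and you would have exactly the study the parenthetical asks for: a ranking by flow, not by flash.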