The Fantasy Baseball Price Guide

Last Player Picked

Mays Copeland has built a player rater. He has explained how he derives the rater's values and how he prices them. He's written software that makes it easy to customize those ratings and prices for your league. Most impressively, he has created an interface that lets the user apply a variety of projection systems (Marcel, CHONE, ZiPS, others I'm forgetting, and even a composite of them all) to create a list for their league's format.

The problem? The smell test. I ran the numbers for my Tout Wars NL league and the answers stunk. J.J. Putz was named the fourth most valuable closer in the NL, by virtue of his projected 19 saves (!). K-Rod ranked first, with a projected 35 saves. The cloud, it seems, doesn't always compute. Will Albert go for $49, as the Price Guide suggests? In a word, no. But that isn't the mistake (it is only a symptom).

Bid prices are different from projected values. Ignoring this truism means imagining that Johan Santana might be worth $48, as the LPP site suggests, but we all know that is wrong, even if Johan is healthy, which he may not be.

The problem is that no automatic pricing system based on projections is going to properly price players for auction. Why? Because projections are, inevitably, 25 percent (or more) wrong. Even my projections aren't perfect. We value projections because they fix for us what a player is expected to do, but what matters are the prices everyone else is willing to pay, and how our expectations stack up against them. That market evaluation is where the heavy lifting of draft prep comes in. It combines the information in the projection with our assessment of risk, and filters it through our knowledge of the league we play in.

A system that projects Santana for $48, when he will not go for more than $38 in your league, is misallocating $10 of your budget, compounding the effect of the error.
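To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of generic z-score pricer tools like this are built on. To be clear, this is not Mays's actual method (he documents his own derivation on the LPP site), and the players, projections, and toy budget below are all made up. The point is only that the dollars fall out of the projections mechanically, so a bad projection flows straight into the price.

```python
# A toy z-score pricer: projections in, auction dollars out.
# NOT Mays Copeland's actual formulas; names and numbers are invented.

def zscores(values):
    """Standardize values: (x - mean) / population stddev."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / (sd or 1.0) for v in values]

# Hypothetical pool: four hitters, two categories (HR, SB), $100 to spend.
projections = {
    "Hitter A": (40, 5),
    "Hitter B": (25, 30),
    "Hitter C": (12, 45),
    "Hitter D": (30, 10),
}
BUDGET = 100

# Score each category separately, then sum across categories.
hr_z = zscores([hr for hr, sb in projections.values()])
sb_z = zscores([sb for hr, sb in projections.values()])
raw = {name: h + s for name, h, s in zip(projections, hr_z, sb_z)}

# Shift so the worst drafted player is worth $1, then distribute the
# rest of the budget in proportion to value above that floor.
floor = min(raw.values())
above = {name: v - floor for name, v in raw.items()}
pool = sum(above.values())
spendable = BUDGET - len(projections)  # reserve $1 per roster spot
prices = {name: 1 + spendable * v / pool for name, v in above.items()}

for name, price in sorted(prices.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${price:.0f}")
```

Feed a pricer like this a projection of 19 saves and it will happily rank Putz fourth among NL closers; nothing in the math knows what your league will actually pay, or that your leaguemates discount saves and steals.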

I dig Mays Copeland's efforts, and maybe he isn't selling the Price Guide as a list of bid prices, but rather as a way to compare different sets of stats (projected and real), though that wasn't what I took away. My problem is that when he disses other sites, like the rototimes player rater, he offends me. Actually, he makes me an enemy. It isn't that the rototimes rater is perfect (it isn't), but it's a lot better than anything Mays has come up with. His strict adherence to category scarcity blithely ignores the way people actually play the game.

From the spirit of his site, I would think Mays would find ways to improve the rototimes rater, but instead he chooses to diss it and promote himself. That would be okay if he got it right. Mays hasn't yet.

Update: I went back and checked his 2008 prices, figuring that would be a better test of the Price Guide. It is better. The Putz problem is a result of the projections, not anything the pricer is doing wrong, but he's still giving Steals and Saves full value, so he priced Mariano Rivera at $49 last year. That is probably correct in a math sense, but it doesn't reflect the way the game is played. Also, his 2008 stats have Matt Holliday in the AL, which makes them useless for evaluating what actually happened last year. So, an interesting effort showing some promise, but there are kinks to work out.

Alex says the software is the Cadillac

The Final Update was posted at 9pm on April 9th. Both software and data packages are updated, with lots of adjustments because of the spring surprises. I mean, Andres Torres? That said, he’s coming off a fine season, so who knows?

One player I didn't update in the update was Emilio Bonifacio. His claiming of the 3B job in Florida was a surprise, as have been his heroics thus far. I probably should have bumped him up to 375 at bats (he's in there for 275 now), but I don't like to react too strongly to first-week events by changing prices. And I had gotten Bonifacio up to 275 AB because I was high on him as a super-utility guy who steals bases but doesn't field well enough to hold down a full-time job. I still think that's what he's going to end up being.

This year we created a data-only Patton $ product, for those who don't want to use the software. You can buy either by visiting askrotoman.com/patton, but if you're undecided which product fits your needs better, go to Alex's pitch for the software at Patton & Co.

Thanks to all who purchased this year’s software, and special thanks to the incredible group who have been buying it year after year after year. Your loyalty is a great compliment. Have a great season! Peter and Alex

18 Undrafted Players To Watch

RotoAuthority.com

Everybody wants to know about Sleepers. For those of us in AL- or NL-only leagues, that means aging vets who might get an unexpected shot, or a closer-in-waiting breaking out by virtue of good luck. If you play in a mixed league, this list of players not generally taken in the first 278 picks at Mock Draft Central is a good place to start your prospecting. The only question is which guys taken earlier shouldn't have been.

An article about Road Home Run Rates

Derek Carty, THT Fantasy Focus

The Hardball Times’ fantasy writer looks at which teams and players have the biggest changes in the home run rates of their road ballparks in the coming season. As he says at the end of the story, this is fun stuff, especially if you learn that one of your freezes (Josh Hamilton, let’s say) had one of the toughest road park schedules for homers last year. On the other hand, the team that gains the most this year is the Phillies, up 2.2 percent!

If they hit 105 road homers last year, this information suggests that this year they might hit 107! The last three years the Phillies have averaged 102 road home runs. Make of this what you will.

Some Post-Oscar Thoughts on Forecasting

FiveThirtyEight.com: Politics Done Right

Nate Silver is taking some guff for his foray into Oscar predictions. What is revelatory in this 538 post is how his attempt to understand why he missed two of the three contested Oscars tracks his approach to baseball projections.

The model may be wrong, but that's fixable, which is why PECOTA gets better every year. What isn't fixable, as Nate so politicly admits, are the vagaries of unprojectable circumstances. Nate found out that projecting six Oscars with a dubious data set focuses much of the attention on the vagaries and the unprojectable. Um, he got them wrong.

Which is why his protracted explanations in this post are both admirable (he's trying to figure it out) and a little sad: didn't we trust him because he knew that already?

Regular readers know that I admire Nate's work, but that I also think his great insight into projections is one of marketing, not statistics. Nate figured out how to get everyone to ascribe the failure of his subjects to follow his model to his subjects, rather than to him. That isn't a bad thing; it is a perfectly fine (perhaps brilliant) way to convey the confidence interval. But it doesn't do much to help us explain the large swath of the numbers (in my case baseball's, in Nate's, all of them) that are unpredictable.

This Year, Patton $ in Cheaper Data Only Format, Available Now!

The Patton $ 2009 page

The link takes you to the information and ordering page for Patton $ Software and this year’s new product: The Data Only for Less!

For the many who use the software to prepare their own projections and prices, make their bid lists, and run their auction or draft, the price remains the same: $30. Click the buttons on the left side of the page if you want to buy.

For the others, who have paid the $30 for the data alone, in text and Excel formats, this year we're offering the projections and bid prices for $15. Click the buttons on the right side of the page if you want to buy.

Software owners will be able to access the data files from the software download page.

For those unfamiliar with the product, a visit to the Patton $ Software and Data information and ordering page will answer many questions. Or ask a question here in the comments.

“Hand-made Trophies Worth Bragging About”

FantasyTrophies.com

Today I stumbled on an ad for Dave Mitri’s fantasy trophies on Rotowire, which I guess means they’re doing okay. I wrote them up last summer, in large part because I live around the corner from Dave and his wife Suzy, and also because the sculptures made me laugh.

They made me laugh again today, and I got to read this page about how the trophies are made, which is pretty impressive.

If the season were a horse race. Isn’t it?

BaseballRace.com

When I was a kid I had a toy race track and I spent an inglorious number of hours turning the dice to see which horse prevailed in that race.

As we all know now, but I didn’t as a magical thinking second grader, the winners came completely at random (though I may have given blue an advantage, since it was my color).

Baseballrace.com animates each season's pennant race, so you can see in a picturesque display how far ahead the front-runners were and how far behind the laggards fell.

I'm not sure there's much actual utility here, but the imaginative display of information may well help you or me or someone else come up with an idea that changes the way we think. And even if it does not, coming up with something no one else is doing is reason enough to be proud. And wouldn't it be a great idea for him to license the software to fantasy league stats providers, so that we can live and relive the year of our grief in a horse-racey animation?

Okay, maybe not. But maybe.

Alex Rodriguez and the invisible depths of steroid abuse.

By William Saletan – Slate Magazine

I'm a regular reader of Slate, which features smart, often contrarian writing about politics, culture, and lifestyle. One regular column is called Human Nature, by William Saletan, a writer who specializes in parsing semantics and finding new or clearer meaning. Human Nature is about science, which allows him to range broadly over a variety of topics.

I used to be a fan of his, but I stopped reading him after he wrote an explosive series about race and intelligence, quoting eugenics theorists who say there are racial differences without revealing that they often had ties to racialist groups. Saletan was trying to get at the truth about evolution, race, and intelligence, and to discuss how we should deal with the legal, social, and moral issues that would come with knowing there are racial differences in intelligence. That's perhaps a brave and worthy topic, if you're being speculative, but Saletan wrote it up as if the issue had been settled scientifically. It certainly has not been, and to assert that it has was a horrible blunder that destroyed the trust I had in him as a writer.

Today he writes a piece, a horribly naive series of questions about A-Rod and baseball's steroid testing, that purportedly points out that PED use is inevitably broader than the number of people caught (doh!), but also uses a broad brush to make all sorts of implications that just a little work would have taught him were false.

The 2003 secret tests weren't secret. They were part of a deal between MLB and the union. Everyone knew about them, and I'm pretty sure we can say there were no other agreed-upon testing programs before 2003. To suggest that there were is just dumb.

If there were no other tests, then the government didn't seize any other results and the union didn't suppress them. If those things didn't happen, and again, there is a nearly zero chance they did, to assert that they might have is just bogus and exploitative.

Saletan does talk about the allegations that Gene Orza, of the players' union, warned A-Rod and others of the impending 2004 tests, as the basis for suggesting the union perhaps warned other players about other tests. Could have happened; I'll give him that one.

But a timeline in the NY Times today shows that the 2004 testing didn't begin until July of 2004, and the 104 players who tested positive in 2003 weren't tested again until they had been informed of their 2003 positives, in September! With just a few weeks of testing to go between being told of their 2003 positive tests and the end of the season, those players were in effect told when the tests would happen, without actually being told. It becomes unclear how explosive the charge against Orza could be in this instance, but we'll have to see what develops.

The reason the 2004 testing started late was that the union and the owners disagreed about technical issues involving the tests and the definition of a positive test, according to the Times. No one knows why it took the union months, after federal investigators seized the urine samples in April 2004, to inform the players who had tested positive in 2003. And no one knows why the union didn't destroy the samples, as it was legally allowed to do once the results had been certified in November 2003, which would have ensured the players' anonymity, a crucial component of the 2003 testing.

(I have a question. I assume that no one knew which players tested positive until the federal investigators seized the samples, at which point it became necessary to find out who they were in order to inform them that the government had their names and their positive tests. But I don't know that. I've never seen the point addressed directly. Or maybe I should go back and reread the Mitchell report. Unless that was the case, the results weren't really anonymous anyway.)

But I'm getting off track here. The point is that Saletan ignores the facts and just makes stuff up. While that doesn't invalidate his overall point (that more players used than tested positive in 2003), and while he points out that what he's suggesting isn't necessarily true, it is really bad form that most of the scenarios behind his questions almost certainly aren't true. That's just shoddy.