Thursday, May 30, 2013
Michael Lewis' book Moneyball concentrated on the Oakland A's in 2002, more on the front office than on the players on the field. The idea was that general manager Billy Beane and his employees were looking at new and better ways to develop talent and use it, giving them a chance to be competitive with teams whose payrolls were several times higher than what the frugal A's ownership was willing to spend.
Beane and his team used sabermetrics, a word coined from the acronym SABR, the Society for American Baseball Research. The idea was that he could draft players under-appreciated by other clubs and build a nucleus of young talent, even though the best of them would be lost to free agency within a few years.
Beane's method really wasn't that scientific. What Billy Beane wanted to avoid was drafting another Billy Beane. Coming out of high school, Beane was much sought after, and he debated going to college before signing to play professionally instead. He eventually made the major leagues, but he was never the All-Star the scouts had hoped he would be.
Beane the general manager drafted no high school players in 2002, worried there was a high probability he might find too many kids like himself who crumbled under the pressure of professional baseball.
Using the 2002 draft as our data set, let's ask the question: Is there a significant difference between the success rates of high school and college draftees?
The null hypothesis: There is no significant difference.
Data set #1: The first 50 position players drafted
Data set #2: The first 50 pitchers drafted
How we split the sets: each player was drafted out of either high school or college, and each either made a major league roster or did not.
We will perform a chi-square test to see if the differences we see are significant.
Problem with this test: We are lumping together some players with very good careers so far with some guys who just barely had a cup of coffee in The Show. That problem will be addressed in the test used tomorrow.
Data set #1 (position players):
High school draft: 13 made the majors, 13 did not
College draft: 13 made the majors, 11 did not
Test statistic: chi-square = 0.087, well below even the 90% confidence threshold of 2.706
Data set #2 (pitchers):
High school draft: 13 made the majors, 10 did not
College draft: 12 made the majors, 15 did not
Test statistic: chi-square = 0.725, well below even the 90% confidence threshold of 2.706
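For readers who want to check the arithmetic, here is a quick sketch of the Pearson chi-square computation for these 2×2 tables (no Yates continuity correction, which matches the statistics quoted above):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Rows: high school, college; columns: made the majors, did not
position_players = [[13, 13], [13, 11]]
pitchers         = [[13, 10], [12, 15]]

print(round(chi_square_2x2(position_players), 3))  # 0.087
print(round(chi_square_2x2(pitchers), 3))          # 0.725
```

Both statistics fall far short of 2.706, the 90% confidence cutoff for one degree of freedom, so we fail to reject the null hypothesis.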
These numbers just count whether players made it to the majors or not, and as we can see, out of the first hundred or so players chosen, about half will see major league experience, and high school draftees are not significantly different from college draftees. Another question is how good those major leaguers are when we compare the high schoolers to the collegians. Tomorrow, we will use a different statistical test on only one stat per player; not a completely fair test, but it does give an approximate idea of the players' worth to their squads.
Wednesday, May 29, 2013
Polling is still scarce on the governor's race in Virginia, which is as it should be since the election is in November. A new poll came out this week from Public Policy Polling showing a very similar pattern to the poll from mid-month from Quinnipiac.
In brief, the last two polls show a lead for the Democrat McAuliffe over the Republican Cuccinelli, right now at 42% to 37% compared to 43% to 38% earlier in the month. Both candidates have large negatives and about 20% of the public is not tuned in yet.
The first poll in May showed a lead for Cuccinelli. While the Confidence of Victory now shows a 92.8% chance for McAuliffe, that would only be true if the election were being held today, which obviously it isn't. More than that, if the election were right around the corner, I would not want to base my results on a single poll with 21% either undecided or voting for some other option. Unlike Nate Silver, I never talk about these early results as showing anything about what will happen in November. These are just early snapshots.
Tuesday, May 28, 2013
As of this series, there are eleven people who have ventured an opinion on all thirteen series now completed. Unfortunately for the class average, the two best predictors so far, Tim Legler and Chad Ford, both neglected to give their opinions, so the overall quality of the predictions suffers mightily. When the entire playoff season is over, I'll show how everyone who made any predictions at all did.
Abbott: Grizzlies in 6 (0 of 10 points) 62.3% overall
Adande: Spurs in 7 (7 of 10 points) 68.5% overall
Arnovitz: Spurs in 7 (7 of 10 points) 69.2% overall
Barry: Grizzlies in 6 (0 of 10 points) 64.6% overall
Elhassan: Spurs in 7 (7 of 10 points) 71.5% overall
Gutierrez: Spurs in 7 (7 of 10 points) 65.4% overall
Haberstroh: Spurs in 7 (7 of 10 points) 73.1% overall
Pelton: Spurs in 7 (7 of 10 points) 69.2% overall
Stein: Grizzlies in 6 (0 of 10 points) 60.8% overall
Wallace: Spurs in 7 (7 of 10 points) 70.8% overall
Windhorst: Spurs in 6 (8 of 10 points) 65.4% overall
If we were putting letter grades on these numbers, the majority of the class would be failing. Legler and Ford would make the group look better with a B+ and a B, both with twelve predictions instead of thirteen, but this group's star is Haberstroh, with a C that is close to a C-, and seven students sit in the D range. In the series that remains, everyone is predicting the Miami Heat to win, so they all could pick up some much-needed points, but no one in this group can reach an 80% prediction rate over the entire fifteen series, and some could dip below 60% if the Indiana Pacers get hot.
I don't do this to mock these people. I do this to show that prediction is usually very difficult and if there is a high degree of variability, even the best don't do very well.
Friday, May 24, 2013
Let us assume the objects we are arranging in our permutations have some natural order. For example, if we are using the first five letters of the alphabet, the natural order would be the permutation abcde, while the opposite direction edcba would be as far out of order as we could imagine.
What we will look at is consecutive pairs in a permutation and we will count them as being in order or not being in order. For example, let's take ecabd.
ec not in alphabetical order
ca not in alphabetical order
ab in alphabetical order
bd in alphabetical order
So this is a sequence that has two consecutive pairs in order and two that are not. It is not the only permutation of five letters with this property. For example, adecb also has two consecutive letter pairs in order (ad and de) and two that are not (ec and cb).
If we want to count such things, the easiest tool to use is Euler's Triangle. Here are the first few rows.
1
1 1
1 4 1
1 11 11 1
1 26 66 26 1
It bears some resemblance to Pascal's Triangle, since the first and last numbers in each row are always 1 and each row reads the same forwards and backwards. The first few rows of Pascal's Triangle are
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
The next row of Pascal's Triangle can be created by adding together each pair of consecutive elements of the previous row. The rule is similar for Euler's Triangle, except each number gets a multiplier. Let's use the row 1 26 66 26 1 to make the next row.
1 × 1 ... 2 × 26 ... 3 × 66 ... 4 × 26 ... 5 × 1 ..... 0
+ 0 ..... + 5 × 1 ... + 4 × 26 ... + 3 × 66 ... + 2 × 26 ... + 1 × 1
= 1 ....... 57 ....... 302 ....... 302 ....... 57 ....... 1
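The multiplier rule can be sketched in a few lines of Python. This uses the recurrence A(n, k) = (k+1)·A(n−1, k) + (n−k)·A(n−1, k−1), which is exactly the pattern shown above:

```python
def euler_triangle(rows):
    """Generate the first `rows` rows of Euler's triangle (Eulerian numbers)."""
    triangle = [[1]]
    for n in range(2, rows + 1):
        prev = triangle[-1]
        row = []
        for k in range(n):
            # multiplier (k+1) on the entry above, (n-k) on the entry above-left
            left = (k + 1) * (prev[k] if k < len(prev) else 0)
            right = (n - k) * (prev[k - 1] if k >= 1 else 0)
            row.append(left + right)
        triangle.append(row)
    return triangle

for row in euler_triangle(6):
    print(row)
# last row printed: [1, 57, 302, 302, 57, 1]
```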
The sum across any row of Pascal's Triangle is always a power of 2.
1 sum = 1
1 1 sum = 2
1 2 1 sum = 4
1 3 3 1 sum = 8
1 4 6 4 1 sum = 16
1 5 10 10 5 1 sum = 32
The sum across any row of Euler's Triangle is always a factorial.
1 sum = 1! = 1
1 1 sum = 2! = 2
1 4 1 sum = 3! =6
1 11 11 1 sum = 4! = 24
1 26 66 26 1 sum = 5! = 120
1 57 302 302 57 1 sum = 6! = 720
Like Pascal's Triangle, Euler's Triangle shows up in places that don't seem to have much to do with factorials at first blush, just as Pascal's Triangle might show up in a formula that doesn't seem to have much to do with powers of 2.
As a reminder, the first 1 in the fifth row means there is just one way to order the first five letters in perfect alphabetical order: abcde. The first 26 means that if exactly one consecutive pair is out of order, as in dabce, there are 26 patterns that fit that description.
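We can double-check that fifth row by brute force, tallying the out-of-order consecutive pairs in all 120 permutations of abcde:

```python
from itertools import permutations
from collections import Counter

# For each permutation of 'abcde', count consecutive pairs that are OUT of order
counts = Counter(
    sum(1 for x, y in zip(p, p[1:]) if x > y)
    for p in permutations("abcde")
)

print([counts[k] for k in range(5)])  # [1, 26, 66, 26, 1]
```

The tally reproduces the row 1 26 66 26 1, and the counts necessarily sum to 5! = 120 because every permutation lands in exactly one bucket.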
Thursday, May 23, 2013
A permutation is an arrangement of n distinct objects where order matters. For example, if you have a deck with n cards, all the permutations would mean every possible arrangement you could create by shuffling the cards. For this demonstration, let's use the lowercase letters of the alphabet for our objects.
If we have only one object, there is only one permutation: a
If we have two objects, there are two permutations: ab and ba
Three objects increase the number of permutations to six: abc acb cab bac bca cba
Four objects can be arranged into 24 different permutations.
This number sequence 1, 2, 6, 24, ... is the start of the factorials, which are denoted with an exclamation point.
1! = 1
2! = 2 × 1 = 2
3! = 3 × 2 × 1 = 6
4! = 4 × 3 × 2 × 1 = 24
5! = 5 × 4 × 3 × 2 × 1 = 120
Here's why the sequence increases this way. We have the list above of the six permutations of the letters a, b and c. If we add the letter d to the list, take any of the three-letter permutations, for example cab. It's possible to add the d to this pattern in four places.
dcab cdab cadb cabd
The d can be put in the first, second, third or fourth position. Since there are six different three-letter permutations, we multiply six by four to get twenty-four different permutations of four distinct objects.
The factorials increase very quickly, even faster than exponential growth: 10! = 3,628,800, and the number of ways to shuffle a standard 52-card deck, 52!, is a number with sixty-eight digits, about 8.0658 × 10^67. A deck with 60 cards would have more permutations than the current estimate of the number of atoms in the observable universe.
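Python's arbitrary-precision integers make these claims easy to check:

```python
from math import factorial

print(factorial(10))                  # 3628800
f52 = factorial(52)
print(len(str(f52)))                  # 68 digits
print(f"{f52:.4e}")                   # 8.0658e+67
```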
Tomorrow, we'll look at a way to categorize permutations and introduce the numbers in Euler's Triangle.
Sunday, May 19, 2013
We are now halfway through, down from sixteen teams to four. Each contest to eliminate a team is a best of seven series and twelve of these series have already been played.
ESPN carries the games in the early rounds and the championship is aired by ABC. On their website, they ask a group of experts for predictions about each series. I have taken those predictions so far and graded the experts on how well they foretold the actual result.
Here's the grading system for those who are interested.
1. Predicting the correct team is worth 7 points. Predicting the wrong team is worth 0 points.
2. In each series, the experts also say how long the series will last. Four games is the shortest and seven the longest.
3. If an expert predicted the winner correctly, it is possible to get 0, 1, 2 or 3 extra points depending on the predicted length and the actual length of the series.
Gets the series length precisely: 3 extra points for a total of 10, a perfect prediction.
Predicts either one game too many or one game too few: 2 extra points for a total of 9
Predicts either two games too many or two games too few: 1 extra point for a total of 8
Predicts either three games too many or three games too few: 0 extra points for a total of 7
4. Close but no cigar points: If an expert predicts a seven game series and the series goes seven games but the team predicted to win actually loses, the expert gets 5 points total for getting everything but the last game's result correct.
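The four rules above can be sketched as a short scoring function. The team names in the examples are only illustrations of each rule, not actual graded predictions:

```python
def grade_prediction(pred_team, pred_games, actual_team, actual_games):
    """Score one series prediction under the grading rules above."""
    if pred_team == actual_team:
        # 7 points for the winner, plus 3/2/1/0 bonus as the predicted
        # length is exactly right / off by one / two / three games
        return 7 + max(0, 3 - abs(pred_games - actual_games))
    # Close but no cigar: predicted a 7-game series, it went 7,
    # but the predicted winner lost the last game
    if pred_games == 7 and actual_games == 7:
        return 5
    return 0

print(grade_prediction("Spurs", 4, "Spurs", 4))  # 10 (perfect prediction)
print(grade_prediction("Spurs", 6, "Spurs", 4))  # 8  (length off by two)
print(grade_prediction("Spurs", 7, "Spurs", 4))  # 7  (length off by three)
print(grade_prediction("Heat", 7, "Spurs", 7))   # 5  (close but no cigar)
print(grade_prediction("Heat", 6, "Spurs", 4))   # 0  (wrong team)
```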
List of experts: Abbott, Adande, Arnovitz, Barry, Elhassan, Ford, Gutierrez, Haberstroh, Legler, Palmer, Pelton, Stein, Wallace, Windhorst
Results of the twelve series so far and the right-wrong record of the experts:
Heat-Bucks: Heat wins, Experts 14 right, 0 wrong
Pacers-Hawks: Pacers win, Experts 13 right, 1 wrong
*******Nets-Bulls: Bulls win, Experts 9 right, 7 wrong
Knicks-Celtics: Knicks win, Experts 14 right, 0 wrong
Thunder-Rockets: Thunder wins, Experts 14 right, 0 wrong
Spurs-Lakers: Spurs win, Experts 13 right, 1 wrong
**************Nuggets-Warriors: Warriors win, Experts 0 right, 14 wrong
*********Clippers-Grizzlies: Grizzlies win, Experts 5 right, 9 wrong
Heat-Bulls: Heat wins, Experts 14 right, 0 wrong
Thunder-Grizzlies: Grizzlies win, Experts 13 right, 1 wrong
Spurs-Warriors: Spurs win, Experts 14 right, 0 wrong
***********Knicks-Pacers: Pacers win, Experts 3 right, 11 wrong
The asterisks indicate the difficult series to predict. Not a single expert went out on a limb to predict the Warriors beating the Nuggets, so no one has a spotless record. The Nets and Bulls went to seven games and the experts were nearly evenly divided. In the other two tough series to predict, the Grizzlies were seriously underrated and the Pacers were ridiculously underrated.
Expert ratings after twelve series
Name: points out of 120 possible (percentage correct)
Legler: 106 (88%)
Ford: 100 (83%)
Haberstroh: 88 (73%)
Elhassan: 86 (72%)
Wallace: 85 (71%)
Barry: 84 (70%)
Arnovitz: 83 (69%)
Pelton: 83 (69%)
Adande: 82 (68%)
Abbott: 81 (68%)
Palmer: 81 (68%)
Stein: 79 (66%)
Gutierrez: 78 (65%)
Windhorst: 77 (64%)
I want to give Tim Legler and Chad Ford props for doing so well in the first two rounds, though as of this morning they have not given predictions for either of the next two rounds, and the first game of Spurs-Grizzlies starts in less than an hour. There are more than 14 experts working at ESPN, but I only grade those who make a call on every series contested. If both of them are removed, I may change my grading system to include all predictions, even those from people who made only a few. If I don't do that, all the experts will look like students struggling to avoid a C- grade or worse. Prediction is hard, but it isn't that hard.
Saturday, May 18, 2013
A new poll has been taken for the Massachusetts senate race and Democrat Ed Markey continues to lead. This poll was taken on May 15 and the most recent poll before this one was completed on the 7th, so using my seven day rule, this is the only poll being considered.
Most recent poll: 15 May 2013
Polls taken within a week of the most recent: 1
Lead: 7% lead for Markey(D)
Confidence of Victory of the median poll: 98.2%
I do prefer having a larger set of data on which to base a post, but I will remind readers that I consider these updates to be snapshots of an evolving situation rather than predictions of what will take place on election day. While this is "just one poll", the resulting Confidence of Victory numbers are much the same as we saw last week. I intend to have an update at least once a week until the election on June 25 if there are enough new polls to warrant such regular reports. I'm not sure how many polling companies will be working this race in May, but I fully expect multiple polls a week in June.
Thursday, May 16, 2013
Virginia's gubernatorial election doesn't take place until November, but we are getting early polling results. I do not consider these numbers a prediction of what will happen, but more like a snapshot of the current situation.
Most recent poll: 13 May 2013
Polls taken within a week of the most recent: 1
Lead: 5% lead for McAuliffe(D), 43% to 38%
Confidence of Victory: 97.7%
I don't consider polls taken this far in advance to be completely meaningless, but the phrase Confidence of Victory rings hollow right now, and I say that as the inventor of the phrase. It's all about the proviso "if the election were held when the poll was taken", and obviously, the election is not being held this week or even this month. More than that, a poll at the beginning of May gave Cuccinelli a commanding lead that would have given him a 99.7% Confidence of Victory.
Still, this is an important race and I will continue to cover it throughout the year.
Wednesday, May 15, 2013
The data I'm using for today's post comes from the website CO2Now.org. If some "skeptic" wanders by and says this is data from an interested party and therefore not unbiased, bite me.
In fact, unlike temperature data, which swings wildly from year to year, average yearly CO2 levels are increasing steadily, and there is no contrarian position.
My question is how steadily?
This graph is a manipulation of a data set taken from the Mauna Loa CO2 readings. Let me explain my process.
1. Take the average yearly levels from 1959 to 2012, the first and last years with a full twelve months of data.
2. Starting in 1968, subtract from each yearly level the level from ten years prior (example: 1968 level - 1959 level) and divide by 10 to get the average yearly change over the previous decade.
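Step 2 can be sketched in a few lines of Python. The ppm values below are rough stand-ins for illustration only, not the CO2Now.org data set itself:

```python
def decade_rate(annual_ppm, year):
    """Average yearly change in CO2 over the decade ending at `year`."""
    return (annual_ppm[year] - annual_ppm[year - 10]) / 10

# Approximate, illustrative annual means in ppm (NOT the actual data set)
annual_ppm = {1959: 316.0, 1969: 324.6, 2002: 373.1, 2012: 393.8}

print(round(decade_rate(annual_ppm, 1969), 2))  # 0.86 ppm/year in the 1960s
print(round(decade_rate(annual_ppm, 2012), 2))  # 2.07 ppm/year in the last decade
```

Even with these rough numbers, the point of the post comes through: the decade-average rate of increase itself keeps climbing.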
What this shows is not just that CO2 levels are increasing, but that the rate of change is increasing as well. The increase in the rate is not exactly linear, but you can see how close the black trend line is to the jagged red line with the white dots. The R² value of 0.88... means the fit to the line is very good. If the rate of increase were exactly linear, the graph of the CO2 levels themselves would be the increasing part of a parabola. We call this quadratic growth.
The good news is this is not exponential growth, for all the good that does us. Quadratic growth is faster than linear growth, and even if the rate flattened out at current levels, CO2 is now increasing about twice as fast as it did in 1970. (In comparison, world population has increased by a factor of 1.8 over the same period.) Given the relatively steady growth of the rate, the amount of change we saw over the last 40 years should take only about the next 30.
CO2 matters. It's a natural part of our environment, but like everything in nature, too much of it is not good. CO2 does a lot of things, many of them positive for the environment, but it is a greenhouse gas, which means it helps trap heat from the Sun. The greenhouse effect is about as controversial as gravity. More CO2 in the long run means higher temperatures in the long run. That can mean big changes in the environment, positive for some and negative for others. Overall, it does not look like a zero-sum game; there is much more pain than gain.
The so-called Serious People In Washington are convinced the economy needs austerity. We must curb spending or the next generation will inherit a mess. On the other hand, these Serious People are not fully on board about us changing the way we spend energy. Austerity means lower taxes for the rich, so they are in favor. Changing energy habits means less money in their pockets and possibly regulatory limits on their styles of living. For anyone saying this is cynical, let me use Lily Tomlin's apt quote from years ago, "No matter how cynical you become, it's never enough to keep up."
Right now, people are paying attention because of breaking the 400 ppm barrier. As a mathematician, I know that most people are impressed by round numbers and ignore the rest of them. We may be decades away from 450 ppm, but we can't wait for that news story. We may not be able to put on the brakes, but we sure as hell should take our foot off the gas pedal, both literally and figuratively.
Think about your use of carbon, which for the most part means how you use energy. Think of ways you can cut back. For the people who argue that we shouldn't have to limit ourselves unless China, India, Brazil and others also set limits, my counterargument is this.
How old are you? Seven?
If you love anyone who is younger than you are, that is reason enough. We may not be able to give them a better world than we have, but we shouldn't consign them to a hellhole.
Tuesday, May 14, 2013
The climate scientists Michael E. Mann, Michael Kozar and Sonya Miller have come out with a prediction for the number of named storms in the 2013 Atlantic Hurricane season, which officially starts on June 1 and ends November 30. Their number is 16 with a 95% confidence interval of +/-4, which means 12 to 20 is the range they are 95% confident will contain the correct number and 16 is what they consider the most likely number. I will come back on December 1 to see how the prediction turned out.
To give an idea of the recent range, here are the numbers of named storms in the years from 2000 to 2012.
Their calculations are based on more data than this, but it should be noted that 16 is the average of these 13 numbers.
Friday, May 10, 2013
There are only a few elections this year, but I mean to cover the Senate and governor's races using the Confidence of Victory method, which takes polling data - percentages and size of sample - and turns it into probabilities of victory for both sides, or all three sides if a race has three truly competitive candidates, a rare occurrence in American politics.
In Massachusetts, there is a special election for the Senate seat vacated by John Kerry. It will be held on June 25.
Most recent poll: 7 May 2013
Polls taken within a week of the most recent: 4
Largest lead: 17% lead for Markey(D)
Smallest lead: 4% lead for Markey(D)
Confidence of Victory of the median poll: 97.4%
Markey has much greater name recognition than Gomez and Massachusetts is a generally blue state. The big lead is from the latest poll, but my system is interested in the median, not the most recent.
I do not think of this number as a prediction, but instead as a snapshot of the current position. I'll report back at least once a week on this race, more often if it looks like it's getting closer and there is enough polling data to track changes at a faster pace.
Wednesday, May 8, 2013
We've discussed the Fibonacci sequence previously on this blog. You can click on this link to read the earlier posts. All we need remember right now are the numbers themselves
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ....
The rule is that the next Fibonacci number can be created by adding up the two previous numbers.
Today is May 8, 2013, which is often written as 5-8-13, which means this is a Fibonacci day, and the next one will be 8-13-21, or August 13, 2021. Since the triple after that, 13-21-34, would require a 13th month, 8-13-21 will be the last Fibonacci day this century.
But consider this. In the United States, we state a date in the form Month-Day-Year. It seems natural enough since most of us have done this all our lives, but note that the order in size is middle-small-big. It might be more natural to state it in the form Day-Month-Year, which would be small-middle-big. If we do that, today's date is the 8th day of the 5th month of the 13th year or 8-5-13. Using that method, this year's Fibonacci day would be the 5th of August, not the 8th of May.
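A few lines of Python can check both conventions. Here "valid" just means month ≤ 12, day ≤ 31 and a two-digit year:

```python
from itertools import islice

def fibs():
    """Yield the Fibonacci numbers 1, 1, 2, 3, 5, ..."""
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

f = list(islice(fibs(), 10))          # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
triples = list(zip(f, f[1:], f[2:]))  # consecutive Fibonacci triples

# Month-Day-Year: first number is a month, second a day, third a 2-digit year
mdy = [t for t in triples if t[0] <= 12 and t[1] <= 31 and t[2] <= 99]
# Day-Month-Year: first number is a day, second a month
dmy = [t for t in triples if t[0] <= 31 and t[1] <= 12 and t[2] <= 99]

print(mdy[-1])  # (8, 13, 21): August 13, 2021
print(dmy[-1])  # (5, 8, 13): 5 August 2013
```

Interestingly, under Day-Month-Year the triple 8-13-21 fails (there is no 13th month), so 5 August 2013 would be the century's last Fibonacci day under that convention.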
Side note: when it comes to time measured on a smaller scale, the whole world agrees to go from the largest unit to the smallest: hours:minutes:seconds:fractions of a second. If we applied the same largest-to-smallest convention to the larger time units, this moment would be written 2013 May 8, 7:35 PM.
Tuesday, May 7, 2013
The polls are closed in South Carolina and Mark Sanford is projected to be the winner by about a 10% margin. The last poll only put him up by 1%, but my system isn't designed to predict the margin of the win, only the direction.
Nate Silver on Twitter was not very confident because of the minimal amount of polling, but he did make Sanford a favorite at about 64% chance to win, which is pretty much what my system thought as well.
There is another special election to replace John Kerry in the Senate on June 25. I'll keep track of the polls there and make periodic reports of the snapshots of the race, as well as a prediction on the morning of the 25th before the polls open.
Monday, May 6, 2013
The system I use, which I call Confidence of Victory, takes the two leading vote getters in a situation like this and calculates the probability that a lead in this poll will translate into a victory in the election. With this lead and this size of a sample, Sanford has a 64% Confidence of Victory and Colbert-Busch's probability is at 36%. My system assumes a 0% chance for the Green candidate so far behind.
I would like more polling data from more companies, but if only one company reports data, it would be hard to top PPP. In their polls of the electoral college and Senate races from the last week of the general election, they went a perfect 33-0 in predicting winners. We will see tomorrow night how my prediction from this one poll fares, and I will report back.
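The posts here don't publish the exact Confidence of Victory formula, so the following is a plausible normal-approximation sketch of turning a poll lead and sample size into a win probability, not necessarily the author's exact method:

```python
from math import erf, sqrt

def confidence_of_victory(p1, p2, n):
    """
    Sketch (an assumption, not the blog's published formula): probability
    that candidate 1's poll lead over candidate 2 reflects a true lead,
    using a normal approximation for a sample of n respondents.
    """
    # standard error of the estimated lead p1 - p2 in a multinomial sample
    se = sqrt((p1 + p2 - (p1 - p2) ** 2) / n)
    z = (p1 - p2) / se
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF of z

# Illustrative only: a 1-point lead, 47% to 46%, with a hypothetical n = 1100
print(round(confidence_of_victory(0.47, 0.46, 1100), 2))
```

With these made-up inputs, the sketch lands in the low-to-mid 60s in percentage terms, in the same neighborhood as the 64% figure quoted in the post; a tied poll gives exactly 50%.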
Sunday, May 5, 2013
Assessing risk is very difficult, and it is a question that confronts us at every turn. In some cases, government has decided to penalize people who take certain risks. You have to wear a seat belt while driving and you have to have proof of insurance; failure to do either is penalized by fines. Driving under the influence can bring a fine or jail time.
Smoking outside of designated areas can cost you money in this day and age as well, as can selling cigarettes or alcohol to minors. Governments around the country have taxed cigarettes much more heavily than other products.
The question is: just how dangerous is it to smoke? This is a difficult question to answer and the answer must be stated in statistical ways, which is to say we do not have proof like we have in mathematics, but instead confidence levels and correlations.
This means it is possible for a smoker to live to be 90 and die from some cause not related to smoking, just as it is possible for someone who quits smoking and exercises regularly to die at the age of 52, as Jim Fixx did; Fixx was the running author who advocated a life of regular strenuous exercise.
Here is a simple metaphor, and I admit it is likely too simple. Think of life as a game. The rules are just about the same for everyone, but we are all rolling our own set of dice. Certain risks are so big that you are opting for a set of dice that really do hate you, if I may borrow a joke from the great nerd cartoonist John Kovalic.
Still, there is wild variation. John Banner, the actor who played Sgt. Schultz on Hogan's Heroes, died just a few years after the series was over, at the age of 63, from an abdominal hemorrhage. It would be easy, and likely fair, to blame his early death on his weight. But consider Leon Askin, another fat actor from Hogan's Heroes. He was big all his life and lived to be 97.
(Two other coincidences in the lives of Banner and Askin. Both were Austrian Jews. One part of that is not such a coincidence, since the regular actors playing German soldiers and/or Nazis on Hogan's Heroes were Jewish.)
This randomness is often used by people who want to downplay the risks of smoking, many of them in the pay of the tobacco industry, others addicts of the product. Among the addicts who likely took no cash from the tobacco industry to complain about the unfair restrictions on their habit are the great statistician Sir Ronald Fisher, the novelist and strong believer in evil government ineptness Ayn Rand and the musician and composer Joe Jackson, not to be confused with the father of The Jackson Five and daughters Janet and LaToya.
Which brings us to climate change. Again, this is a matter of statistical risk, not certain mathematical risk. Many of the arguments against any human cause for a warming climate have a stance similar to the arguments against links between smoking and human health risks. More than analogies, many people who were in the pay of the tobacco lobby are now in the pay of the petroleum industry.
To steal a joke from a friend who is a public defender, these people sell reasonable doubt at a reasonable price.
Regular readers will know I took a few months on this blog to look at climate data and came to the conclusion that the climate is changing and in the great majority of places, the climate is warming, in some places at catastrophic rates. As for human causes, to accept this we have to pile statistics upon statistics. That said, the model for the increase in CO2 is a much better predictor than most statistical models and is showing no signs of slowing down as humans continue to consider their addiction to fossil fuels a God given right.
Again to quote Mr. Kovalic, when it comes to the climate, our dice really do hate us. It's time for all of us to try to do what we can to change the set of dice that will determine the future for the generations that will still be here when we are gone.
Thursday, May 2, 2013
It turns out I'm not fantastic at flipping mental coins. Obama barely beat Romney in Florida. But in the other 83 races, my system picked the winner every time.
Nate Silver of the New York Times also made predictions in all 84 of these races. His system also called Florida a toss-up, which is a credit to both of us, but also a little lucky. In other elections, my system has called a race a toss-up and one side or the other won handily. In the other 83 races, Nate went 81-2, missing two Senate races in Montana and North Dakota, two results that my system got right.
So if we include my guessing call of Florida, I went 83-1 and Nate went 81-2. Our percentages are 98.8% and 97.6% respectively, both of which count as excellent when it comes to prognostication.
Are we geniuses or what?
Well, I'm going to say "or what". The general election polling data was non-stop for several months. Looking back at my records, there were 700 polls dealing with the 84 races in the last five weeks of the campaign. I started keeping daily track of the median electoral college result after Obama's disastrous first debate appearance, and his numbers did suffer. But then came the Biden-Ryan debate, the second Obama-Romney debate and Romney finally repudiating his "47% comment", and the Obama advantage moved back up to where it had been in early October. It was easy to pick winners because the races were not very close and the opinions were not taking huge swings, just small ones.
Here is the best data showing that Silver and I are not geniuses: the primary election season. This graph shows the ups and downs of the four candidates still in the race in February, Mitt Romney (green), Newt Gingrich (gray), Rick Santorum (brown) and Ron Paul (gold). I also tracked NONE OF THE ABOVE in black.
There were a lot of polls during this month, but nowhere near the number there were in the general election. More than that, the Republican electorate was in an amazing state of flux. You can see Santorum climbed from third place to first place and back down to second in the space of four weeks. Meanwhile, NONE OF THE ABOVE was holding steady at about 15% throughout the month.
In the primaries that month, Nate and I weren't scoring in the 98th or 99th percentile. The data was sketchier and our predictions suffered. Prediction from polling data is a lot more accurate than predicting the results of sporting events, to give just one example, but even the average (or median) of a lot of polls can be shaky, especially when NONE OF THE ABOVE is well over 10% that close to the election.
Nate's book The Signal and the Noise is a study of why some predictions do well and others do not. He thinks that in the long run we are going to learn how to do better in general. I'm not convinced. Sometimes, the randomness inherent in a system will overwhelm the cleverest human prediction methods.