Sometimes crowds can be smarter than individuals, a phenomenon called collective intelligence.
Take jelly beans. When researchers have compared individuals against the crowd, they have used the “guess-how-many-jelly-beans-in-the-jar approach.” In one experiment with 850 jelly beans in the jar, only one person out of 56 came close, yet the crowd’s averaged guess was 871. Similarly, when a crowd guessed the weight of an ox that came to 1,198 pounds after being slaughtered and dressed, the average guess was 1,197.
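The averaging effect is easy to see in a simulation. The sketch below is hypothetical, not the original experiment’s data: it generates 56 noisy guesses around the true count of 850 and compares the crowd’s average against the individuals.

```python
import random

random.seed(7)

TRUE_COUNT = 850  # jelly beans actually in the jar, per the experiment above

# Simulate 56 individual guesses, each off by a large random margin,
# mimicking the wide spread of real guesses.
guesses = [TRUE_COUNT + random.gauss(0, 250) for _ in range(56)]

crowd_estimate = sum(guesses) / len(guesses)
errors = [abs(g - TRUE_COUNT) for g in guesses]

print(f"Crowd average: {crowd_estimate:.0f}")
print(f"Typical individual error: {sum(errors) / len(errors):.0f}")
```

Because the individual errors point in both directions, they largely cancel when averaged, which is why the crowd estimate tends to land far closer to the truth than a typical guesser.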
So where are we going? To collective intelligence in tennis prediction markets.
Beyond jelly beans and oxen, collective wisdom has been harnessed in prediction markets. Whether the question is an election, the timing of interest rate hikes from the Federal Reserve, or a sporting event, the price of a bet on a certain outcome can reflect its likelihood.
During the primaries leading up to the 2012 presidential election, the University of Iowa prediction market for the Republican presidential nomination paid one dollar for a correct outcome and zero for all other bets. Because the price of a contract represents the probability that market participants believe the event will happen, a 22-cent contract signaled a 22% chance of that candidate winning.
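In a winner-take-all market like this one, the price-to-probability conversion is simple arithmetic, sketched below with the 22-cent contract from the example above.

```python
def implied_probability(price_cents: float) -> float:
    """In a winner-take-all market paying $1 (100 cents) for a correct
    outcome, the contract price is a direct estimate of the probability
    the market assigns to that outcome."""
    return price_cents / 100.0

# A 22-cent contract implies a 22% chance of the event happening.
print(implied_probability(22))  # 0.22
```

The logic: if traders believed the true probability were higher than 22%, the contract would be underpriced and buying pressure would push the price up until price and perceived probability matched.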
Before Rick Perry’s debate gaffe, the price of a Perry contract was 31 cents. After the debate, it nose-dived to 17 cents. For Romney, the average price of a contract stayed in the vicinity of 80 cents. With these contracts, two forces are at work. First, supply and demand determine the price. Second, researchers have concluded that voters’ expectations indicate an outcome more accurately than their intentions. Expecting Romney to win whether they intended to vote for him or not, traders placed their wager in his court.
And that takes us to tennis.
Forensic Economics at Wimbledon and The Bottom Line
Like elections, tennis betting markets indicate who “gamblers” expect to win. But what happens when one market’s expectations contradict the collective wisdom elsewhere? That is where a sports law assistant professor got suspicious. Working with a tennis gambler whose algorithm predicted match outcomes, Florida State’s Ryan Rodenberg examined 6,204 matches from 2011 to 2013 and identified 23 whose betting patterns were 16% to 19% out of sync with expectations. Their conclusion? Tennis authorities should investigate whether those 23 matches, including three first-round matches at Wimbledon in 2011 and 2012, were fixed.
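The flagging idea can be sketched in a few lines. This is a minimal illustration, not Rodenberg’s actual method, and the match labels, model probabilities, and prices are all invented: compare a model’s win probability against the probability implied by betting prices, and flag matches where the gap reaches the 16% lower bound of the range reported in the study.

```python
def implied_probability(price_cents: float) -> float:
    """Winner-take-all contract price (cents on the dollar) as a probability."""
    return price_cents / 100.0

def flag_suspicious(matches, threshold=0.16):
    """Return (label, gap) pairs for matches whose market-implied win
    probability diverges from the model's prediction by at least the
    threshold (0.16 here, from the study's 16%-19% range)."""
    flagged = []
    for label, model_p, price_cents in matches:
        gap = abs(model_p - implied_probability(price_cents))
        if gap >= threshold:
            flagged.append((label, round(gap, 2)))
    return flagged

# Hypothetical matches: (label, model win probability, contract price in cents)
matches = [
    ("Match A", 0.70, 68),  # gap 0.02: market agrees with the model
    ("Match B", 0.75, 58),  # gap 0.17: out of sync, flagged
    ("Match C", 0.40, 15),  # gap 0.25: far out of sync, flagged
]

print(flag_suspicious(matches))  # [('Match B', 0.17), ('Match C', 0.25)]
```

A divergence alone proves nothing, of course; it only marks matches where money moved against what an informed model expected, which is why the recommendation was an investigation rather than an accusation.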