Imagine a scientific study based on shades of gray. Presented with words that appeared as near black, near white, and everything in between, participants who had identified as liberal, conservative, or moderate had to decide how light or dark the color samples were. The goal was to see if physiology and politics correlate.
The moderates perceived more shades of gray than either far left liberals or far right conservatives. Having apparently shown that physical and mental states were linked, the experimenters had a publishable study that would generate career-boosting publicity.
Instead, they thought (something like), "If our results are too good to be true, maybe they aren't." So they replicated the study. Sadly, they wound up with different data that indicated no correlation between the grays we see and the politics we practice. The experience, though, led them to broader questions about confirming scientific results.
After their shades of gray experience, University of Virginia professor Brian Nosek and his colleagues established the Reproducibility Project. Its purpose was to reconsider the conclusions of reputable scientific studies. Occupying three and a half years and involving 100 studies, the project recreated selected psychological research that had been published during 2008. Their conclusion? Thirty-nine percent of the studies they replicated produced the original result.
Like me, you might be thinking about how the Reproducibility Project takes us to accuracy questions. Faced with decisions on bacon and coffee, Obamacare and quantitative easing, how can we select the studies with the most accurate recommendations? While replication studies provide some answers, they can be expensive and time-consuming.
One alternative might be a prediction market.
Dr. Nosek’s team established a prediction market by asking a group of psychology researchers to bet money on whether they thought an experiment’s results could be replicated.
Prediction markets are used to forecast election winners, the timing of interest rate hikes, and sports victories; in each case, the price of a bet on a certain outcome can reflect its likelihood. During the primaries leading up to the 2012 presidential election, the University of Iowa prediction market for the Republican presidential nomination paid one dollar for a correct outcome and zero for all other bets. With the price of a contract representing the probability that market participants believed the event would happen, a 22-cent contract displayed a candidate's 22% probability of winning.
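The arithmetic behind this kind of contract can be sketched in a few lines. The function names and numbers below are illustrative, not part of any actual market software; they simply assume the $1-payoff contract structure described above.

```python
# A binary prediction-market contract pays $1 if the event happens and $0
# otherwise, so its trading price (in cents) can be read as the crowd's
# probability estimate for the event.

def implied_probability(price_cents: float) -> float:
    """Convert a contract price in cents to the market's implied probability."""
    return price_cents / 100.0

def expected_profit(price_cents: float, believed_probability: float) -> float:
    """Expected profit, in dollars, from buying one $1-payoff contract,
    given your own probability estimate for the event."""
    cost = price_cents / 100.0
    return believed_probability * 1.0 - cost

# A 22-cent contract implies the market sees a 22% chance of the outcome.
print(implied_probability(22))                  # 0.22

# A trader who believes the true chance is 30% expects to profit by buying
# at 22 cents, which is how new information gets pushed into the price.
print(round(expected_profit(22, 0.30), 2))      # 0.08
```

The last calculation shows why prices track beliefs: whenever a trader's estimate differs from the price, buying or selling is profitable in expectation, and that trading moves the price toward the crowd's aggregate estimate.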
Similarly, as Dr. Nosek explained, "A price of 75 indicates that the market perceived a 75 percent likelihood of replication success…" With prices created through supply and demand, his crowd of researchers correctly predicted the results of 71 percent of the replications in the Reproducibility Project.
Illustrating that a prediction market can be more accurate than a survey, the comparison below shows the two side by side for the Reproducibility Project.
One article calls this prediction market a handy reproducibility “gut-check.”
Our Bottom Line: Some Wisdom
As economists with research that ranges from the cost impact of healthcare legislation to the size of the investment multiplier, we also have replication issues. So, let’s just conclude with a quote from Brian Nosek:
“…science operates as a procedure of uncertainty reduction…The goal is to get less wrong over time.”