Recent research from the Cornell Food and Brand Lab concluded that for some kids an Elmo sticker could make an apple more desirable than a cookie.
Where are we going? To research replication issues.
But first, the Cornell story…
I have always enjoyed the studies from Brian Wansink’s Cornell Lab. With research ranging from tipping behavior to buffet preferences, his topics are interesting and his conclusions are even better. Pretty consistently, he tells me what I expect or want to hear.
With the apple or cookie study, it was a pleasure knowing we could encourage kids to eat healthier just with a sticker. At buffets, food that cost less seemed to taste worse. For plate size, he suggested smaller, and for color, create a contrast between the food and the plate if we want to eat less.
But now, scientists are finding statistical anomalies. For the apple or cookie kids, it appears that the test group was really 3 to 5 years old, whereas the paper said their age range was 8 to 11. We all know that a very young child could want the sticker, while an 8-to-11-year-old would prefer a cookie. With the buffet papers (there were four), the numbers did not add up when people checked, for example, how the mean was calculated.
You can see below that the apple or cookie study was in a prestigious JAMA digest. (Did the retraction make it indigestible? Sorry, could not resist.):
We should note that the NY Times Upshot column emphasized that good nutrition research is tough to do. Day after day, you need the same environment. With school children, you want to observe without influencing behavior. You have to convince parents and schools to participate. Also, to ensure safety, an Institutional Review Board has to approve the study. But the column added that the Wansink errors were unusually egregious. Cornell cited him for mistakes but not misconduct. And Dr. Wansink has been responding.
Our Bottom Line: Replication
A 2015 paper from the Federal Reserve tells us that only 22 out of 67 economics papers from reputable journals had replicable results. Concerned, its authors believe one solution would be to require data submission. Then the economics community could more easily confirm research, and journal authors could learn from their mistakes.
Our takeaway? Perhaps to encourage a bit more skepticism, especially when we like what we are reading.
My sources and more: In yesterday’s NY Times, I discovered that research we had been citing at econlife was not quite dependable. From there, the trail of questions about Brian Wansink’s papers set off a cacophony of alarm bells involving many captivating topics. For a summary of Dr. Wansink’s research, his lab’s website appears to have it all. Here is more on the JAMA retraction.
As for replication, you just might read this FiveThirtyEight article. The next possibility is the Federal Reserve description of economic research replication attempts. And after that, this EconTalk podcast adds insight.