Mike Konczal deserves a huge back-pat for blowing up the internet with this post about a new academic paper identifying several flaws in the main piece of pro-austerity research at the heart of Paul Ryan's argument since 2010.

In 2010, economists Carmen Reinhart and Kenneth Rogoff released a paper, "Growth in a Time of Debt." Their "main result is that…median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower." Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact.

This has been one of the most cited stats in the public debate during the Great Recession. Paul Ryan's Path to Prosperity budget states their study "found conclusive empirical evidence that [debt] exceeding 90 percent of the economy has a significant negative effect on economic growth." The Washington Post editorial board takes it as an economic consensus view, stating that "debt-to-GDP could keep rising — and stick dangerously near the 90 percent mark that economists regard as a threat to sustainable economic growth."

In short, three researchers at UMass Amherst have shown that the original paper has three major flaws. First, it selectively excludes data on high-growth, high-debt countries. Second, it uses a bizarre (and statistically ridiculous) method of weighting the data. Third, and perhaps most awesomely, Reinhart and Rogoff made a formula error in the Excel spreadsheet (!!!) they used to analyze the data. As Mike says, "All I can hope is that future historians note that one of the core empirical points providing the intellectual foundation for the global move to austerity in the early 2010s was based on someone accidentally not updating a row formula in Excel." Since he explains the three major errors well, I won't belabor them here.
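To see how a simple spreadsheet range error can move a headline number, here is a toy sketch with made-up growth rates (not the actual R&R data): an AVERAGE formula whose range stops five rows short silently drops five countries from the calculation.

```python
# Toy illustration with invented numbers: what happens when an averaging
# formula's range stops five rows before the end of the data.

growth_rates = [3.1, 2.4, -0.2, 1.8, 0.9, 2.2, 1.5, 2.8, 3.0, 2.6]

# Correct calculation: average over every row.
full_mean = sum(growth_rates) / len(growth_rates)

# The range-error version: the last five rows are silently excluded,
# as if AVERAGE pointed at L30:L44 when the data ran through L49.
truncated = growth_rates[:-5]
truncated_mean = sum(truncated) / len(truncated)

print(full_mean)       # average over all ten countries
print(truncated_mean)  # average with five countries accidentally dropped
```

Nothing in the spreadsheet flags the discrepancy; the truncated cell still displays a perfectly plausible number, which is exactly why this class of error survives casual inspection.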

Explaining the petty intricacies of academic research for mass consumption is not easy, and that's what Mike really nailed here. However, I want to point out yet another issue with the original research.

The problem began when no one in academia could replicate the R&R paper. Replication is at the heart of every field of scientific inquiry. If I do a test proving that water boils at 212°F, then everyone else should be able to get the same result. In order to make that possible, I have to share my data with the rest of the scientific community – what kind of vessel I used to boil the water, the altitude and atmospheric pressure, the mineral content of the water, and so on. I have to show everyone else exactly how I did it.

What non-academics might underestimate reading Mike's account is just how egregious a red flag it is when A) no one can replicate a major finding despite considerable effort and B) the authors of a controversial paper refuse to share their data or explain their methods. To a non-academic, the data might seem like "property" owned by the authors to which no one else is entitled. In academia, that simply is not how it works. Every reputable journal on the planet has a policy requiring authors to share replication data, and publicly funded (NSF, etc.) research is generally required by its funding agency to make the data available.

So when R&R not only refused to share their data for years but also refused even to tell anyone how they weighted the observations, "red flag" doesn't begin to convey how sketchy that is. Fireworks should have been going off at this point: this is not research to be taken seriously, and it is even possible that the "findings" were simply made up.

The science/academic people out there are probably wondering how in the hell one gets a paper published without even explaining the methodology used in the analysis. Good question! The answer is our friend the "special issue" in which papers are invited by the editor rather than being peer-reviewed. In other words, the R&R paper didn't even go through the standard review process (which is plenty flawed, but at least it's something) before publication. No one at any point in the process checked these results for accuracy or even looked to see if the authors actually did an analysis. Cool.

So that's how a paper based on cherry-picked data, a scheme for equally weighting every country in the analysis (which wouldn't pass muster in an undergraduate statistics course), and a computational error became the primary evidence in favor of a pro-austerity agenda here and around the world. Mike is charitable in calling these issues innocent mistakes on the part of the authors. They might be, but I have a hard time believing that Harvard economists make the kinds of errors that somnolent undergrads make on homework assignments. When authors refuse requests for data, 99.9% of the time it's because they know goddamn well that they did something shady and they don't want you finding out.
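The weighting problem is easy to show with illustrative numbers (in the spirit of the published critique, though the figures here are hypothetical): suppose one country spends 19 years in the high-debt bucket with steady growth, while another spends a single bad year there. Averaging within each country first, then averaging the country means, gives that one bad year the same weight as all 19 steady years combined.

```python
# Hypothetical country-year growth rates in the over-90%-debt bucket.
years_a = [2.4] * 19   # country A: 19 high-debt years of steady growth
years_b = [-7.6]       # country B: a single anomalous high-debt year

# Country-weighted scheme: average within each country, then average
# the country means. B's lone year counts as much as A's 19 years.
country_means = [sum(years_a) / len(years_a), sum(years_b) / len(years_b)]
country_weighted = sum(country_means) / len(country_means)

# Pooled alternative: every country-year observation counts once.
pooled_years = years_a + years_b
pooled = sum(pooled_years) / len(pooled_years)

print(country_weighted)  # the growth estimate flips negative
print(pooled)            # pooled average stays solidly positive
```

Same 20 observations, opposite conclusions about growth under high debt; the result hinges entirely on an unstated weighting choice.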

Are these results fabricated? No. They did an analysis. A really bad one. My guess is that they ran dozens of different models – adding and removing variables, excluding and including different bits of data – until they got one that produced the result they wanted. Then they reverse-engineered a research design to justify their curious approach to the data. Every academic who handles quantitative data has been there at some point. That point is called grad school.