ED VS. LOGICAL FALLACIES, PART 8: FALSE PRECISION

Mt. Everest, when first measured with modern surveying equipment, came out to exactly 29,000 feet tall (the mean of the heights recorded by six different measuring techniques). Assuming no one would believe they had actually measured it if they reported such a round figure, the surveyors called it 29,002 feet. The first figure was faithful to the measurements but suffered from sounding like an estimate. Being able to say that it is "exactly" 29,002 feet makes it sound so much more precise.

People are impressed by numbers. Numbers create the impression that an author has done "research" and possibly even math. Numbers that smack of tremendous precision are a common, and commonly flawed, form of argument. Consider two examples: one crude and easy to spot, the other much more subtle and relevant.

One great example lies in the way countries report their oil reserves. A dirty secret among the Oil Will Last Forever crowd is that most of the world's major producers self-report their reserves, and countries like Iran and Saudi Arabia refuse to allow outside verification of their fantastic claims. A cheap, lame way to cover for the hyperbole is to release incredibly precise figures that create the impression of very detailed measurement. Note Venezuela's figures on this Department of Energy list. Rather than the correct answer of "about 80 billion barrels," they report figures of 79.721 and 80.012 billion barrels in separate reports. Not a single engineer on the planet would claim to be able to measure the number of barrels of oil in the ground that precisely, especially since Venezuela includes wildly unpredictable tar sands among its reserves. "About 80 billion barrels" really sounds like they're making it up; 80.012, which is every bit as fabricated, is intended to preempt skepticism.

Public opinion polls are another terrific example of false precision. The media give statistics that imply (but never explicitly state) that some public sentiment has been measured very precisely. Of course, no news organization is irresponsible enough to omit the margin of error (among other fine print) at the bottom of the poll. But they certainly don't do much to emphasize it. Instead, they state exact figures when any measurement with a margin of error is really a range. Consider the caveat attached to the following recent Fox News/Opinion Dynamics poll (via pollingreport.com):

FOX News/Opinion Dynamics Poll. Oct. 23-24, 2007. N=303 Republican voters nationwide. MoE +/- 6.
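
In case that +/- 6 looks like a number pulled from thin air, it falls almost entirely out of the sample size. Here's a minimal sketch of the standard worst-case calculation for a simple random sample at 95% confidence (my own illustration, in Python; the pollster's actual methodology weights and adjusts in ways not shown here):

    import math

    n = 303   # Republican voters surveyed
    z = 1.96  # z-score for a 95% confidence level
    p = 0.5   # worst-case proportion, which maximizes the margin of error

    # Margin of error = z * sqrt(p * (1 - p) / n)
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"MoE: +/- {moe * 100:.1f} points")  # about +/- 5.6, rounded up to 6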

Plus or minus six percent. That's a range of 12%. Yet the figures are reported without that crucial bit of information included. Therefore you get something like this:

Rudy Giuliani 31
Fred Thompson 17
John McCain 12
Mitt Romney 7
Mike Huckabee 5
Duncan Hunter 3
Tom Tancredo 2
Ron Paul 1
Other 2
Unsure 16
Wouldn't vote 4

[Image: the figures as typically reported, without error bars (Typical.jpg)]

Wow, Rudy looks like he has a massive lead, and Fred Thompson is a clear second. Right? Well, here's the correct interpretation: the range represented by the green bars below, plus or minus 6% around each reported figure:

[Image: the same results shown as ranges of plus or minus 6% around each reported figure (correct.jpg)]

The correct interpretation shows that, while Giuliani is in 1st place no matter how the data are sliced, any one of five different responses (Huckabee, Romney, McCain, Thompson, or Unsure) could be in second. The accuracy of polling data is intimately tied to the number of "don't know/undecided" responses, and once the MoE is taken into account that could be as high as 22% here, more than one in five respondents. So this is a really accurate picture of the GOP primary as long as you don't care about who's in 2nd through 6th place. Or about the fifth or more of the electorate who have yet to make up their minds. Maybe they'll make Opinion Dynamics' job easier by distributing themselves exactly according to the "precise" poll numbers reported here.
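
If you'd rather not eyeball the green bars, the same point falls out of a few lines of arithmetic. A rough sketch (mine, in Python, using the reported figures above and the crude test that a candidate's high end merely has to reach Thompson's low end):

    MOE = 6  # points, from the poll's fine print

    # Reported figures, treated as midpoints of a +/- 6 point range
    poll = {
        "Giuliani": 31, "Thompson": 17, "McCain": 12, "Romney": 7,
        "Huckabee": 5, "Hunter": 3, "Tancredo": 2, "Paul": 1,
        "Other": 2, "Unsure": 16, "Wouldn't vote": 4,
    }

    ranges = {name: (max(0, pct - MOE), pct + MOE) for name, pct in poll.items()}

    # Thompson, the reported runner-up, could be as low as 17 - 6 = 11.
    # Anyone whose high end reaches that figure could plausibly be second.
    thompson_low = ranges["Thompson"][0]
    could_be_second = [name for name, (low, high) in ranges.items()
                       if name != "Giuliani" and high >= thompson_low]
    print(could_be_second)  # ['Thompson', 'McCain', 'Romney', 'Huckabee', 'Unsure']

Crude, but it reaches the same conclusion as the bars: second place is anyone's guess.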

Lying with numbers is so easy that it's almost remarkable when they're used to tell the truth.

(PS: I'm officially 29 today, showing no signs of mellowing with age)