Lying with Statistics

There are three kinds of lies: lies, damned lies, and statistics.

Mark Twain popularized that line in 1907, attributing it to Benjamin Disraeli. More than a hundred years later, it still seems to hold true.

There’s something disarming about numbers, something seemingly irrefutable. Statistics, people say, are the most objective, analytical form of evidence anyone can give.

Statistical evidence can make for compelling arguments. For that very reason, it is also a type of evidence that is frequently abused to portray a situation in a skewed light.

Not all statistical findings matter. And when statistics are used improperly, statistical lies emerge. Here are just a few key factors to consider when approaching a statistical finding:

Procedure

For any statistical finding to matter, the underlying methodology has to be valid. In practice, methodology is closely tied to the credibility of the research itself: results that have been peer-reviewed and published, for example, are less liable to methodological errors. The reliability of the source is therefore the first check on any research finding.

Statistical Significance

Perhaps more important is the statistical significance of a result.

Statistics are used in research, whether sociological surveys or microbiology experiments, to generalize findings from a sample to a broader population.

Researchers may select a random group of individuals to test a hypothesis. In an opinion poll before an election, for example, a sample of voters is surveyed and the results are extrapolated to the entire electorate.

Due to random chance, however, the sample may poorly reflect the population, producing a misleading statistic. A result is statistically significant only if the pattern in the data is unlikely to have arisen by chance alone. Only then can a conclusion be drawn, and even then, the finding is probabilistic: statistics cannot prove a conclusion, only support it.

Thus, in some cases, particularly in informal research, the results may not accurately reflect the population, and the conclusion may not hold in a more general setting.
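
To make this concrete, here is a small, hypothetical simulation. It is not drawn from any study mentioned here; the 52% support figure, the sample size, and the single poll result are all assumed for illustration. It shows how often a modest poll can point the wrong way purely by chance, and how a simple z-test expresses whether an observed result is unlikely to be chance alone.

```python
# Hypothetical sketch: sampling error in a small poll, and a basic
# significance test. All numbers are assumed for demonstration.
import random
import math

random.seed(0)

TRUE_SUPPORT = 0.52   # assumed true share of voters favoring candidate A
SAMPLE_SIZE = 100     # a small, informal poll
TRIALS = 10_000       # number of simulated polls

# Simulate many polls and count how often the sample points the "wrong" way,
# i.e. suggests the candidate is losing even though true support is above 50%.
wrong_calls = 0
for _ in range(TRIALS):
    votes_for_a = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    if votes_for_a / SAMPLE_SIZE < 0.5:
        wrong_calls += 1

print(f"Simulated polls that mislead by chance alone: {wrong_calls / TRIALS:.1%}")

# For a single observed poll, a two-sided z-test against the null hypothesis
# "support is really 50%" gives a p-value: the probability of a result at
# least this extreme if only random chance were at work.
observed = 0.56                                  # assumed result of one poll
se = math.sqrt(0.5 * 0.5 / SAMPLE_SIZE)          # standard error under the null
z = (observed - 0.5) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p-value = {p_value:.3f}")   # ~0.23: chance cannot be ruled out
```

Run as written, roughly three in ten of the simulated 100-person polls point the wrong way, and the 56% poll result yields a p-value well above the conventional 0.05 threshold, so chance alone cannot be ruled out.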

Effect Size & Presentation

Effect size shapes how researchers and others ultimately present results to an audience. Even a statistically significant result may not matter much in practice, yet a small effect can still be presented in a way that exaggerates its importance.

What matters is placing results in their proper context and evaluating a statistic against external information. A 1% difference could be noteworthy or negligible depending on the context. Similarly, presenting some statistics while excluding others can shape how something is perceived.
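
As a sketch of that distinction, with numbers that are entirely assumed, the snippet below tests a one-percentage-point difference between two large groups. The difference comes out "statistically significant" by a wide margin, yet the same result can be framed as a modest absolute gap or a dramatic relative one.

```python
# Hypothetical sketch: a tiny effect can be highly significant with a large
# sample, and its framing changes the impression it leaves.
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Assumed scenario: an outcome occurs in 2% of one group and 1% of another,
# with 10,000 people in each group.
n = 10_000
z, p = two_proportion_z(0.02, n, 0.01, n)

print(f"p-value = {p:.1e}")                       # far below 0.05: "significant"
print("Absolute difference: 1 percentage point")  # sounds modest
print("Relative difference: twice the rate")      # same data, sounds dramatic
```

The last two lines describe the same one-point difference; which framing a writer highlights changes the impression the statistic leaves.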

These openings for statistical lies arise fundamentally from a limited scope of information. A single figure or dataset is not enough on its own to draw conclusions, whether personal or academic.

In the end, it might seem apt to tell people to distrust statistics. After all, the discipline can be quite sneaky. But that would be a disservice to a tool useful in almost every field imaginable.

The best advice overall, perhaps, is to just avoid taking whatever we see as fact—even when it comes in the form of numbers.
