Sheila Braun

The Argument Against Aggregation

We take for granted, in these enlightened times, that the mean or average can stand in for real people. That assumption is at the heart of our acceptance of countless mass media stories about studies. Besides the reproducibility crisis Brian Resnick explains so well in this article, we might also question (with some validity) whether there is any such thing as an “average” person to whom study findings might apply. In fact, early researchers did not take this idea for granted at all. In 1865, for instance, Claude Bernard wrote:

“Another frequent application of mathematics to biology is the use of averages which, in medicine and physiology, leads, so to speak, necessarily to error. . . . If we collect a man’s urine during 24 hours and mix all his urine to analyze the average, we get an analysis of a urine that simply does not exist; for urine, when fasting, is different from urine during digestion. A startling instance of this kind was invented by a physiologist who took urine from a railroad station urinal where people of all nations passed, and who believed he could thus present an analysis of average European urine!”
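Bernard’s complaint maps onto a modern point about mixtures: the mean of a sample pooled from two distinct states can land where no individual observation lies. A minimal sketch (the two “states” and their values are invented for illustration, not real urinalysis data):

```python
import statistics

# Two hypothetical physiological states with distinct measurements.
# These values are invented for illustration only.
fasting = [2.0, 2.1, 1.9, 2.2, 2.0]
digesting = [6.0, 6.2, 5.8, 6.1, 5.9]

# Pool everything, as Bernard's physiologist did with the urinal
pooled = fasting + digesting
mean_pooled = statistics.mean(pooled)
print(mean_pooled)  # ~4.02, a value far from every actual sample
```

The pooled mean describes “a urine that simply does not exist”: every real sample sits near 2 or near 6, and none sits anywhere near 4.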

Bernard was committed to ferreting out as many new facts as possible rather than reducing “to equations the facts which science already possesses.” He provided another hypothetical example, this one about “stone” (kidney stones?), in an attempt to destroy the field of statistics once and for all:

“A great surgeon performs operations for stone by a single method; later he makes a statistical summary of deaths and recoveries, and he concludes from these statistics that the mortality law for this operation is two out of five. Well, I say that this ratio means literally nothing scientifically and gives us no certainty in performing the next operation; for we do not know whether the next case will be among the recoveries or the deaths. What really should be done, instead of gathering facts empirically, is to study them more accurately, each in its special determinism . . . to discover in them the cause of mortal accidents so as to master the cause and avoid the accidents.”

Bernard had no notion of the term statistical power, but he was clearly familiar with the concept. What he had never encountered, of course, was big data.
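Bernard’s objection to “two out of five” can be restated in modern terms: with only five operations, the observed ratio is wildly unstable. A quick simulation (the true mortality rate of 0.4 is assumed purely to match his ratio):

```python
import random
from collections import Counter

random.seed(1)
true_rate = 0.4   # assumed, to match Bernard's "two out of five"
n_ops = 5
n_trials = 10_000

# Repeat the surgeon's five-operation "study" many times and
# count how many deaths each repetition observes.
outcomes = Counter(
    sum(random.random() < true_rate for _ in range(n_ops))
    for _ in range(n_trials)
)
for deaths in sorted(outcomes):
    print(deaths, outcomes[deaths] / n_trials)
```

Only about a third of the repeated “studies” would reproduce the two-out-of-five figure; the rest would report anything from zero deaths to five. Bernard was right that the ratio says little about the next patient, though the modern answer is more data, not abandoning the ratio.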

The history of statistics is fascinating for many reasons, not least of which is the fact that an important early statistician, Francis Galton, was both a racist and a progressive. While wrestling with the gnarly question of whether ordinary voters could be trusted with the vote, he went to a livestock fair and made an enlightening discovery.

The Ox

Much of Francis Galton’s writing is not germane to the field of statistics (he was, according to Wikipedia, a Victorian statistician, polymath, progressive, sociologist, psychologist, anthropologist, eugenicist, explorer, geographer, inventor, meteorologist, protogeneticist, and psychometrician), but he made astonishing contributions to it. One story is relevant here, and it amounts to a defense of the use of averages and the discovery of the standard deviation. Economists call it “The Parable of the Ox.” Galton called it “Vox Populi” and used it to support the notion that voters might just get it right. I have recorded it here; it’s only a few minutes long. If you know anything about statistics, you might feel while listening, as I did, a bit awed by how much of the practice of statistics today rests upon the conclusions this man drew from his data populi. It almost takes one’s breath away.
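Galton’s finding is easy to simulate. In the sketch below, 1,198 pounds is the dressed weight Galton reported for the ox; everything else, including the error model for the fairgoers’ guesses, is an assumption made for illustration:

```python
import random
import statistics

random.seed(42)
true_weight = 1198  # dressed weight of Galton's ox, in pounds

# Simulate a crowd of fairgoers, each guessing independently with
# substantial error (the noise model is assumed, not Galton's data).
guesses = [true_weight + random.gauss(0, 75) for _ in range(800)]

# The aggregate of many noisy, independent guesses lands very close
# to the truth, even though most individual guesses are far off.
crowd_estimate = statistics.median(guesses)
print(round(crowd_estimate))
```

Individual guesses routinely miss by 50 to 150 pounds, yet the crowd’s median lands within a few pounds of the true weight, just as Galton’s median of roughly 800 real guesses came strikingly close to the ox’s actual weight.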