I’ve been a runner long enough to have learned to estimate paces and distances with very little help from verifiable data. I even got good enough that teammates would trust me with pace-making during workouts and races. But when I first heard of GPS watches, I immediately saw their potential to revolutionize the way I train and race. Indeed, for the last ten years, I’ve connected to navigational satellites before almost every run. Even when I haven’t particularly cared about the data, satellites have triangulated my location and measured my pace in ways I never dreamt possible for the first two decades of my running career.
When I got my first GPS watch, it felt as big as a brick on my thin wrists, and it tore quite a few of the sleeves that got caught on it, but I loved the data. I have my splits for every mile of every Boston Marathon I’ve ever run.
Here are a few:
At first, when you downloaded your workouts onto your laptop, you could view your data with or without the smoothing algorithm. I used to be intrigued by the wild, spiky inaccuracies in the raw data. “Look, it thinks I was running a 3:15 mile! Maybe that was when we went under that bridge and lost the satellites.” Or: “Wait! I didn’t slow down that much! That 20:00 mile pace is a complete lie.” But when you enabled the smoothing algorithm, it all went away. Your performance was normal—normally distributed across a pretty simple statistical model, based on a relatively narrow range of possible human performance and the likelihood of certain levels of variance.
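The kind of smoothing described above can be sketched in a few lines. This is not the algorithm any particular watch uses; it's a minimal, hypothetical illustration of the idea: clip readings that fall outside a plausible range of human performance, then average each point with its neighbors. The thresholds and window size here are illustrative assumptions.

```python
def smooth_paces(paces, window=3, lo=4.0, hi=15.0):
    """Clip implausible paces (minutes per mile), then apply a centered moving average.

    A hypothetical sketch of GPS pace smoothing, not any vendor's actual algorithm.
    """
    # Clip raw readings to a plausible human range: the phantom 3:15 "mile"
    # under a bridge gets raised to the floor; the 20:00 dropout gets capped.
    clipped = [min(max(p, lo), hi) for p in paces]

    # Average each point with its immediate neighbors to flatten the spikes.
    half = window // 2
    smoothed = []
    for i in range(len(clipped)):
        neighborhood = clipped[max(0, i - half): i + half + 1]
        smoothed.append(sum(neighborhood) / len(neighborhood))
    return smoothed

# Two GPS glitches hiding among real ~6:30-6:48 splits:
raw = [6.5, 3.25, 6.6, 6.7, 20.0, 6.8]
print(smooth_paces(raw))
```

The point of the sketch is what it throws away: the clipping step encodes an assumption about what a human performance can be, and anything outside that band simply vanishes from the record.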
It’s pretty obvious that pace data aren’t the only data smoothed by algorithms. Unemployment rates are smoothed by excluding certain populations—seasonal workers and those who weren’t looking for work or who were in school or training; the disabled, incarcerated, and undocumented; and those whose labor wasn’t defined as employment (parents, caregivers). The inflation rate is smoothed by excluding seasonal variances in the cost of things like heating fuel or tropical vacations. Smoothed data sets narrow the range of human diversity, producing an artificially thin band of normal for measures like BMI and blood pressure. In a data-crazed world, smoothing makes so much comprehensible; it corrects for errors in measurement and eliminates the noise in our signal.
Our mania for quantification—for evidence-based interventions and for 10,000 steps a day, for Racing to the Top and for Every Student Succeeding—has its advantages. I never broke 3:00 in Boston without obsessing about data and following scientifically validated training, nutrition and racing strategies. But human reality isn’t really very smooth, is it? When it comes down to a specific choice between scientifically validated best practices and trusting my intuition, honed over thirty years of competing, I’ll take my intuition any day. Best practices work over months and years; but in the singularity of any particular circumstance, their utility is limited.
Moreover, the wreckage of validated, quantified best practices and standards is strewn all over the rocks of human history. Could we imagine a reality so smoothed that a single set of educational standards could be applied evenly over the texture of diverse cultures, histories, neighborhoods, socio-economic realities and cognitive styles? Would we ever want to?
The roughness of human life is what makes it, well, alive: its diversity, its non-conformity, its local textures, its wildly varied ways of making sense and meaning, the uneven progress of human history, and the nearly infinite variety of its creative expression and sensory experience. Art is rough. Can we realize the efficiencies and scale of our data-driven, algorithmic era without erasing the art of being truly present or crashing against the legacy of wildly varied pasts? The former would render all awareness that of statistical models and probabilities, rather than of the unique specificity of any moment. The latter promotes the bias that there’s something wrong with the cultures, neighborhoods, skin colors, etc. that aren’t captured in the normal, smoothed curve.
I appreciate the smoothing of data that offers me accurate mile splits and a helpful risk profile for my 401(k). I believe that there are things that data can tell us that are obscured by human cognition, prone as we are to certain biases and irrationalities. But I worry that our enthusiasm for quantification and data-driven decision-making is only as helpful as the decisions we make (consciously or unconsciously) in our data gathering and data processing itself. And I’m afraid that our zeal for making models look like they work means smoothing the roughness of human history and diversity. In fact, I think it’s time for an anti-smoothing movement, to nurture our appreciation for the uniqueness of a here-and-now that doesn’t fit neatly into efficiency, optimization, standardization, or best practices.