How have we gotten into this mess? And how can we get out? (Practical social epistemology in a time of epistemic crises)

Daniel Heck
42 min read · Jul 12, 2020


Photo by Marc-Olivier Jodoin on Unsplash

For most readers in the US, I imagine it is easy enough to know what I’m referring to by “this mess.” In one sense the mess is our COVID-19 situation, with 136,000 US deaths and counting today, July 11th, 2020. The disease is spreading exponentially in a number of states where lax policies, adopted contrary to voluminous advice from the scientific community, have encouraged its spread. Deaths in those states are now also trending up again.

Our failed response to the COVID-19 crisis is itself a symptom of a deeper crisis in the US. Knowable truths are routinely ignored and denied, often aggressively and angrily, from all kinds of quarters. From political elites to institutions to large parts of our population, the problem manifests at every social scale. This horrible disease brings to a microscopic point a critique that has grown steadily sharper over the last several years. Our society has become unmoored from reality in genuinely dangerous ways. Our immediate crisis is itself the result of a deeper crisis of knowing, which we can call an ‘epistemic crisis’.

Many other countries at various levels of wealth have succeeded in understanding and acting on humanity’s available knowledge, containing the virus and minimizing its social and economic impact. In contrast, the US has utterly failed. Mind-bogglingly, this has happened in an enormously wealthy society that invests heavily in education and knowledge creation, and has access to far more than enough knowledge for the task at hand. We also spend about twice as much on health care, per person, as any other country. The issue isn’t that we are poor in knowledge or resources. It is that the knowledge hasn’t permeated and guided our leaders, institutions or population in the way that it has in so many other places. How did that happen?

If we are to truly get our heads around the problem, I think we need the help of some clarifying and galvanizing language. So I’d like to coin a term to describe an important aspect of how this happened: epistemic capture. We have gotten into this mess because a lot of people and institutions, including powerful ones, have become epistemically captured.

Let’s define it clearly:

Epistemic capture is what happens when a social system persistently cultivates less true or less warranted beliefs and prevents the cultivation of more true or more warranted beliefs. (Warranted beliefs are just beliefs based on good reasons.)

Alternatively, we can understand epistemic capture as the thing that con artists consciously try to achieve. However, epistemic capture is also a broader idea, because it includes unintentional but systematic processes that yield a similar result. The concept focuses our attention on systematic processes that foster falsehood, intentional or not.

Epistemic capture raises lots of questions about how we know things! That’s intentional. I think that an important part of understanding and addressing our national epistemic crisis involves thinking carefully about epistemology, the study of how we know. I won’t unfold a complete theory of knowledge here, but I will pull together some thoughts and observations that can help us diagnose and start to treat our epistemic capture problem.

First I’ll talk about the distinct and indispensable roles played by trust, rational models, and careful observation in giving us the knowledge that we need to function competently in the world today, both as individuals and in groups. After that we can more sharply see what has gone wrong, and what might help us address the current crisis and build a brighter future.

Trust

We all depend heavily on trust to understand reality. We enter the world as fragile and flexible things, beautiful, blinking, ready to learn. From the start we’re dependent on those around us for our words and worldviews, our habits and ways of life. We rely on those around us for a toolkit that empowers us to deeply understand and interact with the world.

While it starts in childhood, learning through trust is far from being a primitive or childish process. It also lies at the heart of science, technology, and humanity’s greatest practical (and impractical!) achievements. At least in the US, a lot of our science stories tend to focus on the heroic individualism and rationality of scientists who have held fast to some powerful insight in the face of a hostile status quo. These stories are compelling, inspiring, and important. They also make it easy to miss the far more fundamental role played by communities of trust, even in cases of bold invention and discovery. If the most brilliant person on Earth today were forbidden to learn through trust and were constrained to the knowledge they could personally demonstrate and establish, they would remain remarkably ignorant on a huge number of important topics. We all need to stand on each other’s shoulders to see even a fraction of what we currently see. In fact, the cutting edges of knowledge and know-how are especially dependent on trust, not especially independent of it.

Consider a simple and powerful illustration: could you make an electric toaster from scratch? It is a pretty inexpensive appliance. And it has always been beyond the capacity of any solitary human, even the most brilliant, to make one from scratch in a lifetime. If this weren’t true then devices as complex as toasters would have almost certainly emerged far earlier in our long history. Consider just the intellectual knowledge involved in making one: the metallurgy, the tool making, the geology (what kind of rocks and plants and liquids must be coaxed into yielding the materials you need?), the geography (where do you find those raw materials?), the mechanical engineering, the electrical engineering. Consider the skill involved in locating and travelling to the patches of earth where you’ll extract your raw materials. Sorry, you can’t use a bike and certainly not a car. Maybe build yourself a crude raft? Oh, and there will be people to negotiate and wrangle with over access to those things, so I hope you’re great at that as well. I hope you manage to do it more ethically than we usually manage to do it today. After a long string of daring quests to gather the materials, your head and hands will have to loose and transform and bind the shiny-purified-rock-stuff and the ever so carefully processed oil-stuff in an almost magical act of creation. Toasters are wizards’ work.

Now to top it all off, consider that even arriving at the idea of a toaster involves insights that you borrowed from seeing toasters or other machines.

If building even a basic toaster includes a dizzyingly complex integration of skilled labor and physical resources and knowledge and novel ideas, how do the humans do it? We specialize and we trust. Without specialization, we wouldn’t have the required skills and time. Without communities of people who are trustworthy because they can competently and reliably do the work, the work wouldn’t get done. And without trust to bind these specialists together, the different pieces of the puzzle couldn’t come together.

So individual genius and hard work clearly can’t even build a toaster. But a community of people who work smart and hard, who are trustworthy and trusted enough, can transform the impossible and unknowable into the trivial and obvious.

Sometimes we imagine and speak as if the advent of our scientific and technologically-driven age has made trust obsolete. But precisely the opposite is true. Trust has become more foundational and more indispensable than ever before. This is because most of our knowledge today is so very toastery, and even the most utterly brilliant person can’t hope to begin to specialize in everything.

For all of its obvious necessity, there is also something deeply disturbing about trust as a path to knowledge and power. Let’s explore some of those problems.

Imagine we are part of a group of people who only trust other members of our own group, while we deeply fear and distrust outsiders. On its own, this isn’t necessarily going to lead us into any particular falsehood, especially if we have effective and rigorous ways of checking our knowledge. A social bubble alone isn’t necessarily a problem. But now let’s add in one more factor: suppose that those outsiders are actually more trustworthy than us on a lot of things, because they rigorously check their work to make it as accurate as possible, carefully identify those who are trustworthy within a particular domain, and give them appropriate levels of both trust and accountability. They even preserve and double-check and teach and communicate their knowledge well. But we think that’s all b.s. Here our distrust isn’t just a barrier that blocks us from learning from them. It is even worse than that. If those outsiders say something, we are powerfully inclined to suspect some sort of trick, some ulterior motive, some deep flaw in their reasoning, because they belong to the distrusted out-group. Because of this, our misplaced distrust will predictably and reliably point us precisely the wrong way. And what’s worse, while we do this we don’t think we are being unreasonable at all. In fact, in a real and basic sense, we feel almost exactly the way anyone feels when they trust a reliable source: we feel smart and reasonable, because we haven’t let ourselves be tricked.

The problem is simple but important to state clearly: the problem with our group is that we distrust people who are more trustworthy, and trust people who are less trustworthy. It isn’t that we’re dumb trusting sheep, unlike those rational people. As we’ve seen, trusting well is absolutely essential to the human pursuit of knowledge, especially in highly developed fields like the deep mysteries of toastering. The issue is simply that we trust poorly and so we reliably and predictably get twisted exactly backwards.

Imagine how astonishingly tight this trap can become. Imagine some suspect outsider comes along and tells us we’ve gotten something wrong, and they have a bunch of charts and graphs that supposedly prove this. But we can look around at the people we trust and see that they all plainly disagree with the outsider. We all listen to media that constantly warns us that the outsider is one of those delusional, wrong-headed, wicked, ill-intended, stupid, arrogant asses. We don’t need to waste time on their graphs and nonsense, and in fact it would be unwise to do so, although some of our smart people might give it a cursory look and dismiss it for us. That would be more argument than most of us need. (Isn’t that exactly how you treat people when you’re confident they’re trying to con you?) Why would we waste our time figuring out exactly what they’ve gotten wrong when we already know they must be wrong?

Now on top of this, imagine what would happen if you were a member of our group who is foolish enough to take the outsider seriously. At best we might make fun of you. At worst, you might find yourself alienated, attacked, and ultimately shunned. On the other hand, if you join in bashing the outsider you’ll find an easy path to warmth and welcome and security in your identity as a member of our group. Here we can glimpse another enormous practical strength of trust: trust powerfully helps coordinate and consolidate groups, and rewards individuals with all kinds of material and psychological benefits.

In fact it is pretty easy for the human and social dynamics of group trust to matter far more to members of a group, in real and practical terms, than the substance of a topic under discussion.

Let’s take a concrete example.

Imagine that you’re a Flat Earther and you’ve found a fairly loving and supportive community among fellow Flat Earthers. It has become the heart of your social life. You look forward to the annual Flat Earth conference where like-minded people warmly embrace and share their Flat Earth research and art. Flat foods are freely shared by all: delicious pizzas and pitas and cookies galore. When you aren’t meeting, you watch Flat Earth videos and listen to Flat Earth podcasts and participate in Flat Earth forums, all of which constantly reinforce the message that the Earth is flat and that you and your community have a special mission to free humanity from the grips of the round-earth conspiracy. The mission gives your life meaning and purpose, and the community gives your life warmth and camaraderie.

Getting the shape of the planet wrong has almost no practical implications for you as a brute fact. Sure, scientists and engineers in some fields need to get this right or the consequences could be dire. But for you it is technically quite irrelevant. On the other hand, it is really important for your social life that you get the shape of the planet wrong! In this case the strategic, rewarding, principled-feeling and obviously prudent thing to do is pretty easy to know, especially once you’ve begun to trust your friends in the Flat Earth group far more than outsiders. You’d have to be an idiot to not know the right answer, from a personal practical standpoint. If you’re in that context and you ask yourself, “What do other trustworthy people think?” the answer is intuitive, plain, and eminently practical for you: trustworthy and reasonable people like us all agree that the Earth is flat. Acknowledging this is the price of belonging, but it is a small practical price to pay for a great deal of community. What would be the point in turning your life upside down just so you could perversely challenge something that everyone you trust knows to be true? Aside from the social burden, acknowledging that you’re wrong would also be personally painful: it can be agonizing to realize you were wrong, that you’ve been had. Realizing you’ve been wrong about something you’ve made central to your life and identity is the sort of thing that shakes a person to the core.

Precisely because our views on the shape of the Earth have few personal practical implications for most people, group identity formation and trust-building can easily come to the fore. In contrast, it is very unlikely that a group would form around the view that bikes should be ridden upside-down, or that door knobs are a lie, or that electric toasters work fine without electricity. The big power companies just want you to THINK you need it! Just follow the money, unplug, and toast to your heart’s content. It would be far harder to create a “doorknobs are a lie” society than a Flat Earth Society.

Now ask yourself this: are most of the political topics and health topics that people divide over more like the shape of the Earth, or are they more like the notion that you can’t ride a bike very well on your head? I want to be very clear here. The actual shape of the Earth is of real and enormous practical importance, and the same is true of all kinds of public policy. But our understanding of the shape of the Earth, or of the effects of public policies, is heavily mediated by trust. In this, these topics are very different from topics where we can get direct and personal feedback. Here we can see something central to our current predicament: much of modern science and politics sits in a domain where trust is rather plainly our central path to knowledge. This is the case for absolutely everyone because, as with toaster-making, extensive specialization is involved. For example, how can you understand the general impact of a minimum wage increase across a nation? Do the benefits of the wage increase get cancelled out by job losses, or not? It takes years of rigorous research by teams of various specialists, themselves trusting all kinds of previous work from others, to develop a strong answer to a question like that. I think that we can answer questions like that quite well, at least in terms of which view has better evidence and warrants, just like we can make toasters quite well. But unlike the toaster, which anyone can at least test, even the test of this kind of public policy research involves trusting others. It is obviously, precisely, and literally impossible for any individual to personally do all of the work involved in understanding these sorts of things. These are toaster problems all the way down, not the sort of thing anyone can find out alone.

So where does this leave us?

With this:

Trust is an utterly essential path to understanding reality for essentially all of us today. Where people are trustworthy and trusted, we have a quick and reliable-enough path to understanding all kinds of aspects of reality. This allows us to access the hard-won knowledge and know-how and work of others so that we can build on it. And trust doesn’t just help us understand reality. It also helps us powerfully shape it. The groups and ideas that structure so much of our inner and outer lives are built on trust. At the same time, trust is easily manipulated in fairly obvious ways that are easy to learn and reproduce. For example, by directly speaking and acting in ways that manipulate our natural trust of insiders and distrust of outsiders, leaders and groups can obtain an enormous degree of social power and control over their followers. This power is rooted in their ability to define people’s image of reality, and it is supplemented by the coordination benefits, personal psychological pressures, and collective social pressures generated by communities of trust. The social power of trust means that it can move human mountains. Even if the groups mobilized by trust are otherwise detached from reality, the human reality trust creates is a real and remarkable force in its own right.

Rational Modelling

And now for something completely different. Instead of just trusting people, let’s try to get at reality by carefully developing an internally consistent, explicit, rational model. (After that, we can rigorously check to see how well our model fits with reality through careful observation.)

Already we can tell that this feels a lot more complicated and a lot less intuitive than just trusting someone. It is harder to even describe exactly what it is. Trusting someone is as quick and easy as copying their math; rational modelling involves actually understanding and doing the math, and other work that is a lot like it. Because this is tough to do, it is easy to do poorly. Let’s take some time to really dig into how tricky it can be to get this all right.

For example, take a quick look at this ancient proof about the relationship between the area of a triangle and the length of its sides. Go ahead and follow the link. This proof is an example of a particularly simple application of rational modelling, in a way that can fit a lot of practical observations very well. Do your eyes quickly glaze over? Do you want someone to just tell you the point? If so, congratulations: you can stop. You’ve gotten my main point for now: rationality is a lot weirder and harder than trust. And if you happen to be fascinated by the proof and you want to dig into it and do the work of really understanding it, know that you’re wonderfully weird.
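(If your eyes did glaze over, here is the punchline, hedged because I’m summarizing: the kind of ancient result in question is plausibly Heron’s formula, which computes a triangle’s area A from its three side lengths a, b, and c alone. Let s = (a + b + c) / 2. Then A = √(s(s − a)(s − b)(s − c)). For a 3–4–5 triangle, s = 6 and A = √(6 × 3 × 2 × 1) = 6, which matches the familiar half-base-times-height answer of (3 × 4) / 2. A few symbols, rigorously related, quietly fitting every triangle you will ever measure.)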

Unlike trust, which is quick and intuitive for most people, rational modelling is slow, mentally tiring and often unintuitive. Like so many skills, it takes practice and hard work to understand it and use it well. It is easy to see why so many kids in geometry class are eager to copy their neighbor’s work. These cheaters are being rational and efficient in their own way, especially since the brain actually burns a lot of sugar doing this stuff. Why waste the brain sugar if someone else already did it, and they’re better at it anyway? Plus, they’re probably more trustworthy than you even when you try your hardest, since they practice so much and you copy so much. Without this impulse to copy the more trustworthy work of others, no domain of human knowledge would make it very far. (See: toasters.) But it is also true that if we all only copy, no domain of human knowledge progresses either. We would become caught in a loop of endless stagnant imitation.

Even for people who are fantastic at it and who practice it constantly, it is still quite easy to make mistakes. Checking, re-checking, and having competent friends check your work are all important parts of doing it well.

But mistakes in modelling, the kind you lose points for in math class, are just part of the problem! There are other issues, deeper and more hidden. Let’s take a good hard look at a few.

Rational modelling depends on the play of priors

Rational models often help us identify surprising insights or contradictions that spring from explicit logical priors. And although they require some work, good models can also save huge amounts of time and energy by simplifying complex ideas into more easily-managed ones. The fact that rational models are reliable and reproducible can also help us notice, document, and clearly communicate things in the world that are also reliable and reproducible, including some very complex things.

That may sound abstract, so let’s take a very simple example. If you have 2 apples and then you get another 2 apples, guess what? You have 4 apples! You don’t have to say you have “2 and then 2 more apples”. 2 + 2 = 4 is a nice example of one extremely simple and common model that has a huge range of practical applications. In this example, we might say “If you have 2 and then 2 apples” is an explicit, logical prior. The conclusion is “You’ve got 4 apples!” This conclusion may be surprising and exciting to some people, especially toddlers. It is also useful if someone tries to charge you for 5 apples when you bought 2 + 2 apples. As useful and simple as this is, sometimes it actually matters that you got 2 apples (from your friend) and another 2 (from the warlock with the poison ‘apples’), and so you might not want to just lump them together.
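If it helps to see the hidden prior spelled out, here is a minimal sketch in Python (the names and scenario are my own hypothetical illustration, nothing more) of how the innocent act of adding collapses a distinction that might matter:

```python
# A minimal sketch: "2 + 2 = 4" quietly assumes that one category,
# "apples," is the right way to lump things together.
from collections import Counter

basket = Counter()
basket["apple from friend"] += 2
basket["'apple' from warlock"] += 2

# The model's answer: 4 "apples". Correct, as far as it goes.
print(sum(basket.values()))  # 4

# But the sum erases provenance. If provenance matters (say, for poison),
# a better model keeps the categories separate:
for kind, count in basket.items():
    print(kind, count)
```

The arithmetic is flawless either way; the question of whether to sum at all lives outside the arithmetic, in the priors.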

This illustrates that priors can involve implicit assumptions that we’re not even aware we’ve made. Is ‘apples’ really the category we should be focused on, and is that what those things from the warlock even were? We can even find more purely logical examples. For example, the geometry of triangles that I shared above assumes that the triangle is on a flat plane, although it doesn’t make that explicit. But what happens if you draw a triangle on a ball? Suddenly you have a whole new branch of geometry, and many of your old ideas go out the window. Rational modelling requires that some priors be made explicit and clear, but in all kinds of fields the dream of making all of our priors explicit is practically impossible. Sometimes it is truly and utterly impossible, in a way that goes beyond our own practical limits. In fact, even in arithmetic the goal of creating a complete formal system that makes all of its priors explicit has essentially died, thanks to Gödel’s incompleteness theorems. It turns out that the foundations of rational models, if we want to call them that, are kind of like the foundations of houses: we dig down a bit and create a base to build on, and hopefully we can trust that it is sturdy enough. But nobody digs to the core of the Earth to create a foundation for their home. And would the core of the Earth be a true foundation anyway, or would we need to find gravitons before we could truly begin? And would finding gravitons even be enough? What are they made of anyway? More to the point, you don’t actually need to know that to build a house.
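To see how much can hinge on that one flat-plane prior, consider a standard result (Girard’s theorem, stated here from memory): on a flat plane a triangle’s angles always sum to 180 degrees, but on a sphere of radius R they sum to more than 180 degrees, and the excess measures the triangle’s area, so that Area = R² × (A + B + C − π) with the angles in radians. A triangle with one corner at the North Pole and two corners on the equator can have three 90-degree angles, summing to 270 degrees. One unstated assumption breaks, and an entire new geometry opens up.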

Beyond implicit logical priors that can be made explicit, there are all kinds of ‘priors’ that aren’t logical in the same way, but which need to be present before we can make a rational model at all. You may need computers, job opportunities, communities of knowledge, prior training, theoretical imagination, and experience with both the stuff you’re studying and other models of it before you can make a good model for scientific use. It can be helpful to be as explicit as possible about these kinds of practical priors, but their exact relationship to a model is as fuzzy and unclear as it is real and undeniable. All of this is worthy of our attention and reflection because it isn’t hard to build an immaculate palace of consistent thought on a rotten or wobbly foundation. Even a solid foundation may be too small or too shaky or located on too steep of a slope for the building you had in mind. We might have our math right, but it might be the wrong math for the job. Or it might be adequate but far from ideal, thanks to some prior that limits you in ways you haven’t quite understood.

Let’s call the puzzles and problems associated with the relationship between rationality and priors “the play of priors.” Our assumptions sometimes matter a lot, and there is an art to figuring out which assumptions are important and when. Playing well with priors is distinct from the work of checking the internal relationships within a model, and it means that even the most immaculate model sits like an island in an ocean of priors, some explicit, some always implicit. One important way that people face the woolly challenge of picking good priors is trust. What if you could just trust some smart and hard-working people to give you good assumptions and priors so you can focus on doing the interesting or useful math that comes from them? Through the play of priors, rational modelling relies fundamentally on communities of trust to help us identify fruitful priors to build on. In the play of priors the promises and problems of trust rear their heads again, even in the very heart of rational modelling.

Another challenge: Reification

It is also painfully easy to forget that our models are never reality themselves. The human mind is very good at mistaking a map, which is a nice example of a model, for the territory. Even the models with the firmest foundations and a strong mapping to reality never fully capture the variation and beauty and complexity that permeate our world. There is a technical term for this problem of mistaking maps for reality: it is called reification. Reification is distinct from the play of priors in important ways, and has its own connections with trust. If we absolutely trust our map-maker, for example, we might insist on driving into a river where a bridge once was. “Look,” we might say, “the bridge is right here on the map!” Will we trust the map or our own lying eyes? This example sounds silly, but once we start talking about mapping more difficult-to-perceive phenomena like the course of a disease in a population or the behavior of subatomic particles or the effectiveness of a public policy, it becomes easier and easier to mistake the map for the territory. It also becomes more and more important to rigorously check the work and limitations of our mapmakers.

Part of the challenge is that the more reliable a map is, the easier it becomes to mistake it for the territory itself. Imagine that you come to a river and can’t see a bridge, but you trust a map that tells you there is one there. As you slowly make your way over the creaking, invisible bridge, gazing from your window in wonder, you learn a valuable lesson about trusting the map over your own intuitions. The map is never the territory, but the better the map the more worthy it is of our trust. And the easier it becomes to mistake it for the territory. As fantastical as it sounds, this is very much the situation in which we find ourselves whenever science produces powerfully counter-intuitive answers that are nonetheless more reliable than our natural perceptions and intuitions. And to be sure, a great deal of the science that we now have is extremely powerful and reliable, and also deeply counter-intuitive! Those who embrace it are often rational and warranted in their embrace. They are also in the unnerving and sometimes treacherous situation of trusting a map more than their own immediate experience of the world. Among other issues, reification makes it easy to misunderstand how very easy and natural and intuitive it feels for many people to trust their own lying eyes, even when their eyes are actually lying to them.

And here in the contrast between our maps and our lying eyes we can begin to really appreciate that even our perception processes are the product of modelling that we don’t yet fully understand. Our perceptions present the world to us in a way that is simplified and made coherent in useful ways, but ways that aren’t entirely reflective of reality. Color perception offers us a beautiful illustration of the issue. When we want to point to a basic and reliable fact about the world, “My dress is blue and black” seems like a good place to start.

And yet plenty of us have seen things like this dress, which demonstrates that color and color perception are nowhere near as immediate in their ability to map the world as they feel to us:

Does this dress look blue and black or white and gold to you? Different people see it differently. Sometimes you can get your own perception of the colors to flip, but not by just thinking about it.

One likely cause for different perceptions of the dress is that different people’s minds are making different implicit prior ‘assumptions’ about the lighting in the picture. But notice that these ‘assumptions’ aren’t something you’re aware of at all, and you can’t easily scrutinize them, articulate them, or change them. These ‘assumptions’ are priors of perception itself, and we can’t figure out what they are without doing extensive psychological research. This shows us a way in which the problem of reification goes beyond explicit rational models, and in fact is part of our normal perception of the world. Even the map of the world that our eyes and minds give us in the most immediate and non-reflective ways isn’t the territory of the world. Don’t despair though. If you trust neuroscientists and their ability to weave insight from careful modelling and careful observation and communities of trust, you can understand a great deal about what is going on with these weird corners of human perception. As a human community we have the power to understand quite a bit about the ways our eyes lie to us all, if we work at it together. All we have to do is trust that map more than our own lying eyes. This in turn can tempt us to mistake the new map for the territory … and we’re back again to the need to routinely remind ourselves about reification.

Oh, and one more challenge: Experience can never be entirely reduced to models

Let’s consider one last challenge involved in rational modelling: there’s an irreducible gap between any rational model of reality and the normal human experience of immersion in reality. Imagine that scientists create a perfect and accurate model of how humans perceive color, and there is a brilliant scientist who knows all of this science inside out. Now imagine that she is also completely colorblind. She can know all of the science, in this sense, and there’s still something really important that she does not know: what is it like to see color? (Here I’ve borrowed from and modified a famous argument, Frank Jackson’s “Mary’s room” thought experiment.)

Let’s call this the phenomenological gap.

This is distinct from the problems of reification and the play of priors because it focuses precisely on the elusive nature of experience itself. How does this relate back to trust? In part because it is in this normal experience of reality, in the shouting and blushing and wonder, in the blood rushing to our faces and the delight of a surprising discovery and the splashing of our cars into rivers as the water rushes in around us, that we find our limited conscious influence over our patterns of trust and distrust.

With this reminder that experience fundamentally can’t be reduced to a rational model, let’s turn to a third necessary element of building knowledge.

Careful Observation

In developing good understandings of the world, observing well is as necessary as trusting and reasoning well. The processes of making careful observations and the processes of building useful, explicit, rational models overlap and interweave in beautiful and complex ways. The delicate dance can’t be fully captured in a simple method, like looking through a telescope and taking notes in a purely repetitive way. Still, training people in various methods can be helpful, and is indispensable in most fields of study. This art of observation can’t be boiled down to a few basic rules: the nature of celebrated careful observation varies dramatically from physics to chemistry to biology to anthropology to political science and on into the arts and counseling and countless other fields.

In understanding excellent observation, we’re simultaneously reminded of the enormous value of a good map and the difference between the map and the territory. There are certain broadly shared themes across fields when it comes to describing good observation, but perceiving well also involves a necessary element of skill and craft, generally learned in a skilled community of trust. The fact that observation can be done well also means that it can be done badly.

One basic issue with observation involves the art of seeing enough to understand what we hope to understand. Often, a bit of context can dramatically change our understanding of a scene. We see a man screaming over his severed legs. Are we looking at a horrible human tragedy or the set of a B movie (which is arguably a horrible human tragedy of another sort)?

Some of the challenges involve regular mistakes that our minds make. One famous example is focusing illusions. Focused attention is important when studying almost anything in detail, but it comes with a tendency to exaggerate the importance of the thing we focus on. Even being aware that these mistakes happen isn’t enough on its own to prevent them. However, careful repeated attention and cooperation among trustworthy and trusted people can help groups address these issues.

There are also strict limits on how much we can truly perceive and pay attention to. For a fun example of how truly blind it is possible to be, feel free to enjoy this classic attention test.

So imagine that we have an excellent, expensive thermometer and we can see that the temperature outside has risen precisely 3.60007 degrees. A little girl is crying because her ice cream is melting. Has the rise in temperature caused her ice cream to melt? It can seem intuitive to say yes, especially in the moment that we’re focused on our fantastic thermometer. But maybe the prior temperature would have been plenty to elicit the dripping ice cream and the dropping tears. The difference between the temperature in the freezer and the air outside might be the thing we needed to notice, even though our thermometer is in the wrong place to measure it. What if we’ve very carefully observed an irrelevance and our focus has led us away from a simple truth? Appropriate focus and appropriate context are essential parts of good observation, in addition to having the right tools (physical or mental) to observe as precisely as needed. But close and careful observation makes us more susceptible to focusing illusions in the same way that lying on our back to look under a car makes us vulnerable to someone stomping on our stomach. The risk is an unavoidable part of the investigation. In general, more precise observation is better than less precise observation. But it comes with costs that are easy for an individual to miss. An imprecise observation that really matters is much more useful than a precise one that doesn’t.

This just begins to scratch the surface of the difficulties that arise in perceiving well, the surprising depth of the art of noticing.

Now I’m going to shift focus shortly, because we’ve gotten hyper-focused on the daunting obstacles to the enterprise of building human knowledge through the complex integration of trust, rationality and observation. Precisely because of that it may well feel like a hopeless endeavor right now. And yet what we can learn is utterly breathtaking in its beauty and scope and power. Communities of specialists all around the world are growing and deepening in skill by practicing the art of science and training others to do it, too. As with high level dancers, athletes, musicians and other performers, skilled researchers routinely make the impossible look effortless thanks to long and rigorous practice. Even when careful research is done relatively poorly or when our best efforts fail, it often gives us the best picture we have of the world. Sometimes the best research is helpful just because it illuminates the precise scope of our ignorance.

In spite of the difficulties and inevitable failures, it is undeniable that across a vast range of social and cultural contexts, this odd little toolkit has helped humanity arrive at profound and shockingly powerful insights into important aspects of reality. The successes are now routinely revolutionary. We can now reliably predict and explain things about the farthest reaches of the cosmos and the tiniest particles we can find, and a great deal in between. With it, we make computers of mind-boggling power, cities more massive than any the world has seen, and bombs that could destroy us all many times over. We can even understand, predict and prevent the exponential growth of a pandemic. At least we can help those who are willing to be helped.

For all of the pitfalls of rational modelling and careful observation and warranted trust, for all of the difficulty involved in bringing them together in this strange dance, dance they do. With training and hard practice, they dance marvelously, powerfully, beautifully, and often in a way that looks magically effortless from outside.

In light of its dizzying power it is easy to miss something else that is really important about this dance: there is a deep moral core at its heart. Before getting to our diagnosis of what has gone wrong here in the US, I want to draw out the moral core of this system of rationality, observation, and warranted trust more sharply. But what should we call this dance? I’ll call it science. The precise boundaries of science are contested, but here I think I have brought together something that is worthy enough of the name. If the concept that I’ve fleshed out here feels a bit more broad and expansive than “science” usually feels in English, you might want to blame the Germans who have influenced me with their similarly broad notion of Wissenschaft.

The moral heart of science

Contrary to the common image of science as a cold and heartless and amoral endeavor, there is a beautiful ethical core to it. The integration of rationality and careful observation and warranted trust grows out of commitments that are deeply moral. When pursued consistently, these roots can grow in ways that help us become more consistent, more rigorously fair, more attentive, more conscientious, more humble, and more open to correction.

Think of each aspect of this dance and how closely it is connected to living a morally decent life. A failure of moral rationality can lead us into hypocrisy or moral arbitrariness if we don’t apply our standards consistently. What is hypocrisy, after all, but a failure to apply our standards fairly? The consistent and fair application of standards is central to consistent and rational modelling in the sciences at least as much as it is required for moral fairness. Cherry-picking our observations in a biased or partial way also represents a kind of favoritism and arbitrariness, and a disregard for real situations and real applications can render our ethics hollow. So the integration of rationality and observation, done well and consistently, helps people become more trustworthy in multiple ways. It helps us become more reliable because we can reliably get things done, and if it is practiced consistently it also makes us more trustworthy in the sense that we have more integrity.

Drawing on this but reaching beyond it, a common theme among the sciences is a desire for consilience: a broader coherence and fit among a variety of ideas. In this scientific quest for unity and coherence we find something that resonates with a broad moral vision, seeking unity in truth among people and peoples. The moral heart of science involves seeking a deeper unity. There, we find a place where honesty, integrity, courage, hope, loyalty and even love meet, especially the love of those who are different. So it shouldn’t surprise us that scientists, for all the partisan politicization that has been unleashed on them in the US, remain one of the few widely trusted groups in the country today. Done well, this dance of science helps connect us to reality, including moral realities.

Now of course the deeply moral practices of science can be twisted toward evil ends. In this, science is hardly unique among moral activities. The twisting of good things into cruel shapes is a common feature of all kinds of evil. A gift can always be poisoned. The fairness, humility, openness to honest critique, and abiding commitment to deeper unity that fuel science can be set in service to the creation of bombs and poison gases and implements of torture. But contrary to what is sometimes said and thought, this is not simply the immoral application of amoral scientific activity. It is a betrayal of the moral heart of science that violently carves it up by narrowly compartmentalizing, instrumentalizing, and manipulating the moral drives that make it tick. When science is reduced to toolmaking, regardless of who the tools will hurt, the abuse of science involved is an extension of the abuse that is always involved in routinely reducing humanity to tools and objects.

While the connection between science and a moral life can be broken, the connection is a deep and natural one. Science grows from a moral seed which, if allowed to flourish and grow, stretches far beyond the narrow technical needs of any particular field or lab.

Some people are able to carefully slice and carve out their hypocrisy. They can avoid hypocrisy when it is strictly useful to consistently evaluate arguments in the sciences, while embracing hypocrisy where it is politically useful to falsely accuse others of the very things they do. Still, we shouldn’t be terribly surprised if we find that movements and people that reject moral fairness often tend to reject science as well, being unpracticed in the kind of integrity and rigor that lie at the heart of both moral decency and science. It takes a special kind of ruthless brilliance to cultivate hypocrisy in some areas of your life or your movement while stamping it out in others. Relatively few people have the level of intelligent sociopathy required for that. It is even harder to build a broad social movement that holds wisdom and madness together in an optimally strategic way.

In the short run, people who manage to be both wrong and cruel are flabbergasting, disconcerting. In the long run, though, I find a deep and abiding hope in this. Wickedness and wrongness don’t always go hand in hand, but they seem to like each other quite a lot. Becoming disconnected from truth puts immoral and anti-scientific movements at a substantial disadvantage, even if deceptions and hypocrisies can yield short run benefits.

So how does epistemic capture happen, and how can we stop it?

Out of the various smaller points we’ve explored so far, an important and much larger point emerges. Learning what we need to know about the world today requires intentional, focused and practiced work by teams of people that are both trustworthy and trusted.

This means that getting stuff right is hard, but possible if we work at it.

Because this dance is hard, it is easy to screw it up by accident, easy to disrupt it, and almost impossible to do it well if we aren’t clearly focused on doing it well together. The good news is that if we do clearly focus on it, prevent others from screwing us up, fix mistakes quickly and gladly when we make them, and do the hard work, then we can pull off something remarkable.

With this in mind, we can now appreciate more about the risks of epistemic capture, and some of the ways it can work.

One easy way to foster epistemic capture is to prevent people from trusting the fruit of science, possibly by passing off a shoddy but shiny substitute for the real thing. Epistemic capture can also happen in more subtle ways by getting people to trust, use rationality, and/or observe in ways that are consistently misleading.

Some of the most successful epistemic capture that we deal with today works by bluntly and constantly attacking people’s ability to trust those who are more trustworthy, in order to secure their trust in people who are less trustworthy. Let’s look at one especially prominent, powerful, and simple example.

Rush Limbaugh’s epistemic capture ‘success story’

Rush Limbaugh is especially worthy of attention at the moment because his epistemic capture strategies are widely reflected among COVID-19 deniers, and Limbaugh is himself working hard to deny and minimize the crisis. He’s especially emblematic of our historical moment because he is something of a godfather to a family of epistemic capture projects that sit at the heart of political power in the US today. This February he was awarded our highest civilian honor, the Presidential Medal of Freedom.

Here is what he had to say about COVID-19 in June, as cases were rapidly starting to rise in some states that had opened up in ways that heavily encouraged the spread of the disease:

“There is no pretense of doing news, there’s no pretense. The only thing that’s happening is destroy Donald Trump, destroy Trump voters, destroy the Republican Party, reelect Democrats. That’s all that’s going on. You cannot believe this stuff. You can’t believe the virus numbers.”

Notice the repetitively repetitious use of repetition in his blunt denial of the trustworthiness of reliable news sources that are accurately reporting on the science. “No pretense of doing news … no pretense.” Importantly, he doesn’t offer an analysis or argument; instead, he activates his followers’ feeling of threat. For Limbaugh and those who trust him, the only reason for the media to report accurately on COVID-19 is that they want to “destroy … destroy … destroy.” Not only do they want to destroy individuals and the Republican Party, they also want to destroy the leader with whom Limbaugh and his audience identify. Limbaugh then goes on to repeat another one of his core messages, which is epistemic capture in its most blunt and pure form: “You cannot believe … you can’t believe the virus numbers”. This is a direct attack on his audience’s ability to trust the more trustworthy. Of course, Limbaugh in his broadcasts also spends a great deal of time telling his audience who they can believe: him. In this way, through day after day after day of repetition, Limbaugh has built the cage in which he keeps his lucrative audience captured. What is the trap made out of? Largely, it is this simple core message, repeated constantly: “Trust me, not them. They want to destroy us.”

The indoctrination is easy to spot. So why do people stick around for it?

Largely, it is for this: “They want to destroy us.” There are few more captivating and attention-grabbing messages. When people truly feel threatened by an outgroup, they can be mobilized to give or take almost anything to protect their ingroup. It feels noble to stand up, be brave, and sacrifice to protect us from them. So if you want to mobilize people to do almost anything, to die or to commit atrocities, it is extremely helpful if you can convince your followers that they want to destroy us, but we will boldly stand up against them. It was precisely this sentiment that Japanese officers exploited when they staged the Mukden Incident, a fake attack on Japan’s own rail lines that served as a pretext for the 1931 invasion of Manchuria and helped set the stage for World War II in Asia. False or trumped up claims that “They will destroy us” have mobilized nations for aggression on many other occasions as well. The broad sphere of epistemic capture projects that Limbaugh is associated with has incited more violence over the last several years. But the greatest harm Limbaugh has done has probably rested in his power to incite ignorance around COVID-19 and other important issues. The same basic strategies are echoed and routinely applied by President Trump, whose own epistemic capture strategies owe a large debt to a broader radical right wing media system that mirrors and is often inspired by Limbaugh’s techniques. It is almost too perfectly fitting that Trump awarded Limbaugh the Presidential Medal of Freedom.

Unfortunately, it seems that simply reducing the real threat posed to Limbaugh’s “us” can’t fix this. Even in the presence of a real alternative threat like COVID-19, Limbaugh insists that efforts to protect his audience from COVID-19 are an evil conspiracy against them. It isn’t hard to understand how important this is to his strategy. Everything he has built is premised on “they want to destroy us”. So efforts to help are themselves a sort of existential threat. To be clear, protecting his audience from COVID-19 isn’t any kind of threat to the actual people, the flesh and blood human beings who are caught in Limbaugh’s system. But love, kindness, and a shared sense of purpose that cross the deep trench that Limbaugh’s rhetoric digs between his “us” and everyone else fatally undermine Limbaugh’s narrative. So authentic help for his captives poses an existential threat to the system of epistemic capture that gives Limbaugh his income, his sense of purpose, and his accolades. Limbaugh isn’t wrong to see the truth about COVID-19 as an existential threat to his epistemic capture strategy, because it means that the scientists and journalists and universities that he routinely makes the enemy in his narrative are not, in fact, trying to destroy Limbaugh’s followers after all.

Addressing an institutional epistemic capture strategy of this sort requires that we clearly understand how twisted and perverse it is, and how its power derives precisely from this twistedness. Limbaugh’s program is not discursive speech oriented toward truth, but propaganda speech oriented toward capturing an audience. Seeing this in terms of epistemic capture clarifies that the issue is not just that Limbaugh is constantly relaying false information, which is easily shown, but that he is constantly doing something that isn’t really about relaying information at all. Epistemic capture is more than lying, although it can give birth to endless lies. It is a system for persistently displacing truth and replacing it with falsehoods. In this case, the falsehoods are transparently and extremely convenient to the leader. That is the real ‘truth’ and power of Limbaugh’s core message, his eternal refrain of, “Don’t trust anyone else. They just want to destroy us.”

To address the roots of the US’s epistemic crisis, people who value truth and honest inquiry need to respond to capture strategies like this in ways that are also institutionally scaled to match the scale of the problem, and we need to sharply distinguish between epistemic capture and truth-seeking speech. This isn’t just an individual problem to be solved by asking individuals to have better media habits. It is a social problem that has to be solved by cultivating truth-seeking and truth-sharing media and leadership. As a civil society in the US, we need to displace epistemic capture projects like Limbaugh’s to make room for something better.

And now that I’ve spoken clearly and directly about what needs to be done, it should be obvious that serious solutions will face furious resistance from those who cultivate epistemic capture for their benefit. In light of this, let’s lay out some of the nature of that resistance and how to avoid being taken in or discouraged by it.

Defeating Epistemic Capture

In order to fight and defeat an opponent, it is extremely helpful to see them clearly. Understanding epistemic capture helps us avoid counter-attacks that involve false equivalences between warranted and unwarranted claims, and between truth and falsehood.

Whenever anyone moves against these systems of organized epistemic capture in the US, they are met with complaints that their intention is to undermine free speech or that they are biased. As with so much else in these discussions, these false and hypocritical accusations get things precisely backwards.

Here I want to be perfectly clear that I am not advocating legal punishment for epistemic capture practices, and so these accusations are simply misguided in my case. Between atomized individual responses on one end and legal punishment on the other, a vast array of institutional options and responses are possible. These include cultivating better norms, building better media institutions, removing financial and political support for epistemic capture projects, loving those who are trapped but refusing to enable them in capturing others, and more.

As the example of Rush Limbaugh illustrates, defamatory speech of the sort that he uses is more than an offense against the individuals and groups who are constantly slandered. It is part of a broader strategy or system that builds wealth and power by disconnecting an audience from reality, binding them to the deceptive and often deceived leader instead. The United States has a strong tradition of free speech radicalism to a degree that contrasts with our peer democracies like Germany, France and the UK, which often treat defamatory speech and hate speech more seriously. For practical purposes this means that our non-legal institutions need to play a larger role in addressing the serious threats and challenges posed by these systems of capture. So I want to be clear that when I talk about seriously and frankly addressing these problems, I am talking about responses that are effective, large scale, public, strategic, and institutional, but these responses are not about criminalizing or legally prosecuting epistemic capture. In my view this is not a matter of free speech rights, because I believe and hope that our epistemic crisis can be addressed within the radical free speech liberalism of the current United States. Still, this wide freedom means that we need to tend and moderate our civil society and our public sphere all the more carefully, and we need to become the sort of people who can live into the responsibilities that inevitably come with this sort of freedom.

Beyond the simple confusion that is sown when people misrepresent efforts to moderate our forums as legal free speech issues, there’s also a more subtle confusion among types of speech that this cultivates. Essentially, these counter-attacks shift our focus away from the essential distinction between truth-seeking and truth-obscuring speech. In the face of that, it is essential to remember that some speech is true or warranted, while other speech is false or unwarranted. Some speech is part of discourse oriented toward the truth. Other speech is part of a conscious con job, or some other type of epistemic capture. Some speech is trustworthy, other speech is not. Some speech is like apples, and other speech is like poisonous apples that might not even be apples at all.

By framing any attempts at improving speech as an assault on “free speech”, those who engage in epistemic capture strategies distract us from these important areas of non-equivalence. As COVID-19 denialism has so powerfully illustrated, epistemic capture is objectively dangerous to us all. That doesn’t mean that it should be made illegal, but it does mean that it should be clearly and effectively and constantly opposed for the common good.

There are a range of other defensive strategies that are deployed by epistemic capturers to help them maintain a false equivalence between honest inquiry and deception. One is to accuse people of bias for endorsing warranted and true claims instead of their unwarranted and false ones. For example, someone might complain that their COVID-19 denialist views aren’t getting as much representation among epidemiologists and that this must indicate bias. COVID-19 deniers, global warming deniers, and anti-vaxxers all make these kinds of claims. But is their exclusion the result of bias, or is it because they aren’t even trying to do what actual scientists do? In these cases, it is clear that they’re just not doing the work that would be needed for the deniers to be trustworthy. What the Limbaughs of the world call “bias” is usually just the fair application of a consistent standard, and they just don’t measure up.

Let’s engage in a little thought exercise to illustrate the absurdity here.

Imagine an angry, blustering man who can’t dance. He walks into a dance performance, enraged that those know-nothing “dancers” are allowed on the stage instead of him. “What outrageous bias!” he bellows. “What arrogance on the part of the fake dancers!” But no, they are real dancers who have been training for years and rehearsing for months. Here it is easy to see that his accusations are baseless and really quite deranged. The performance is not a matter of bias or arrogance on the part of the dance troupe. Nobody judges the angry man for not being a dancer; there are plenty of happy, attentive and grateful people in the audience who aren’t dancers either. Contrary to the charges of bias and arrogance, there is nothing wrong with having dancers perform on a stage after a great deal of hard work and preparation. So what has gone wrong here? The angry, bellowing man has witnessed the fair application of a consistent standard to skilled work, and unjustifiably concluded that it is bias.

I firmly believe that everyone should be loved, even when that is hard. But not everyone can be highlighted on every platform, and not everyone should have a platform. The real problem here is that the man blustering about “dancer bias” has no appreciation for the hard work and skill of others, or for the importance of boundaries in any decent society. He unfairly demands a degree of consideration that the dancers are not given: he wants to be considered skilled at something without having the relevant skills, and none of the actual dancers are granted that kind of preferential treatment. He is also unduly disdainful of the efforts of others, assuming that he can do what they do despite his utter ignorance of what is involved. In other words, the man making the accusations of bias and arrogance is the one who is unspeakably arrogant and extremely biased. His accusations are both false and hypocritical.

Here we can see how bias and arrogance can disguise themselves as humility and fairness, if we allow a false equivalence between those who are in fact doing the dance and those who are not.

One final false equivalence that is frequently drawn here involves politicization. Because many of those who engage in scaled epistemic capture are highly politicized, it becomes easy for them to accuse their critics of being just as “partisan” and political as they are. As with the claims of bias, this misses the heart of the matter. Some things are true and warranted. Others are not. Some people are doing the work. Others are not. In one sense, we’re all political all the time, because we are part of a common life that involves politics. The problem with bad politicization is precisely that politics is allowed to trump the truth. To stand up for true and warranted claims in the face of politicized opponents is therefore precisely the opposite of the kind of politicization that Limbaugh represents.

When truth has been politicized, that isn’t the truth’s fault.

The false presumption that the dance of science is easy (or impossible) is what enables our arrogant, biased and politicized man to accuse others of politicization and bias and arrogance. If the dance is easy, then he can do it just as well as anyone else, so his exclusion must be bias. If it is impossible, then nobody is really doing it, so his failure is no worse than anyone else’s and the performers must be fakes. But because it is possible yet skilled work that connects us with truth as a community, we can see the absurdity of the situation plainly.

Once these basic confusions have been cleared up, it becomes easy to articulate the obvious. There is no arrogance or bias or ‘politicization’ in trusting and promoting the trustworthy while distrusting and discouraging the untrustworthy. These counter-attacks and accusations fit the accusers, not the accused. Now we can plainly see how these false and hypocritical accusations are an integral part of defending epistemic capture systems from truth and honest inquiry, which to them are death.

The mistaken presumption that scientific learning is easy can also make it harder for people to frankly address their mistakes, because we feel inadequate when we fail at something that is supposed to be easy. Not being able to feed yourself is embarrassing for just that reason. Making a mistake in a difficult dance, on the other hand, is natural and normal. We should all expect to make mistakes, to be corrected, and to welcome correction. This is one of the most basic habits that good scientists learn, and I think we’d all benefit from learning and practicing it much more widely. Like a challenging dance pose, the love of correction is unnatural at first, but with practice it can become second nature. It depends on removing shame from error and replacing it with an expectation of gracious encouragement. This, in turn, can foster an eagerness to learn and be corrected quickly. Instead of feeling and reproducing shame, those who develop this skill respond to correction like this: “Oh, I made a mistake? How wonderful and kind of you to help me fix it. Thank you.”

All of this becomes easy and natural to see when our epistemology and our epistemological practices are healthy. And it is painfully easy to miss when the basics of healthy epistemology recede from view, replaced with an inadequate substitute.

One of the great sources of hope for me in all of this madness can be found in the disciplined, skilled, and focused work of dancers. Their feats are somehow simultaneously superhuman and mundane, astonishing and seemingly effortless. This is the power of habit and training.

When we firmly fix these dancers in our minds, we can see that the task before us is as simple to understand as it is hard to do. If we are to heal our nation, part of what we must do is train. We must train our people, our institutions, and our leaders in the dance of science, rooted in a few basics of healthy epistemology: patiently, routinely, tirelessly, systematically, interpersonally, and at scale. And we must refuse to treat shoddy, flashy substitutes as if they were the genuine article.

If we can do that, I can imagine a much brighter future for my nation than the dismal present, to which we have been delivered by an epistemic crisis rooted in epistemic capture.

In that future, there will still be counterfeits here and there. But when someone presents us with a counterfeit we won’t need to call the cops or shame them. Maybe they too have been fooled. Maybe they are intentionally perpetrating a fraud. In either case, what we need to do is simply refuse the fake and let them know that we know the difference. Where they have been fooled, we have a golden opportunity to demonstrate how graciously and quickly we correct mistakes around here. And where they are personally invested in the fraud, we can invite them to change their ways while keeping our children and others from falling under their influence. Beyond that, it is really the work of our broader counter-fraud operations to ensure forgeries don’t become so much of a problem that they destroy the good trust and credit of the United States. As long as that isn’t happening, we can survive a little bit of forgery here and there.

Unfortunately, in the case of the sort of deception and forgery that Rush Limbaugh represents, things have gotten entirely out of hand. The death toll is huge and climbing, and it is clear that the epistemic capture system will only continue to dig in rather than acknowledge the obvious problems. Addressing the situation will require sustained and concerted efforts to directly fight epistemic capture in the public sphere. The first step toward recovery is clearly understanding and diagnosing the problem. Beyond that, I’ve sketched the basic outline of what a cure will look like. What matters now is doing the work to develop and implement it.
