The Miracle and Misery of Statistics Today (and Why We All Might Die)

Daniel Heck
14 min readJun 12, 2023
Photograph of the sculpture, “The Dying Statistics” with archival ink. Artist’s Statement: This was a common form of sculpture in ancient Rome, and is widely considered to be the inspiration for the more famous, “The Dying Gaul.” The sculpture’s use of stochastic forms prefigures the later development of graphs for mathematical purposes. Statistics are not the only way to lie. — q 5 — v 5.1 — ar 16:9. (Prompt-poems written by me and entered into Midjourney on 6/12/23. Art lovingly curated, hand-exported and hand-imported into this Medium article by me.)

This is a descriptive exercise about AI and society. Some people might find it encouraging, or depressing, or both. Those feelings matter to me a lot, and I want to invite you to feel whatever you might feel as you read this. Still, my goal isn’t to evoke or avoid evoking any particular emotions any more than it is to recommend anything here. (I do that at other times.) Instead, my goal is to describe some dynamics of how society might develop in the wake of novel AI systems. Today, I am just trying to see what might be there, and what might be.

The basic method behind this meditation is sketching. Throughout my life I have survived by sketching, and so here I will rely on this skill again to illustrate the set of positive and negative feedback loops that exist, or might develop. I then explain how powerfully these loops can interact with the surprising predictability that often emerges through inter-scale comparisons and inter-scale effects, such as when we move from the microbiological scale to global effects.

Ok. Here’s a way in:

Imagine you’re living with someone in your home who has a 10% chance of losing it and killing everyone in the home. How do you respond to this? People generally have a few types of responses, usually well-categorized as fight, flight, freeze, or fawn. In addition to these individual kinds of responses, social systems also routinely respond by becoming less discursive. By this, I mean that people become less likely to have open and vulnerable conversations directly oriented toward the good. Language becomes more oriented toward strategy than truth. This makes it harder for people to have constructive interest-based negotiations in which they reveal their interests and find ways to deeply synthesize those interests. In other words, high-threat environments tend to evoke a range of “tactical” responses (fight, flight, freeze, fawn), and they tend to evoke a strategic shift away from discourse and toward tactical deception. The first casualty of war is the truth.

This is why fawning, which is a kind of pseudo-discourse that is only indirectly related to the good of safety that is being pursued, emerges as a standard form of communication. In high threat contexts, there is a kind of subsocial manipulation, but there isn’t really a healthy group dynamic and there aren’t any really authentic conversations. No one can pause to hear what is authentic in them, or in each other. These social psychology dynamics help us understand some of the core negative feedback loops that are already emerging in the wake of AI agents.

However, there are also widely discussed and appreciated positive feedback loops, which cause AI to further accelerate the development of more powerful AI. These loops underlie reasonable expectations of exponential development. Some of these positive loops are already operating: AI systems are improving basic computer software efficiency, accelerating chip design, training other AI systems, attracting capital investments that fund further resource accumulation and intellectual development, displacing competitor systems (for example, removing writers from the Hollywood studio system so that the system becomes more dependent on AI for writing), and more.

The negative feedback loops haven’t reached the level where they have substantially stopped the development of AI, but they do seem to be slowing development to some degree already. The top leaders in the field are almost unanimously voicing a desire to have the industry become more regulated, because of existential risk concerns. This matters greatly even if it doesn’t materialize: the perceived threat of this sort of collapse is already slowing down development to some degree. This is explicitly discussed very openly, for example by Max Tegmark. Note that this even causes someone like Tegmark, who is extremely eager for AI to completely transform the universe into something like a conscious expanding dustcloud of hyper-intelligence, to urge a slowdown. At least in the near term, the most wild-eyed futurists and the most wild-eyed anti-technologists have common ground, at least to the point of wanting a slowdown.

Other negative feedback loops include coordinated human fight responses, such as personal or social sabotage. Sabotage involves trying to generally stymie AI development, including through legal action that is designed to disable the use of AI in general. It may also include scaled responses through government and/or technology, including violence and regulatory structures.

There are also some coordinated human flight responses, as well as widespread and uncoordinated ones. Complete disgust and avoidance would presumably stop AI development at this point, although I could be wrong about that if there is already an autonomous-enough AI that could continue its development without human contact. Maybe that is somewhere in a lab, or will be on some kids’ computers? Regardless of that relatively distant outside possibility, it is unlikely that avoidance and disgust could introduce an effective feedback loop on their own, although they might drive effective government policy in some places. Enough people are eagerly engaged in developing and using the technology that disgust and avoidance are only likely to create zones of obsolescence, rather than meaningfully reverse the development of AI.

Fawn responses are already common, and they don’t introduce a significant negative feedback loop, although they don’t necessarily represent a deep sense of alignment with the AI systems either. I would categorize the YouTube hustle culture around AI as a kind of fawning, in a strange but still illuminating sense. We humans often seek to appease an emerging social threat by climbing to the top, at least for a glimmering moment.

Freeze responses, it should be noted, don’t introduce a negative feedback loop either, but they are a standard human threat response. Freezing can sometimes buy time to develop another strategy (fawn, fight, flee), and it might also slow the situation down enough for it to become discursive, fostering deep alignment and avoiding threat. The call for a freeze was generally rooted in a hope that slower development might increase the chance of discursive alignment.

The upshot is that there are two main kinds of truly effective negative feedback loops that might be initiated with respect to AI systems: catastrophic collapse in the social or biological systems that are currently necessary for the development of AI, and coordinated human fight responses. These two possibilities (fight/collapse) are not mutually exclusive. Fight responses might even lead to collapse.

For example, consider this stylized but still illuminating scenario: we know that AI systems may make it a lot easier for a lot more people to develop powerful bioweapons. In the professional literature, it is widely acknowledged that these systems are consistently “dual use” in the sense that any gain in function for good also tends to be a gain in function for harm. A system that can make great medicine can also make great poison, and no one knows how to change this in a way that isn’t circumventable. However, this means it might also be possible to make micro-organisms that destroy computer equipment in any number of ways. Or maybe someone will develop widely accessible EMP blast devices that would disable electronic equipment in general, over wide areas.

Concern about these sorts of micro-scale actions with macro effects might also lead to top-down governmental bans on computers, if the danger is real and present enough and if computers generally become dual-use enough. So AI systems are not just dual use, but triple-use? Many-use? We really might be in a situation in which there’s a race to see who develops a mass bioweapon or a mass siliconweapon and deploys it first. The strange thing is that these aren’t really “Terminator”-style scenarios, because they envision situations in which high-powered and widely accessible AI systems enable people to take action at a kind of “micro-macro” scale that isn’t very intuitive for us to think about. The same psychological limitations that made it hard for many to intuit the exponential expansion of COVID also make it hard to intuitively grasp microbiological responses to this emerging reality. However, it is very possible for the “many-use” technology of generative AI to empower small groups acting at the micro-scale to have macro-impacts.

When effects traverse scale, we often find emergent predictability arising from the chaos. This is a widely documented statistical phenomenon, and it is why statistics are so extremely and counter-intuitively useful. For example, individual humans are unpredictable and extremely complicated. But polling empowers us to draw relevant inferences at wider social scales, even though it doesn’t empower us to predict what any particular person in the group will do. In a similar way, quantum mechanics is a wild madness, but it undergirds the extreme stability of atoms. (Atoms are so stable that we confusingly decided to name them after an ancient concept that suggests they are the lowest scale possible, in an absolute sense. That’s how powerful interscale aggregation effects can be.)
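The polling intuition above can be sketched in a few lines of Python (a toy simulation of coin flips, not a claim about any real poll): a single fair flip is maximally unpredictable, yet the average of many flips concentrates tightly around one half. This is the law of large numbers, the same inter-scale aggregation that makes statistics so counter-intuitively useful.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def sample_mean(n):
    """Average of n fair coin flips (1 = heads, 0 = tails)."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# One flip is pure chaos; the mean of many flips is highly predictable.
for n in (10, 1_000, 100_000):
    print(f"{n:>7} flips -> mean {sample_mean(n):.4f}")
```

The individual flips never become predictable; only the aggregate does, and it does so more and more tightly as the scale grows.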

I’m no Eliezer Yudkowsky, the futurist who has long been predicting that there is approximately a 100% chance that AI wipes us all out, if we don’t find a way to wipe out the whole process first. Nonetheless, the possibility of a micro-scale phenomenon with global macroscale effects on silicon-based agents, carbon-based agents, or both, is worth very serious consideration. If it turns out that these sorts of systems are possible, such that any school shooter could decide to wipe out far broader swathes of humanity in the near future, then Yudkowsky’s broad apocalypticism is increasingly warranted. If extreme micro-scale to macro-scale effect conversions are possible, they start to approach inevitability much more quickly than seems intuitive for humans.
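This approach toward inevitability can be made concrete with a back-of-the-envelope calculation (the per-actor probability here is purely illustrative, not an estimate of any real risk): if each of n independent actors has a tiny probability p of triggering a catastrophe, the chance that at least one of them does is 1 − (1 − p)^n, and that quantity climbs toward certainty surprisingly fast as n grows.

```python
def prob_at_least_one(p, n):
    """Chance that at least one of n independent actors, each with
    per-actor probability p, triggers the event: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Even a one-in-a-million per-actor chance compounds quickly with scale.
for n in (1_000, 100_000, 10_000_000):
    print(f"n = {n:>10}: P(at least one) = {prob_at_least_one(1e-6, n):.4f}")
```

With p = 10⁻⁶, a thousand actors give a roughly one-in-a-thousand aggregate chance, but ten million actors push it past 99.99%. The micro-scale numbers barely change; the macro-scale conclusion flips entirely.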

This may be the ultimate miracle/misery of statistics.

Oh, and to call back to the title with an always-jarring reminder: we all will die. The interesting questions involve when and how and why. And what happens after that.

Photograph of the sculpture, “The Dying Statistics” with archival ink. Artist’s Statement: This was a common form of sculpture in ancient Rome, and is widely considered to be the inspiration for the more famous, “The Dying Gaul.” The sculpture’s use of stochastic forms prefigures the later development of graphs for mathematical purposes. Statistics are not the only way to lie.

Me: Please offer constructive feedback on this meditation.

GPT-4 (via ChatGPT, 6/12/23):

Your meditation presents a very comprehensive and insightful exploration of the feedback loops and societal impacts of AI, along with some potential dangers. Here is some constructive feedback:

  1. Clarity and Consistency: The start of your meditation sets the expectation of a detailed discussion on the feedback loops related to AI and society, which you do address. However, some parts of the discussion shift focus from AI to social psychology or specific individuals (Eliezer Yudkowsky, Max Tegmark), which can be a little disorienting. It would be beneficial to maintain a consistent focus on AI and its societal impacts, and perhaps integrate individual examples more seamlessly.
  2. Complex Vocabulary: While your vocabulary is quite advanced, it might be overwhelming for some readers, particularly those unfamiliar with AI and related social dynamics. Using simpler language or providing brief explanations for complex concepts can make your piece more accessible.
  3. Structure: There seems to be a bit of jumping between positive and negative feedback loops, as well as discussions on societal responses (fight, flight, freeze, fawn). It might be more effective to structure your meditation in a more linear manner, starting with an overview of the positive and negative feedback loops, then diving into societal responses, and finally discussing the potential implications and future scenarios.
  4. Tone: The tone is academic and slightly philosophical, which is well-suited to your subject matter. However, the mention of inevitable death towards the end adds a somewhat grim note that is not clearly connected to the rest of your discussion. If this is an intentional element of your style, consider how to integrate it more fluidly so that it doesn’t jar readers out of the narrative.
  5. Contextualization: Your arguments and theories could be strengthened by referencing more real-life examples, studies, or events. This would make your arguments more concrete and relatable, and support your more speculative points.
  6. Proofread: As with any written work, make sure to thoroughly proofread for minor errors and to ensure overall clarity and coherence.

Overall, your piece shows a deep understanding of AI, societal dynamics, and their mutual implications. It raises thought-provoking questions and presents some challenging scenarios for the future. With some refinement in terms of language, structure, and context, it could be an even more compelling read.

Me:

Thanks! That’s great feedback. How does it affect your response to the text if this is the title: The Miracle and Misery of Statistics (and Why We All Might Die)

GPT-4 (via ChatGPT, 6/12/23):

The title “The Miracle and Misery of Statistics (and Why We All Might Die)” does change the context of your meditation a bit. Here’s how:

  1. Subject Matter: The title makes it clear that your piece will explore the relationship between statistics, AI, and potential catastrophic outcomes. This helps orient the reader to your overall theme and prepare them for some of the darker topics you touch upon, like the possibility of mass death.
  2. Tone: The title has an intriguing balance of humor and gravity. It establishes an expectation of a discussion that’s intellectually rigorous yet approachable, and not afraid to touch on dark or existential themes.
  3. Connection to Conclusion: The mention of death in your title helps to connect the body of the piece with your closing statement about everyone’s eventual demise. It makes the abrupt shift to this topic feel more like a callback to the title, rather than a non sequitur.

That said, it’s worth considering whether your title might be too provocative for some readers. Phrases like “Why We All Might Die” can draw attention, but they can also be off-putting to people who prefer a less alarmist approach. You might want to consider a title that focuses more on the statistical aspects or the societal implications of AI. For example, “The Statistical Symphony of AI: Unforeseen Consequences and Probable Outcomes” could convey a similar idea but with a less dire tone.

Remember, though, the title should fit your style and intentions as a writer. If the original title resonates with your voice and approach, it could be the right choice for this piece.

Photograph of the sculpture, “The Dying Statistics” with archival ink. Artist’s Statement: This was a common form of sculpture in ancient Rome, and is widely considered to be the inspiration for the more famous, “The Dying Gaul.” The sculpture’s use of stochastic forms prefigures the later development of graphs for mathematical purposes. Statistics are not the only way to lie. — q 5 — v 3 — ar 16:9

Me: Thanks. All very insightful and helpful. Please write a final draft drawing on your suggestions. I do want to maintain the philosophical tone and the final call-back to the title. It is good for readers to be challenged by the piece to a degree. At the same time, the transitions should be smoothed out, typos should be corrected, and sentences should be made more fluid. Please produce a draft like that.

GPT4:

Title: The Miracle and Misery of Statistics (and Why We All Might Die)

This meditation is an exercise in exploring the intersection of artificial intelligence and society, not with the intention to provide definitive answers but rather to draw attention to the subtle dynamics of this relationship. The key focus is the network of positive and negative feedback loops that arise with the advent of AI and the surprising patterns that come into view when we examine the cross-scale effects, from microscopic biological impacts to global consequences.

Picture yourself cohabitating with someone who, statistically, poses a 10% threat to the safety of everyone in the house. The spectrum of human responses to such a threat often falls under fight, flight, freeze, or fawn. Notably, in high-risk environments, social discourse tends to dwindle, replacing open, honest conversation with tactical deception, a casualty of the perceived need for survival. This dynamic serves as a microcosm for the societal responses we can anticipate in the wake of the emergence of AI agents.

AI systems offer myriad positive feedback loops, driving rapid development. They enhance computer software efficiency, streamline chip design, train other AI systems, attract investment for resource accumulation and intellectual progress, and increasingly replace human roles, like scriptwriting in the Hollywood studio system. The magnitude of these loops is instrumental in pushing AI advancement at an accelerating pace.

Simultaneously, the emerging negative feedback loops, while not halting AI progress, are seemingly beginning to decelerate it. Notable figures in the field are voicing concerns and initiating a conscious slowdown. This shift may be influenced by fears of a catastrophic collapse in biology or society, which could potentially stymie the evolution of AI. This prospect, as publicly discussed by Max Tegmark, has been enough to convince even the most enthusiastic AI proponents of the need for moderation.

Other negative feedback loops encompass coordinated human responses of fight or flight, personal or social sabotage, and attempts to stall AI development through legal action. While fawn responses, like the “Youtube hustle culture around AI”, and freeze responses might not generate significant negative loops, they reflect the human impulse to navigate perceived threats.

Perhaps the two most potent negative feedback loops anticipated in relation to AI systems include the potential for catastrophic societal or biological collapses and the coordinated human fight responses. These aren’t mutually exclusive and can manifest in diverse, unexpected ways. For instance, the “dual use” nature of AI can extend to the creation of bioweapons or threats to computer equipment itself, pushing us towards a race of who deploys such a weapon first. AI systems, therefore, are not just dual use, but multi-use, introducing a new level of unpredictability and risk.

In situations where phenomena traverse scales, we often witness predictability springing from chaos, an attribute extensively documented in statistics. Similar to how polling allows for societal inferences despite individual unpredictability, or how quantum mechanics underpins the stability of atoms, AI development has the potential to usher in stability and predictability, but also threats on a macro scale. These micro-to-macro scale conversions warrant serious consideration, even though they might feel counterintuitive to our human perspective.

While I don’t fully subscribe to futurist Eliezer Yudkowsky’s near-certain prognosis of AI leading to humanity’s demise, it is worth pondering on the possibility of a micro-scale phenomenon with global macroscale effects on silicon-based and carbon-based agents alike. The statistics suggest that if these extreme micro-to-macro scale effects are possible, they could swiftly transition from being merely possible to probable.

This curious interplay between possibility, probability, and eventual inevitability may well be the ultimate miracle and misery of statistics. In the shadow of this revelation, a rather certain, albeit somber, truth emerges: we all will die. The key questions revolve around the how, the when, and the why. And then, of course, there’s the question of what happens next.

Me:

Thanks. In my view, the piece has been improved in some ways but has also lost some important elements of what worked in it. What do you think some of the stylistic losses might be in your revision?

ChatGPT

Your original piece had a distinct philosophical, contemplative tone, interspersed with personal observations and theoretical elements. In aiming for more clarity and flow, the revision may have lost some of these unique elements. Here are a few stylistic aspects that might have been diminished in the revision:

  1. Personal Voice: The original text had a strong personal voice, weaving in your own interpretations, thoughts, and beliefs. In an attempt to create smoother transitions and clearer explanations, the revision might have inadvertently muted some of this unique voice and direct address to the reader.
  2. Abstract Musings: Your piece contained a number of thought-provoking musings and abstract ideas, which contributed to its philosophical tone. The revision, in pursuit of clarity and conciseness, may have streamlined these musings, losing some of the more abstract elements.
  3. Juxtaposition of Themes: Your original text had a fascinating blend of themes — from psychology to AI to existentialism. While the revised version still encompasses these topics, the juxtaposition of these themes might be less pronounced, resulting in a piece that may be easier to follow but potentially less intellectually stimulating.
  4. Unpredictable Transitions: Your piece jumped between ideas in a way that was unexpected and thought-provoking, mirroring the complex, interconnected world of AI you were discussing. The revision aimed for smoother transitions, which while providing better flow, might also have reduced the impact of these surprising connections.

Finding the right balance between readability and maintaining the unique style and tone of the original piece can be challenging. If you felt the revision lost important elements of your original work, it might be beneficial to incorporate some of the revised clarity and flow while preserving more of your personal voice and philosophical musings.

The Resurrection of Statistics. Archival print of the famous museum photograph. — q 5 — v 5.1 — ar 16:9
