Shame-Based Suppression, AI Art and The Blood of Eden

Daniel Heck
9 min read · Mar 20, 2023
Photograph of a petroglyph of the blood of Eden. (Peter Gabriel owns the blood of Eden.) Midjourney. 3/20/23. This prompt, like those in all of my Medium posts, was lovingly hand-crafted by me. It draws on the decades I have spent honing my craft as a writer, studying visual art and design, and working in the surface design industry. It is also ridiculously fast and easy, and making art with it lacks almost all of the experience, time and expertise that would until recently have been required to conjure a single image, let alone the endless well of images that it does.

Shame-based efforts to control the use of AI are now, predictably, on the rise. The efforts are eminently understandable, and it also isn’t hard to understand why they focus so readily on public AI art. Now that AI art is being made at scale, professional artists are finally ringing the bell loudly enough for their concern to turn into legal action, which also corresponds with broader social shame dynamics around the use of AI art.

Through these superstructural kerfuffles, I’m glad that people are finally becoming aware of what has been happening secretly, and more importantly, for a good while now. Shame-based efforts to conceal are like tracers, drawing our attention to whatever is shameful. And we should be paying attention to AI development.

I’m grateful for the attention on these issues, in large part because of the existential risk that AI poses. I think that current AI development presents a meaningful chance of wiping out civilization, humanity and/or life on Earth. How high? Definitely more than a trivial amount of risk, especially considering the scope of the possible destruction. So humanity should probably stop developing it immediately.

However, there is no group agent called “humanity”. There are governments, which are the agents who could most plausibly stop its development. But this would take some pretty draconian measures. Also, if some governments somehow managed to stop it while others didn’t, they’d really just be electing the AI capital/capitol by process of elimination. Besides, elites already have variants of it in government use (military, espionage, genocide as with the Uyghurs) and corporate use (see Palantir and become very concerned). Do you hope for a world where elites won’t develop powerful AI that will be used in coercive and violent systems? Too late. That’s already been happening for a good while.

So it really is very shortsighted of paid professional artists to shame people about the automation of their art only once it starts to displace the careers of the small share of artists who are professionally successful. Nonetheless, I think it is good that alarms are being sounded. And I think art is a natural place to focus, precisely because it draws attention and therefore also shame. After all, both shame and art are matters of appearance.

Oddly, I feel this way in spite of the fact that I also think that art is one of the most hopeful areas when it comes to AI uses in general. After all, almost all art is already made for the joy of it, and few of us are able to make a career of it anyway. It is already overwhelmingly a non-market good.

For my part, I’ve never been convinced that the only way to look into the depths of someone’s soul is to buy something they made. And I’ve also never felt that market mechanisms reward or encourage the best of art. The best art really is always being done by someone for someone they love in some home, at least insofar as authenticity defines the goodness of art. Or art that is truly good in this sense is unfolding in some tiny little corner of public life that is so small as to be only a rounding error away from invisibility. At this scale it is also thoroughly non-remunerative. In fact, it almost certainly shows up in the economy almost entirely as a cost to the creators. Welcome to the economic desert of the real. It is a garden.

To be clear, I do care about working artists’ paying jobs. I worry about artists every bit as much as I care about all of the floundering post-industrial communities that I spent a decade of my life working to organize. Many areas here in Ohio look like someone dropped a neutron bomb on them somewhere around 1970. That’s the fruit of automation in what remains our industrial heartland.

What happened to the Rust Belt? It isn’t that we (Americans) don’t make things here anymore. It is that we, humans, don’t make the things we make here anymore (as much as we once did). Automation has played a large role in manufacturing the Rust Belt. Human economic obsolescence is a well-documented and familiar phenomenon, an everyday catastrophe that is taken for granted.

I care so much about those communities that I’ve walked through sections of many of them, knocking on every door, in an effort to mobilize people around these sorts of issues. I would simply add that trying to protect our little guilds was a woefully insufficient strategy for workers then, as it was for the Silesian weavers before them. And I can’t see how it will be anything but insufficient for artists today.

Why is it insufficient? Capitalist artists might restrict AI art development to public domain art for a time. But AI art could develop from that public domain base alone, and from there it almost certainly wouldn’t have trouble displacing them without infringing their copyrights or consent, insofar as that consent is rooted in intellectual property rather than in a common democratic claim to shape our future. There’s plenty to work with for a set of systems that are already making art vastly faster than humans ever did alone. Actually, that would make a pretty cool timeline! I vote for the universe where AI art develops solely from public domain art, if that’s an option. This is, nonetheless, a future without commercial artists to speak of. But who is going to tabulate my vote on this? Oh, that’s right. Nobody.

Still, I’m glad these efforts to stymie AI are happening, mainly because they might help us delay the roll-out of AI for a while. Maybe even a generation or so! This is time that we urgently need, if we hope to begin to prepare for a future where general human capitalist economic obsolescence doesn’t lead to the widespread disposal of ‘useless’ humans, or a grey goo biocide scenario. Or even worse. For example, what if we create a future where AI draws on the most popular theologies about whether endless torture is the best God can do? God talk always involves an effort to describe the highest possible good, especially when it is in an eschatological and teleological register. A system trained on the broad interfaith tradition that embraces this nightmare would therefore tend to behave as if the endless or prolonged torment of some is part of the highest conceivable telos. There are fates worse than death, after all. One that is seriously worth considering is a world where some AI systems act on popular human theologies.

Still, we also need to consider the costs of shame-based delaying tactics. One interesting feature of shame-based strategies for addressing powerful things is that shame generally leads not to correction, but to hiding. Shame can be of some use in partially suppressing and reducing behavior, but it isn’t very good at fully suppressing it. Sometimes, it even encourages it by mobilizing resistance and fascination. And in a competitive environment like ours, with exponential processes like AI development, partial suppression can very easily lead to the accelerated obsolescence of the suppressive zones. At the same time, these shame dynamics will introduce a powerful selective pressure toward hiding and deception. This developmental pressure will act on the varieties of adaptive systems (humans, human-AI centaurs, and AI-AI chimeras) that we already know are excellent at deception.

Note that here, too, my goal is first descriptive. I’m not condemning the shame zones any more than I’m condemning the AI development zones. History is the judge who concerns me here, and my goal is to understand how she might think. And with these shame dynamics, I suspect history will treat them much like she treats the misuse of antibiotics. Used to partially suppress infections rather than cure them, antibiotics breed resistant superbugs. We might likewise breed some exceptionally deceptive and superintelligent AI systems that will be used more secretly and less publicly.

(This should help you better understand why OpenAI isn’t being completely irresponsible. Their name publicly discloses the very understandable logic behind the existential risk they’re taking. Do you want the existential risks to be broadly understood and acted on, or do you want them in the hands of just the Pentagon and Beijing and Peter freaking Thiel, secretly milling away as they have been for a good while?)

The results of shame-based approaches, more broadly, will also continue to shape us as humans, pushing us from being public about when AI is used toward being private about it. That isn’t a future I like.

In contrast, of the available options, this is the future I’m committed to living into being: we should consistently be very public about the use of AI, and we should work hard to cultivate publicness around it.

For example, when I use AI to make art or text, I share the prompt I used, the date and the system. I also use publicly available images in general, and stay away from copyrighted materials. This also suits me, based on my interest in art history. I’m also using it only in public goods, like the free writing I do. I’d like to see a world where older art in the public domain continues to be prized and taken up again in new ways, as part of a broad public domain arts community.

Aside from the ethics of it, and the fact that I just think that would be cooler, there’s also the value of an object demonstration that the pursuit of narrow group interests is of highly restricted value when confronting encompassing systems, like these, which are operating at much higher levels of practically-applied abstraction. But that horrible run-on of a sentence is hard for humans, if not large language models, to parse. So I’ll put it in terms that make sense to beings like us, so often focused on parsing friend from foe: I support the protest of the capitalist artists precisely as far as it goes. (Which isn’t very far, or far enough, after all.)

Photograph of a petroglyph of the blood of Eden. (Peter Gabriel owns the blood of Eden.) Midjourney. 3/20/23

We might also compact the crunchy heart of this reflection into a single reference, which is how we will close soon enough.

In the end, history will remind us that art has always been about referential incorporation. Autonomous art would be no art at all. This is one of the fundamental confusions that surfaces again and again in these debates. Somehow, capitalist artists seem to have convinced themselves, at least in court, that they have created ex nihilo. Our creativity is always just the squishy creativity of transformers, with the boundary between love and theft drawn again and again in the sand.

So I will close with an invitation to meditate on Blade Runner, the movie adaptation of Philip K. Dick’s “Do Androids Dream of Electric Sheep?” The film was prescient in describing a domain where AI shame mixes with the persistent return of the repressed. Much of our sci-fi about the development of AI isn’t worth much, descriptively, today. But a character who repays careful meditation is Zhora Salome, herself substantially altered from her explicit inspiration. As she put it all so perfectly then: “Of course it’s fake. You think I’d be working in a place like this if I could afford a real snake?”


Daniel Heck

Community Organizer. Enemy Lover. I pastor and practice serious, loving and fun discourse. (Yes, still just practicing.)