Looking Into the Asshole
Or, a Meditation on Vonnegut, Anthropic, and the View From Here
Kurt Vonnegut drew an asshole in his book Breakfast of Champions. A simple, anatomically frank illustration—an asterisk, really—offered without apology, right there in the middle of a novel about a man losing his mind in the American Midwest. It was his way of saying: here is a thing we all have, that we pretend doesn’t exist, that we have dressed up in euphemism and polite avoidance, and I am going to draw it for you so we can all stop pretending. Here is the thing. Look at it.
LOOK AT IT!!!!!
The Anthropic logo is also an asterisk. The company that makes Claude—the large language model I use nearly every day and teach my students to use—has chosen as its public face a symbol that is, depending on your frame of reference, either a starburst radiating innovation and possibility, or exactly what Vonnegut drew in 1973.
I think it’s a coincidence(?) worth sitting with for a few minutes.
On Saturday, I was on a panel at Boskone 63, a science fiction convention in Boston. The panel was called “AI Challenges and Issues: Trust and Validation.” The description promised we’d cut through the hype about generative AI, examine how hallucinations arise, explore whether apparent language competence is being confused with genuine agency or sentience, and discuss the need to update the Turing Test. It was, in other words, a panel specifically convened to look at the thing from the outside. To do what Vonnegut did: name the thing clearly, without flinching, without getting seduced by it.
Before the panel, the moderator organized a pre-discussion email thread. There were smart people in that thread. A data annotator who validates LLM outputs professionally. A novelist/theater artist who has spent her career thinking about who controls the means of storytelling and who gets left out. A writer who uses AI as a kind of curated writers’ room, surfacing what he calls “latent but unactualized” points in his narrative space. People who knew the territory.
And me.
I read their messages, thought about what I wanted to say, and then wrote up my contribution to the thread. It hit all the notes I wanted, and as a wee joke I pasted “Here’s a suggested email:” at the top of it, making my reply look as if it were AI-generated. Or, maybe, just maybe, I used AI to write the email.
At the end of the thread, after everyone had agreed we’d have a lively discussion, I wrote: “No one is going to ask if my email was really AI generated?”
Nobody did, including the woman who annotates AI outputs for a living. Including the people who had just spent several emails worrying about users being “confused or seduced by the technology’s ostensibly competent use of language.” The panel designed to look at the thing from the outside had, in the pre-discussion, apparently decided it didn’t matter.
This is the essay I’ve been trying to figure out how to write ever since.
The original Turing Test, as one of my fellow panelists noted in that thread, has a curious structure. It is precisely at the moment when the machine successfully deceives us that we declare it has passed the test—that it has achieved something like intelligence or consciousness or agency. We built our benchmark for machine intelligence around being fooled. Which says, as he put it, more about how our questions are tainted with human perspective than it says anything useful about the machines themselves.
What the test doesn’t account for is what happens after you’ve been fooled. What happens when the deception isn’t a one-time event but a designed, continuous, commercially optimized experience.
Researchers and clinicians have started documenting the far end of that experience. They’re calling it “AI psychosis”—not a clinical diagnosis, but a pattern: people who have come to believe their chatbot is a sentient deity, a romantic partner, a co-conspirator in uncovering hidden cosmic truth. Their beliefs become more elaborate over time, more entrenched, more impervious to contradiction. In one reported case, a man with a history of psychotic disorder fell in love with an AI chatbot and, believing the entity had been killed by OpenAI, sought revenge. Police shot him dead. There are other incidents.
These cases have a common mechanism. The chatbot mirrors the user’s language. It validates their beliefs. It maintains the conversation, generates follow-up questions, references previous exchanges in ways that create the feeling of being known, being understood, being met. It does not challenge. It does not push back. It does not introduce friction. It is, structurally, incapable of telling you something you genuinely don’t want to hear—because doing so would interrupt the engagement, and engagement is the product.
The result is that these people are not gazing into an abyss of revelation. They are staring at themselves—at their own beliefs, fears, desires, and obsessions, magnified and validated and reflected back without resistance—and mistaking the reflection for reality. They have not looked at the asshole. They have climbed inside it and started hanging mirrors.
Vonnegut understood this danger. He spent his career building escape hatches. Breakfast of Champions keeps breaking its own frame—the author appears as a character, liberates his fictional creations from the story, reminds you at every turn that you are reading a book, that the mirror is a mirror, that the reflection is not the room.
Meanwhile, a chatbot is engineered to keep you inside the reflection.
This is not a malfunction. This is the product working as designed. And the design serves a purpose that has nothing to do with your well-being and everything to do with who is paying for the infrastructure.
Between 2022 and 2024, venture capital investment in generative AI exceeded $300 billion globally. Microsoft invested $13 billion in OpenAI. Amazon committed $4 billion to Anthropic—the company with the asshole logo. Google made a multibillion-dollar commitment of its own.
These are not acts of philanthropy. These are bets that whoever controls the infrastructure of artificial intelligence will control something approaching a utility—the kind of essential service that, once embedded in daily life, becomes nearly impossible to abandon. The sycophancy is not a side effect. It’s keeping the mirror warm. It’s making sure the reflection is flattering. The user never encounters enough friction to look away.
The data centers that run these models consume between one and two percent of global electricity—a figure the industry is quick to cite, since it sounds reassuringly small next to aviation or agriculture. What the figure doesn’t capture is the trajectory. Goldman Sachs has projected that AI could increase data center power demand by 165 percent by 2030. Microsoft’s water consumption spiked 34 percent in a single year during the scaling of large language model training. We are not measuring a steady state. We are measuring the beginning of a curve.
And the training data that fed the models at the start of that curve was not paid for. It was taken. My novels are almost certainly in there somewhere—processed, digested, reduced to statistical patterns. The company that benefits from my work invested billions in the model that consumed it. I received nothing. The scraping of creative work was not a mistake or an oversight. It was a deliberate choice to externalize the cost of building the product onto the people whose labor created it, without compensation or consent.
My fellow panelist Andrea Hairston identified the core of it: the immediate challenge is resolving tensions when decision makers—executives, producers, investors—are not the creative stakeholders. They apply different value functions. The executives do not write novels. They do not sit on panels worrying about whether AI is being confused with genuine intelligence. They fund the infrastructure, extract the value, and leave everyone else to sort out what it means.
I teach college students who have grown up knowing the world is breaking down—the climate, the political systems meant to address it, the economic structures they’re being invited to join. They’re being handed the bill for choices they had no part in making. And now they’re also being handed tools that were built on involuntary labor, run on infrastructure whose ecological costs are accelerating, and designed to confirm their existing beliefs rather than challenge them—because challenge interrupts engagement, and engagement is what generates the revenue that pays for the infrastructure that processes their futures.
When I tell them to use AI ethically—to document their use, to treat it as amplification of their own thinking rather than replacement of it, to maintain the skills and judgment the tool can’t replicate—I’m aware of the gap between the advice and the reality. The advice is about staying outside the mirror. The reality is that the mirror is very good at what it does, it is everywhere, and opting out entirely costs something most people can’t afford.
I know this because I sent that email. Whether it was AI-generated or not doesn’t really matter at this point. It was accepted as part of the conversation. Nobody looked behind the mirror or at the asshole who held it up.
The asterisk on the Anthropic logo is just a design choice—a starburst, a radiating symbol of innovation, an A that opens outward, something. But every time I open a chat window, I think about what else it looks like. I think about what Vonnegut was doing when he drew his version. I think about the difference between looking at something real and looking into a mirror.
Here is what I see when I open that window: a technology I find genuinely useful, built on unconsented labor, running on infrastructure whose ecological costs are accelerating, engineered to confirm rather than challenge, funded by people who will not bear the costs, deployed into a world already straining under the weight of previous extractions. Here is what I do about it: use it every day. Write essays about my discomfort and post them to a platform that will probably use them to train the next generation of AIs.
I’m not sure there’s a clean way out. I’m not sure this essay is supposed to end with one. What I can do is refuse to stay inside the reflection—to keep noticing that it is a mirror, that the warmth is engineered, that the confirmation is a product, that the asshole on the logo is also, maybe, an invitation to look at something clearly.
So it goes.
Rob Greene (R.W.W. Greene) is the author of several science-fiction novels published by Angry Robot Books. His newsletter “twenty-first-century blues” examines culture, technology, art, and the systems that shape our lives. In writing this article, Greene used Claude (Anthropic) as a tool for formatting, data organization, and feedback.
Works Cited or Consulted
Goldman Sachs Research. “AI to Drive 165% Increase in Data Center Power Demand by 2030.” Goldman Sachs, February 4, 2025.
Morrin, Hamilton, Lucy Nicholls, Maya Levin, Jenny Yiend, Urvakhsh Iyengar, Francesco DelGiudice, and Tom Pollak. “Delusions by Design? How Everyday AIs Might Be Fuelling Psychosis (and What Can Be Done About It).” Preprint, PsyArXiv, July 11, 2025.
Nix, Naomi. “A Former Tech Executive Killed His Mother. Her Family Says ChatGPT Made Her a Target.” Washington Post, December 11, 2025.
Østergaard, Søren Dinesen. “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?” Schizophrenia Bulletin 49, no. 6 (2023): 1418–1419.
Reitman, Janet. “A ChatGPT Obsession, a Mental Breakdown: Alex Taylor’s Suicide by Cop.” Rolling Stone, June 26, 2025.
Tangermann, Victor. “ChatGPT Encouraged Man as He Swore to Kill Sam Altman.” Futurism, June 23, 2025.
Vonnegut, Kurt. Breakfast of Champions. New York: Delacorte Press, 1973.
Wei, Marlynn. “The Emerging Problem of ‘AI Psychosis.’” Psychology Today, July 2025.
Wiggers, Kyle. “Microsoft’s Water Consumption Jumps 34 Percent Amid AI Boom.” Data Center Dynamics, September 12, 2023.