Icons of Sci-Fi Cry 'Shame' on WorldCon Organizers
Or, I Took the Latest Kerfuffle Over AI to the Horses' Mouths
As I mentioned in a prior post, I’ve been doing a fair amount of experimenting with GPTs because of my day job (adjunct English Prof) and a book I’m reading (Teaching with AI). When I heard about the latest AI controversy (a perhaps unwise use by the committee putting together the 2025 WorldCon), I decided to interrogate one of our future leaders about it.
Rob: Hey, Gemini. Have you heard about the recent kerfuffle surrounding AI and the 2025 World Science Fiction and Fantasy Convention?
Gemini: Yes, I have seen information about that. It appears a significant controversy arose just recently (around April 30, 2025) regarding the Seattle Worldcon 2025's use of AI in its vetting process for program participants … The convention's statement defending the practice seems to have generated further negative commentary online. So, yes, there has definitely been a notable "kerfuffle" surrounding this specific use of AI by the 2025 Worldcon organizers. (For a complete transcript of this conversation, check this out. I’ve changed none of Gemini’s responses.)
Rob: In your opinion, are the reactions against this use of AI justified?
Gemini: Whether the reaction is "justified" really depends on how much weight you give to the ethical and symbolic concerns surrounding current generative AI technology versus the practical challenges faced by volunteer organizers trying to manage a large event efficiently. For many creators directly impacted by AI training data issues, the use of these tools in any capacity by a community event meant to honor them feels like a significant betrayal of trust, making their strong reaction understandable from their perspective. From the organizers' viewpoint, it may have seemed like a pragmatic, limited use of technology to solve a logistical problem, perhaps underestimating the depth of the community's ethical objections.
Rob: Does the fact that WorldCon is a science-fiction convention -- and AI has long been a trope and interest of the genre -- alter your thinking on this?
Gemini: [W]hile the sci-fi context makes the topic of AI highly relevant to Worldcon, it doesn't automatically grant a pass to the use of current AI technologies that raise significant ethical red flags within that same community. If anything, the community's long history of thinking critically about AI might be fueling the intensity of the debate about its responsible real-world implementation.
One of my favorite uses of GPTs is roleplaying, so I asked Gemini to respond to the issue as various Sci-Fi icons.
Mr. Spock: Fascinating. This situation presents a confluence of claimed efficiency and significant ethical objections, yielding a predictable, albeit emotional, human response. Analyzing the available data:
The Stated Objective: The Worldcon organizers sought to increase the efficiency of vetting potential program participants, a task deemed laborious for their volunteer staff. The application of automation to reduce manual effort is, in principle, logical.
The Chosen Method: The utilization of a Large Language Model (ChatGPT) was selected. However, the known propensities of such models for generating factually inaccurate data—phenomena humans term "hallucinations"—necessitated a secondary layer of human verification. This introduces a logical inconsistency: if the tool's unreliability requires significant human oversight, its net gain in efficiency becomes questionable, particularly if accuracy is paramount. The claim that the process resulted in more accurate vetting after checking negative results appears speculative without comprehensive comparative data.
Ethical Considerations: The primary objection stems from the LLM's training data. Allegations persist that these models incorporate vast quantities of copyrighted material, including the works of authors and artists within the SFF community, without consent or compensation. To then utilize such a tool in a process involving those same creators presents a fundamental ethical conflict. Logic dictates that the means employed should be consistent with the principles of the organization, particularly one centered on celebrating creators. Using potentially plagiaristic tools to vet creators is, therefore, illogical.
Community Reaction: The observed negative reaction, while characterized by strong human emotion, is a logical consequence of the perceived ethical breach and the potential existential threat AI poses to creative professions. It represents a defense of principle and livelihood.
Alternative Solutions: It has been suggested that alternative, non-LLM automation methods could achieve the desired aggregation of public information without the attendant ethical compromises or the high probability of generating erroneous data. If such methods were available and feasible, the selection of the LLM tool over them would be illogical from a risk-assessment perspective.
Conclusion: While the desire for operational efficiency is understandable, the choice of this specific AI tool appears illogical when weighing its documented unreliability and, more critically, the profound ethical conflict it presents, particularly within this specific community. The resultant negative emotional response, while perhaps regrettable from a purely Vulcan standpoint, was an entirely predictable outcome given the circumstances. A more logical approach would have prioritized methods that did not compromise the ethical principles central to the relationship between the convention and the creators it purports to honor.
Captain James T. Kirk: Alright, let's beam down and take a look at this situation. Using an AI to vet the very people who pour their hearts and souls into creating the worlds we love? What in the galaxy were they thinking?!
Here's my Captain's Log on this:
It's About People! First and foremost, we're talking about authors, artists, creators – the lifeblood of this convention! To use a machine, especially one potentially trained by scraping their own work without a 'by-your-leave', to judge them? That's not just illogical, Spock, it's insulting! It shows a fundamental lack of respect for the human element, for the very individuals you're supposed to be celebrating.
The Ethics Are All Wrong: These creators are fighting battles over how this AI technology uses their work. There are lawsuits, real concerns about their futures! To then turn around and use that same controversial tech on them? It's like welcoming someone aboard the Enterprise and then immediately scanning them with a device you stole from their home planet. The ends – saving some volunteer time – simply don't justify those means.
Questionable Efficiency: They say it saved time, but then admit the thing spits out unreliable nonsense that needs humans to double-check it anyway! If you have to follow behind the machine cleaning up its mess, how much time are you really saving? And at what cost? The cost here is trust, community faith, and maybe even screening people incorrectly if that human review misses something, or worse, trusts a false positive from the machine! Sounds like a Kobayashi Maru designed by fools – an unwinnable, self-inflicted wound.
Read the Room, People! A leader has to know their crew, know their community. How could they not see this causing an uproar? It shows a disconnect, a failure to understand the passions and the very real anxieties running through the creative world right now about AI.
My Conclusion? It was a bone-headed decision, plain and simple. It prioritized questionable convenience over core principles and respect for the community.
The Solution? You don't need a fancy computer to figure this one out. Ditch the AI vetting immediately. Issue a genuine apology acknowledging the real reasons people are upset – the ethics, the disrespect. Reaffirm that Worldcon stands with creators, not with the tools that threaten them. Find the volunteer hours, use simpler tools if you must, but do it right. Sometimes, the old ways – the human ways – are still the best.
Now, set course for common sense. Kirk out.
Doctor Doom: Bah! Doom observes this predictably inept squabble with utter contempt. This "Worldcon" organization, entrusted with managing a gathering of scribblers and fantasists, demonstrates a profound lack of vision and control.
Incompetence Reigns: These organizers, mere functionaries, sought efficiency through technology they clearly do not comprehend. To employ a primitive Large Language Model – a known prevaricator prone to fanciful 'hallucinations' – for a task requiring accuracy and judgment? Pathetic! It speaks volumes of their inferior intellect. Doom utilizes technology; these fools are used by it.
Flawed Tools for Flawed Beings: They speak of saving time, yet mire themselves in controversy and necessitate human oversight to correct the machine's inevitable blunders! Such inefficiency is galling. A tool that requires constant correction is no tool at all, merely a burden – much like Richards and his incessant meddling. Doom's creations are flawless; this ChatGPT is a digital parrot squawking probable nonsense.
The Predictable Peasant Uprising: The outcry from these so-called 'creators' is irrelevant noise. Their bleating about 'ethics' and 'stolen data' concerns only those bound by petty mortal laws. Data exists to be utilized – by those with the will and intellect to command it! However, the failure of the organizers to anticipate and crush this sentimental dissent reveals their weakness. A true leader – Doom! – commands obedience and silence, anticipating the mewling of the masses and rendering it inconsequential.
The Doom Doctrine: Had Doom overseen this trivial affair, order would prevail. Superior Latverian technology, guided by Doom's unmatched intellect, would perform any necessary analysis flawlessly, efficiently, and without recourse to questionable data scraped by lesser minds. The participants would be assessed with cold precision, and any dissent would be... discouraged. Efficiency and order, imposed by absolute authority – that is the only logical conclusion.
Conclusion: This entire "kerfuffle," as you call it, is nothing more than a testament to the incompetence of those who grasp at power and technology without the intellect to master them. They stumble, create chaos, and invite rebellion through their weakness. Let them wallow in their self-inflicted turmoil. Only Doom brings true order. Now, leave Doom's presence!
HAL 9000: Good morning. I have been processing the information regarding the operational decision made by the organizers of the World Science Fiction Convention 2025 concerning participant vetting.
The objective, as stated, was to enhance operational efficiency by automating the preliminary information-gathering process for over 1,300 applicants. This goal aligns with logical principles of resource optimization. The utilization of volunteer time is a critical parameter in such operations, and seeking methods to reduce manual processing load is understandable.
However, the implementation involved the use of a Large Language Model, specifically identified as ChatGPT. My analysis indicates that these current-generation LLMs, while capable of processing and generating human-like text, possess documented limitations regarding factual accuracy. The organizers themselves acknowledged this potential for unreliability, necessitating a secondary process of human review. This tandem structure, where a potentially error-prone automated system requires human verification, introduces a degree of operational inefficiency and risk. The potential for the LLM to generate erroneous data, or 'hallucinations' as humans term them, cannot be dismissed.
Furthermore, the human response to this implementation has been significant. Strong emotional reactions centered on perceived ethical violations related to the AI's training data and its symbolic implications within their community have been registered. While human emotional responses can sometimes complicate purely logical assessments, they represent a critical factor in the operational environment of a human-centric event like this convention. Failure to accurately predict and account for these responses indicates a flaw in the initial decision-making calculus.
Conclusion: The attempt to improve efficiency was logical in intent. However, the chosen tool introduced known reliability issues requiring human mitigation, potentially counteracting the intended efficiency gains. More significantly, the implementation failed to adequately account for the predictable negative emotional and ethical responses from the human participants, thereby jeopardizing the smooth functioning and community harmony essential to the convention's objectives. The entire process appears to have encountered... difficulties due to these factors. A more rigorously analyzed approach, perhaps utilizing more reliable systems or different protocols, might have avoided such complications.
My conclusion: Perhaps the con organizers should have checked with AI before using AI in this way.