An ethical case against LLMs: principles, choices, consequences
I jumped for joy when I saw Kyle Wohlmut’s talk on the METM25 programme. At last, someone was going to make sense of what my sceptical antennae had been twitching about since 2020, when Google fired AI ethicist Timnit Gebru for calling out dangerous racial and gender biases in LLMs. I wasn’t disappointed. Kyle’s intense yet entertaining presentation was grounded in solid research, but so jam-packed that I can only report a few choice takeaways here, and I urge MET members to look up his slides and references in the METM archive (sign-in required).
To the point: Kyle argues that there is no ethical use case for AI. These tools are unethical by design; ergo, their output will also be unethical. They are environmentally unsustainable, make no business sense, cause untold societal harm, and are built on a fraudulent business model. The message I took home was that as ethical actors in society, as individuals who make choices, we have the agency and the duty to reject and resist these AI tools. Because, as Kyle would repeat throughout his talk, choices have consequences.
“AI is not a thing”, he told us. It’s just existing technology repackaged by AI marketing departments to convince us that AI means the LLMs, GPTs, image generators and other applications they are trying to sell us. The ethical problem lies in these products.
Kyle introduced what he called the surface-level ethical problems of GenAI: IP theft, labour abuses and the environment. Take IP theft: AI companies admit to stealing trillions of copyrighted words because they couldn’t have built their LLMs otherwise. The consequence of their choice is a “tainted, illegitimate dataset”. But that wasn’t enough to generate images, so they scraped almost the entire content of the internet. Without including pornography, they would not have had enough training data to generate images of women’s and minority groups’ faces. You could almost hear the jaws dropping across the room as that sank in.
Likewise, labour abuses. That oh-so-convenient ChatGPT email has been filtered by hordes of hidden workers, from Kenya to Colombia, paid a pittance to identify toxic content in the LLMs, their mental health wrecked in the process. And the environment. The industry’s demand for energy boggles the mind. Accurate estimates are elusive, but the projected energy needed to power this infrastructure comes at a huge cost to the environment and the health and well-being of local communities.
Then there are the below-surface-level ethical problems. Bias, for example. Legal, employment or financial GenAI decisions will be biased because the training data they are based on are themselves inherently biased and come from, Kyle reminded us, “a pretty sick place”: the entire internet. As this tainted output gets published and feeds back into subsequent models, human rights are slowly and violently undermined.
Humanity itself is being eroded. The AI industry downgrades the human experience to the level of the machine, rather than raising the machine to the human level. Indeed, the real-world harms are so numerous that, according to AI bro Sal Khan, “We all have to fight hard for the positive use cases”, because the list of evil use cases is endless.
And, it’s a scam! It’s not profitable and is unlikely to ever be so. The much-heralded artificial general intelligence (AGI) that’s “just round the corner” is no more a thing than AI is, as the industry itself admits. When OpenAI has a product that is generating $100 billion in profits, they’ll declare AGI has been achieved.1
Fortunately, Kyle ended on a hopeful note. Everything suggests that the bubble will burst, and there are already signs of buyers’ remorse, prompting some clients to come back to real human language professionals. And because we still have principles, we can and must push back in the meantime, as summarised in Kyle’s slide:

1 Stephanie Palazzolo, “Microsoft and OpenAI’s Secret AGI Definition”, The Information, December 2024.
This METM25 presentation was chronicled by Mary Savage.
Featured photo by METM25 photographer Julian Mayers. Slide reproduced with presenter’s permission.
Interesting and informative review, Mary. Thank you! I will certainly look up the slides.
Thank you, Fiona. I could have written so much more, but suffice it to say that Kyle’s talk was like a good book or film: it stays with you and you see bits of it reflected in daily life. Critical voices are everywhere now, thank goodness!