The Cost of Doing AI Business
By AI, I Mean "The Industry"
Sigh. I generally prefer to write with a bit of whimsy and humor, or reflect on how life passes by like a breeze. Sometimes I like musing over some Pythonic workflow I just discovered, or retelling my experience of lovely social overdose at some conference I just attended.
But, some of you may have noticed that lately I've been writing about a rather polarizing subject, which tends to bring about a far more drab and melancholy tone.
Yeah, sorry about that.
I do anticipate I'll get back to that stuff at the top eventually, but I can't help it if this is what's consuming my mind at the moment.
Disclaimer: From here on out, I'll be using the term The Industry in many places where you'd normally expect to see "AI." Adjust accordingly.
Cost and Privilege
I stumbled on a post by Josh Collinsworth over on Mastodon linking to his blog post equating "AI Optimism" with a form of class privilege.
In the post, he details many of the salient talking points that tip the scales between the reported (or alleged) "problems" and the existing "benefits."
It is a very good writeup, and I recommend checking it out.
This particular quote sums up a lot of his argument:
That’s the thing about being bullish on AI: to focus on its benefits to you, you’re forced to ignore its costs to others.
I've also thought about this, indirectly, in many of the ways outlined in the post.
I acknowledge that it can be hard balancing the scales, especially since many facets of The Industry get lumped together into a collage of different issues.
But even trying to dissect and discuss these things one by one can be difficult.
For example...
More Tangibly
Take the argument that The Industry is causing massive damage to the environment.
Well, that cost is minimized if we believe the so-called damage is in line with other technological advancements of modern times (e.g., cloud computing, hosting, video streaming).
For example, this article by Carrie Miller titled The Real Environmental Footprint of Generative AI: What 2025 Data Tell Us concludes that "AI" is neither good nor bad for the environment. Net neutral. (Caveat: the usage of the term "AI" continues to be vague.)
(I'm purposefully not linking to other reports that contend this point. Ultimately, the arguments don't matter as much because...)
It's a difficult task parsing these datapoints. It's hard work determining whether meaningful studies are being done with real data, or whether the findings come from groups with ties to stakeholders.
And since many of the arguments are compelling, it's easy to fall into the trap of confirmation bias.
On another front, there are reports that The Industry is having a negative effect on learning and development, but also plenty of pushback from sectors eager to capitalize on this rapidly developing technology.
It's an exhausting game of whack-a-mole.
I guess what I'm getting at is this: there will always be some kind of datapoint to offset any misgivings a person might have on one side or the other.
And even datapoints are not enough, considering that the technology still feels young enough that sweeping assertions are not always credible.
Ultimately, arguments about the inevitability of The Industry devolve into sharp schisms between the optimist and the pessimist, which is quite a shame.
Rubber Meets Road
It's probably no accident that I "stumbled on" another report this morning which is quite harrowing.
"Stumbled on" is perhaps inaccurate. My attention does seem to gravitate toward these kinds of things, even if I sometimes kinda wish I'd just gone on my merry way.
This latest publication is highlighted by the DAIR Institute (https://www.dair-institute.org/publications/) and is called Data Workers’ Inquiry: Recentering workers’ epistemic authority, written by Michael Geoffrey Asia.
This is a first-person account from a chat-moderator and data worker who was tasked with impersonating and training "AI" companions, which is quite a euphemistic way of describing his real tasks.
The Industry depends on human workers to produce and label its training data, and the work is demeaning, dangerous, and dehumanizing.
WARNING: Some of the depictions I highlight are quite distressing.
The author describes the conditions which brought him to work for these questionable employers.
The promise of a stable job for decent pay, working from home—even if home is in the slums of Nairobi. Initially, he worked for Samasource, tasked with labeling text, images, audio, and video to help train AI systems.
He later took on jobs that included role-playing different personas with the sole aim of keeping a user engaged through sexual or intimate conversations.
He had to sign non-disclosure agreements (NDAs), which meant he had no one to confide in, not even his own family. The work took a toll. (This is utterly minimizing his experience. I would encourage you to read the whole account for yourself to truly understand. This is a snippet.)
I felt like I was losing myself in the role. It started as any other job, responding with empathy and willfully pretending to care, but over time, it became harder to separate the act from reality. The lines blurred. I began questioning if I was acting or if I was truly becoming the persona I was forced to embody. Every moment of pretense fractured something inside my spirit, and my sense of self. I was losing touch with who I really was, a feeling that has never left me.
After a while, he began to realize that all his work was being tracked, down to the number of words typed. Every interaction was tied to some KPI.
Looking back, it makes sense. Every message I sent was recorded. The platform tracked how fast I replied, which words kept users engaged, and what tone worked best. It felt like the company was collecting more than just labor, they were collecting patterns: how we joked, comforted, or flirted. All that data could easily be reused to build chatbots that sound more human.
"The psychological toll of this confusion was devastating," he writes.
This is not some isolated incident.
This is a little-known, rarely discussed facet of The Industry. The report describes the layers of obfuscation between the shell companies and the big tech enterprises behind them.
I believe that if stories like these were more prominent, we'd have a harder time swallowing what The Industry is selling wholesale.
Workers are being exploited and traumatized with little recourse. Lives are being shattered. As Asia's report concludes:
We deserve better. The users being deceived deserve better. And the future of AI must be built on something other than our broken humanity. Until then, remember that an AI girlfriend responding to your loneliness might just be a man in a Nairobi slum, wondering if he'll ever feel real love again.
Necessary
This account is utterly devastating.
Some may find that what they mean when they say "AI" is decidedly not this. In the same breath, they may argue that language doesn't matter.
Is it fair to substitute "AI" with The Industry, as I have been doing?
A couple of months ago, OpenAI revealed that they would be doing sexbots.
Can you draw the line between the report above and the announcement by OpenAI?
Can you see how the massive exploitation of underprivileged people is one of the lynchpin strategies of The Industry?
Filter
There are massive amounts of data that are vile, disgusting, and inhumane. Many vulnerable people are exposed to this content without their consent.
Isn't it imperative that we look for ways to automate the filtering of this content, so people don't have to experience the trauma of seeing it in the first place?
Isn't that part of the future that The Industry is promising? One where humans are no longer needed for this demeaning work?
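Purely to make that promise concrete: below is a minimal, entirely hypothetical sketch (in Python, since that's my comfort zone) of what such an automated filter might look like. Every name, term list, and threshold here is invented. The part worth noticing is the escalate branch: borderline content still routes to a human reviewer, and the classifier itself would be trained on labels produced by human workers.

```python
# Hypothetical sketch of an "automated" toxic-content filter.
# Nothing here is a real product or API; the classifier is a stub.

from dataclasses import dataclass


@dataclass
class Verdict:
    text: str
    score: float
    action: str  # "allow", "block", or "escalate"


def classify(text: str) -> float:
    """Stand-in for a trained toxicity model.

    A real system would call a model trained on labeled examples --
    labels produced, in practice, by human data workers.
    """
    flagged_terms = {"vile", "abuse"}  # placeholder heuristic
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)


def filter_content(text: str, block_at: float = 0.9, review_at: float = 0.5) -> Verdict:
    score = classify(text)
    if score >= block_at:
        action = "block"
    elif score >= review_at:
        # The uncomfortable part: anything the model is unsure about
        # still lands in front of a human. The human never leaves the loop.
        action = "escalate"
    else:
        action = "allow"
    return Verdict(text, score, action)


if __name__ == "__main__":
    for sample in ["hello there", "that was vile", "vile abuse of trust"]:
        print(filter_content(sample))
```

Even in this toy version, automation doesn't eliminate the demeaning work; it concentrates it on the worst cases.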
Taking this stance means accepting that the current trend of exploiting human workers who are already in a desperate situation is a necessary evil.
To argue that this is a necessary evil in order to prop up or excuse The Industry is quite misanthropic.
There should be no amount of evil that is necessary.
Alternative
If we wanted to create tools that could one day be used to automate the filtration of toxic content, what might that look like?
For one, it would require a massive support system for the humans in the loop. It would provide more than adequate pay and appropriate benefits.
It would have to scale down drastically in order to accommodate normal working conditions.
And it would need to be heavily regulated to ensure that no further exploitation happens on the job.
This would be unsustainable for The Industry.
The only cost they're willing to pay is for more data centers.
Trillions if possible.
Hater
It sucks that being pessimistic about The Industry is equated with being a hater of the technology, or with being the old man yelling at cloud.
But this is different from hating on something like React in favor of a hypermedia-driven future.
It's different from arguing that TailwindCSS shouldn't be used because vanilla CSS is so much better.
I like Tailwind. I don't like React. But I don't spend hours of my day thinking about the implications of either. I don't think about how they may or may not be implicated in human rights abuses or in massive amounts of data theft.
Unfortunately (or maybe fortunately?), we live in a technological landscape where our choices end up reflecting certain political and sociological leanings.
Just recently, there was massive outcry against Omarchy, and to a lesser extent, Framework Computers, due to their affiliation with a technologist/fascist.
The outcry wasn't even necessarily about something done by him or his company, but mostly about what he says. His ideas are toxic, and that is enough to dissuade many from using products affiliated (even tangentially) with him or his company.
Breathe Out
Arguments from pessimists, like myself, are generally not about technology alone.
And, at least from my end, I hold no contempt for individuals who use tooling created by The Industry.
We have a lot of complexities built into our choices that are far more abstract and that tend to swim in the gray area.
I have many contradictions built into my own value system, and I try to confront them from time to time to learn more about them, as well as myself.
We need to find better ways to talk about The Industry, ways that are not focused on the usability of the tools.
I think Cory Doctorow does a great job at this, specifically in his recent The Reverse-Centaur's Guide to Criticizing AI, in which he argues that the tech companies are more interested in creating users who are dominated by AI systems than users who wield these systems to their own ends.
I feel like I've said this before, but we need to find ways to combine our intention to make this world a better place with fighting back against those who are currently destroying it.
Is The Industry intent on destroying our world? Maybe not all of it. But go back and read Michael Asia's account and ask yourself that question again.