What Does It Take

Jagged Edges

February 26, 2026

Recently, I mused over what it would take, from my perspective, to significantly change my view that the tech industry's infatuation with non-intelligent "intelligence" is a net-negative for society.

I decided to test those boundaries.

Previously, I wrote about the internal cost/benefit analysis each person makes (whether consciously or not), balancing the alleged harms against the perceived benefits of technological adoption.

Seeing as this is a self-indulgent, inward exercise, I am defaulting to my own understanding of reported harms, and I'll try to see what would be the minimum viable alternative to tip the scales in the other direction.

Follow The Leader

I'll start at the top.

The leaders of this technology are categorically unethical and detached from society, and I believe their leadership is taking us into a xenophobic future only fit for technocrats subsisting off of slave labor.

That may sound alarmist, but I really don't think I have to look far to prove why I think so.

This is one of the primary pillars that would need to crumble (though hardly the only one)—a complete dismantling of the existing power dynamics.

This could happen through anti-monopolistic legislation and hyper-aggressive taxation of billionaires.

Existing companies like Meta and Google would need to be broken up, divesting the operation of "AI" technologies into separate organizations.

Ideally, there would be several organizations incorporated to tackle different facets, such as infrastructure, research, data gathering, reinforcement/refinement, B2B, and retail.

Companies like Anthropic or OpenAI, as they are today, would cease to exist. They could either return to non-profit research or work exclusively on product, but they would lose control of models and datacenter operations.

Without that, we have to buy into the idea that technologists are the face of progress, and the rest of society follows along like blind lemmings.

Technologists and industrialists are the least equipped to know (or care) what is good for our world. Their ethics are warped. Their egos are gluttonous. And their ambitions are delusional hallucinations.

Just look at the most recent news out of Anthropic, often hailed as one of the more socially responsible of the LLM-hungry organizations. They once pledged never to train a model/system without guarantees that the company's safety measures were adequate to address any potential pitfalls.

On their decision to abandon their much touted Responsible Scaling Policy (RSP):

“We felt that it wouldn't actually help anyone for us to stop training AI models,” Anthropic’s chief science officer Jared Kaplan told TIME in an exclusive interview. “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”

Blazing ahead indeed.

Regulation

This next section is nearly impossible to write about meaningfully without barreling in many different directions.

That is because, as should be plainly obvious, Artificial Intelligence is not one thing.

Hence, it is nearly impossible to think of blanket regulations and/or solutions to the different harms.

But here's where I'd start.

Branding and Advertising

I would look for laws or policies that require more accurate branding specific to the application of any kind of LLM-based technology.

Using language like "now with AI" or "AI-powered" should be penalized.

At best, that sort of language obfuscates the utility of any given product, and at worst, it purposely misleads consumers who misunderstand a particular dynamic.

For example, look at this blogpost that talks about "AI-powered pricing," which is a euphemism for predatory and exploitative pricing strategies based on surveillance tech that often violates a user's privacy.

If your product reads "Now with AI," it should be clear what that means.

"Now with an artificial and unmoderated chat interface."

"Now with algorithm-based suggestions based on hidden user data."

"Now with desperation-based pricing."

Yes, I know, those are cynical product epithets dreamed up to drive a point home.

My major gripe here is not limited to LLM-based products (see "New and Improved"), but that does not mean that we shouldn't demand greater transparency of these features.

Deceptive designs that profit off of anthropomorphism, as well as dark patterns used to gather private data, should be outlawed. (This would have the added benefit of also crippling the predatory ad-tech industry.)

Workers' Rights

If we could get to a place where the companies are more explicit about their LLM-based products, they should also have an obligation to disclose the manner of data gathering and refinement.

Large Language Models do not magically emerge from within the belly of a datacenter.

Not only is there a methodology to how the data is acquired, but the data must also be massaged and sanitized before it is extracted and consumed.

There are already reports of human rights violations experienced by laborers who participate in a model's RLHF (reinforcement learning from human feedback).

They are tasked with labeling and moderating disturbing content so that Western audiences are protected from the horrors. They do so for extremely low wages, terrible working conditions, and a toll on their mental health and real-world relationships.

The industry's refusal to abate this trend, as well as consumers' willful ignorance of this harm, is shameful as is.

In order to counteract this ongoing abuse, I would need to see a transparent attempt to compensate the "humans in the loop" with salaries commensurate to the tasks that they are asked to perform, as well as benefits for any mental health strain or other risks associated with these tasks.

Enforcing fair labor practices, not just for Western workers, but for workers worldwide should be standard.

This would be extremely costly, but it's laughable that, with valuations and investments nearing the trillion-dollar mark, these harms are not being mitigated at all.

In addition, workers should be allowed to unionize and be given collective bargaining rights, as corporations always seem to fall back on exploitation of labor.

Education

As far as adopting LLM tooling in the educational sector, I would like to see a focus on tools that are tested and built specifically for learning.

To think that technologists know what classrooms need is both laughable and irredeemably shortsighted.

Engineers often build tools with an idea of how the world works, and usually that lens is tainted by a culture of investment and return.

Instead of building and releasing an "everything" chatbot into the classroom—a so-called "tool" that has, in many cases, already created irreversible damage to children (and adults)—it would be much more productive to see educator-led programs that draw on experts outside of the tech sector.

And really, much of the "problems" that tech companies are trying to solve in the classroom through technology are not technological problems. They are resource problems.

If these tools are to be introduced into the classroom at all, it should be done as slowly as possible, and to fulfill specific needs without a profit motive.

There should be pilot programs that are led by knowledgeable researchers. There should be buy-in and consent from parents and guardians. There should be a heavy set of guardrails to protect this vulnerable population of young students.

Mind you, I can definitely see how these tools can (and perhaps should) be used for individuals with accessibility needs. But if these tools are not built specifically for those users, it leaves them exposed and at greater risk.

Explicit regulation should prevent for-profit companies from proliferating their tools into the educational sector without any form of oversight.

Environment

People skeptical of big tech's obsession with growth point to the high environmental costs associated with datacenter buildouts. While there are voices that tend to temper and downplay those threats, there are still troubling aspects to keep in mind.

For example, there is an undeniable impact that these buildouts are having on local communities, and when big tech money is entangled with existing utility monopolies—especially in rural or poorer communities—it is the residents who end up losing.

Without proper regulatory enforcement, these small communities are left to fend for themselves.

In some cases, citizens are being strong-armed into submission.

Instead of alienating these communities, independent third parties should be allowed to accurately measure the environmental impact of future buildouts. (It goes without saying that it must be proven without a shadow of a doubt that these third parties have no conflict of interest.)

Even if we follow the argument that datacenters are on par with existing cloud infrastructure (as it relates to environmental impact), it is imperative to clearly understand these costs when the scope of future growth seems limitless.

Mostly, oligarchs and technologists should not be the ones making decisions as to whether the environmental impact is significant or not.

These decisions should be based on best available science and consensus from the scientific community—not from profit-driven corporations and lobbyists.

Scams and Slop

Scams and trash content are not caused by language models.

They have always existed, and they will continue to exist with or without the help of tech companies.

However, the scale and impact facilitated by the text-extruding machines is nearly unmanageable.

Just today, I read a thread on Mastodon from authors bemoaning the existence of scam book clubs, designed to defraud authors at scale.

Also, again today, I saw this article about deepfake videos depicting Australian immigrants in a disparaging light, and thus promoting anti-immigrant narratives.

These are just tiny blips within a catastrophic deluge of scams and slop in all facets of media and beyond. News, ads, social media posts, videos, scientific papers, books, stories, software, emails, resumes, recipes, reviews, porn, and just about any other medium you can imagine.

The problem with this section is... I'm not even sure what the solution is.

Since I'm trying to imagine what it would take to tip the scales, I suppose it would be a world where scammers and slop hurlers were slowed down one way or the other.

Accountability could be one way (penalizing the slop hurlers), but Section 230 might make this difficult (and weakening the law would be a worse outcome for other reasons).

Having better detection tools might help fight back, but this kind of cat and mouse game is untenable.

Maybe if this was the only vector/problem area, I might be willing to see if humans can adjust to this new reality of fire-hose slop.

But even then, it's a hard pill to swallow. The proliferation of motivated bad actors can disrupt and distort how people see the world, how police and armed forces carry out violent actions, or how government officials peddle it to elicit political extremism.

CSAM

Child sexual abuse material is a deplorable reality, whether LLM-aided image manipulation exists or not.

The burden of responsibility to attack this social failing should not solely fall on the tech companies.

It can be argued that cameras used to create this kind of content are not themselves responsible for said images/videos. Hence, using LLM-based image generators to create abusive content does not necessarily mean that the image generators themselves are to blame.

Yet, there are very strict laws concerning the generation and possession of such content.

And one would hope that the tech companies that facilitate the generation of CSAM would be extremely eager to discourage, prevent, or disallow the creation of said content.

In the case of some companies, they're not only being passive about this, they are actively encouraging it.

What's that? It's not easy to prevent or disallow because of the nature of generative systems?

Fuck that.

How is it that these models are able to realistically undress unsuspecting subjects, by the way?

Here's a clue. Amazon found a high volume of CSAM in their training data. Any other person in possession of CSAM content would be facing criminal charges. Corporations (and those who run them) should be prosecuted in the same way.

Copyright

This is a far more complex topic, and I am not at all claiming that my view here is "correct" by any means.

But it matters nonetheless in terms of how I feel about these technologies, so the section stays.

While the Electronic Frontier Foundation (EFF) is in favor of allowing tech companies to use copyrighted works to train LLMs (under fair use), this is not necessarily an opinion held amongst all artists, authors, and other creators. (Bartz v. Anthropic proves the friction with authors, for example.)

I tend to agree with the EFF that expanding copyright would not necessarily help creators as much as they would like. Historically, copyright has allowed powerful companies to exert more control.

Additionally, restrictions on public works (like, say, a person's blog) in order to curtail model scrapers might actually be limiting to the open web or those with special accessibility needs.

However, I would like to emphatically reject the idea that this gives the tech companies free rein over anything and everything within their sinister reach.

I'd like to illustrate with a metaphor which, while not entirely satisfactory as proof of my point, I hope communicates my general feelings about this.

If I go into a store, minding my own business as I do some casual shopping, I tend to think that hardly anyone notices whether I have messy hair (I usually do) or if my shoes are untied (they usually are). I don't have anything to hide, per se, and I'm happy to chat with an acquaintance if they so happen to be walking down the same aisle.

However, if I found out someone was following me from the moment I walked in the store to the moment I walked out—keeping track of my hair, my shoes, my clothes, what I purchased, what I put back on the shelf, who I talked to, for how long and about what, and if I smiled at the cash register—I would never go shopping there again.

Data scrapers are this, but in reverse. They are actively coming into our spaces and pillaging anything and everything they can (as it relates to data). I don't care if it's fair use or not.

I generally despise this practice.

I should reserve the right to broadcast only to those I want to broadcast to. To those who respect my boundaries.

Maybe our laws are not equipped for this moment, and copyright is toothless against this onslaught.

But unless something changes, existing in a world where tech companies have access to my content (or any content) and can swallow it up wholesale without explicit consent is utterly demoralizing.

A Few Other Things

Damn, this is getting way longer than I thought, so I'll do this as a disjointed rapid fire round...

The financial gymnastics that this ouroboros circus (aka big tech) is putting on are absurd and depressing.

I suppose if the initial pillar were toppled (dismantling the companies led by hallucinatory oligarchs), then maybe this would be less of an issue? Perhaps.

What else?

The shift from software engineering as we know it into some sort of higher abstraction pseudo-programming based on natural language that resembles middle-management more than anything—I'd likely find very little enjoyment out of that.

If that's the way of the future... pass.

Software security is a concern, though one I am very ill-equipped to talk about with any sort of authority. I generally don't feel confident in LLM-generated code—so there would need to be strong messaging and demonstrations of actual security to sway me in the other direction.

And lastly, LLM-text extruders value speed over craft... quantity over quality. Even if this isn't explicitly a goal, it is diluting the essence of what it means to be an individual.

When I moved to the US from Honduras, as a little boy, I was embarrassed by the pronunciation of my name. I pined for blandness. I longed for being average. I wanted the soft "R".... arrrrrr....

These days, I think it sounds odd when someone pronounces my name in Spanish.

This is a sadness, and just a small example of how bad it will get.

It will be 100 times worse with the proliferation of semantic ablation.

For those who think LLM tools will allow people to be more creative, as they'll have more power and resources at their disposal—I'd say just take a look at how they're being used now.

Create bland essays. Answer bland emails. Write bland README docs. Produce bland code.

There are no sharp edges.

Whatabout

In many conversations I've had or witnessed on this subject, it's easy to fall into whataboutisms in order to dilute any particular argument.

For example, it's nearly impossible to achieve technical purity given the current big tech landscape.

Whether it's your phone, computer, operating system, online shopping platform, TV streamer, music player, and on and on—it's quite easy to find contradictions.

And you know what, those criticisms are valid.

But, as I've been advised about software or art or any other endeavor, don't let perfect be the enemy of good.

One of the reasons my criticism of this specific industry feels so pressing is precisely because it is largely nascent—and it would be "easier" to curtail the damage earlier rather than later.

And since this post is mostly introspective, I'm not as heavily concerned with the contradictions as much as I am with a vision of how this all could be slightly better.

My challenge to folks on the other side of the divide...

What about you?

What is the worst thing that could possibly convince you that buying into LLM-usage is perhaps not all it's cracked up to be?

What if it deletes all your email? Or your hard drive?

Is it a self-serving metric, like how expensive the privilege of using a tool is? If the companies charged you $500/month or $1,000/month, would you say no?

Are there other factors that might sway you outside of how useful (or useless) it is to you?

You don't have to tell me... It's very unlikely that you'll change your own mind.

But I do think that if we spend some time on self-reflection, we might find some of those edges—and maybe within that space, we might find something we can agree on.

The Future

As I conclude, I want to very briefly touch on a subject I've alluded to in previous posts but haven't tackled directly. I won't fully do so now, either, but I thought I should at least acknowledge it.

Much of the counterweight against many of these harsh realities comes from visions of progress, and what it means for future generations.

I'm unsure if anyone can make an argument that any current usage of LLM-powered tools is worth all the harm the industry has produced and is producing at the present moment.

At best, maybe individuals are producing faster—although it's unclear what that means and how good (for society) this actually is.

Even if it were working for me, and I created 10 apps in the span of one month that suit my own personal needs—would that change the equation for me?

No. Not with the way I see things.

But what if thousands of people are out there producing their own personal applications that make their lives easier, more comfortable, or more productive (whatever that means)?

Still, that's a "Nope!" from me.

That's just my vibe, though. I don't know if all those small experiments are going to make the world a better place.

From what I understand, however, the real counterbalance is not in what's here now, but in what is promised.

That is at the heart (or at least at the message) of what many of the tech oligarchs are constantly going on about.

Sam Altman's cure for cancer (or whatever).

Dario Amodei's "Machines of Loving Grace," promising machines "smarter than a Nobel Prize winner."

(I get bored finding and citing references like this, over and over again...)

In their vision of the future, maybe investing in these systems tips the scales. The ends justify the means. The future of humanity is saved through our sacrifice of enduring through the Age of Slop.

It's not disingenuous to hope for a better tomorrow.

That is why we are concerned about climate change. That is why we plant trees.

But I'm not swayed by an ethics that values an unknown future, no matter the cost. Technologists make poor visionaries. Their prophecy is not one I believe in, nor one that I would ever want fulfilled.

I'll likely have more to write about this at a later time.

If that future ever arrives, that is... 😁