The Python Software Foundation (PSF) recently announced that they have accepted a $1.5 million sponsorship from Anthropic. The funds will be used to "make progress on the security roadmap" as well as toward "the PSF's core work."
I've made no secret of my disdain for the stated goals of industry leaders and venture capitalists in the name of advancing AI technologies.
But, in spite of the long sigh the optics forced out of me, I'm encouraged by the prospect of a healthier PSF and the work they have been doing.
Nadia Nadesan & Digit / Deceptive Dialogues / Licensed under CC BY 4.0
Track Record
The amount exactly matches the grant the PSF previously rejected from the US government.
That move was widely applauded by the community.
Why?
Because the government grant came with strings attached. Per the PSF's announcement:
These terms included affirming the statement that we “do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI, or discriminatory equity ideology in violation of Federal anti-discrimination laws.”
"In the end," reads their statement, "the PSF simply can’t agree to a statement that we won’t operate any programs that “advance or promote” diversity, equity, and inclusion, as it would be a betrayal of our mission and our community."
While yes, this was a laudable action to take, it should not be surprising.
The PSF (and let's be real here, it's not an abstract organization making these decisions, but real human beings) was merely staying true to their mission statement: protect the language and grow its community.
Even with the Anthropic sponsorship in the background, the PSF has demonstrated, both recently and in the past, that they value their community above all else.
Sure, it's not always perfect.
But at least they try to be better and are willing to be held accountable.
There's also, briefly, the legal angle: the framework governing a charitable organization prevents it from being operated for the benefit of private interests.
Of course, this can be and is abused by other 501(c)(3) organizations, but such abuse happens outside the confines of the law, at the wishes of a governing body.
As Christopher Neugebauer (PSF Officer, Vice Chair) points out, with complete sarcasm, on Mastodon:
"a foundation that is legally prevented from giving commercial consideration to donors and has an excellent track record of not giving commercial consideration to large donors will certainly be corrupted by this particular donation"
(Do no) Harm
I still don't think enough is being done to curb the harm perpetuated by tech companies, especially those operating under the moniker of AI.
For example, in late 2024, Palantir announced a partnership with Anthropic to bring Anthropic's technology into Palantir's intelligence and surveillance platforms.
While the $1.5 billion settlement Anthropic agreed to pay for copyright infringement seems to have doused public anger at the company's razing of copyrighted works, it is still obviously a sore point for many of those affected.
Even if you buy the judge's reasoning in that case (that Anthropic's usage constituted fair use), very little is being done (perhaps even the opposite) to acknowledge and compensate for the value that published works bring to these frontier models.
(There's a new round of lawsuits alleging that Anthropic used pirated books for its training data.)
It seems almost vulgar that, at a moment when AI poses a threat to vulnerable individuals, the same company would be releasing a Healthcare app.
These, to me, are just some surface-level issues, easily observed in news stories from varied sources.
Tech companies do not seem to be held to any meaningful standard of risk assessment, transparency, and accountability.
But going back to the original point.
Does the fact that the PSF is accepting sponsorship from Anthropic mean that they are complicit and/or in part responsible for sane-washing the harms that are being perpetuated?
I don't think so. But that doesn't mean that trouble isn't brewing.
In or Out
I believe a greater rift is forming within the F/OSS community at large.
Simon Willison blogged about whether he sees LLM technology as antithetical to open source work.
Musing about whether some open source contributors are so incensed at the current situation that they may consider no longer keeping their code open, he opines:
I’ll be brutally honest about that question: I think that if “they might train on my code / build a derived version with an LLM” is enough to drive you away from open source, your open source values are distinct enough from mine that I’m not ready to invest significantly in keeping you. I’ll put that effort into welcoming the newcomers instead.
This framing is unfortunate, as it sets up a psychologically (and communally) damaging dynamic of "in group" vs. "out group."
And obviously, this is not a one-sided view. Many detractors have also drawn a line in the sand and refuse to be complicit in what they see as an insult to "openness" as we know it.
Another unfortunate part of Simon's speculation is that in the very next section he concedes that the issue of copyright is not even settled yet, though his hunch is that he exerts "enough creative control" in directing models to count as human intervention.
This might be true in his case, though I highly doubt this is true in most cases.
On A Sidequest
Other big news on this front comes from antirez, creator of Redis.
(This is admittedly a complete sidequest from my initial premise. Apologies.)
A couple of days ago, he published a post saying, "Don't fall into the anti-AI hype".
This is a bit of a mind-bender for me.
What exactly is the anti-AI hype?
Do you mean you want me to stop talking about the harms that are being committed by the tech companies?
Do you want me to stop worrying about whether humans will be held accountable for harming other human beings?
Do you mean that the oligarchs and VC bosses shouldn't be scrutinized for their words and actions?
Like what the actual fuck?
Oops, sorry, got a little emotional there.
But seriously, you're asking a bunch of workers to stop pushing back against a massive campaign wrought by billionaires because you're concerned that if we don't buy in, we'll put our careers in jeopardy?
And for what?
Let's take a look...
It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too.)
As in, you think that it will be "more interesting" to use non-deterministic systems in heavily orchestrated ways by mimicking managerial commands? That is somehow more interesting than what I'm doing now? Says who?
It does not matter if AI companies will not be able to get their money back and the stock market will crash.
It does not matter to whom? To you? Or to the people who will suffer because of it?
As a programmer, I want to write more open source than ever, now.
Oh really? For whom? For other humans, or for the frontier models to ingest?
But I also look forward to the good AI could bring: new progress in science, that could help lower the suffering of the human condition, which is not always happy.
This statement makes me very sad. Ignoring the harms of today for an unproven promise of tomorrow... Even if there are good intentions in this statement, there is a whole lot of gold paving going on.
But really, what bugs me most about this post is how myopic it reads.
It talks about anti-AI hype with blinders on.
I hesitantly concede that the technology is getting good at what it does, but the detractor in me is not arguing about its effects on productivity.
(Caveat: I do believe the prior paragraph is arguable, but that's a whole other topic.)
This is about the social, political, and economic impacts that this technology is already having.
Seriously, if you're fed up with the anti-AI hype, you're just sounding like Nadella or Huang. Is that the company you want to keep?
On The Money
Big tech has a horrible track record.
Anthropic won't be the first questionable company to sponsor the PSF.
I have tremendous respect for the PSF and those in leadership who make the tough decisions, particularly with respect to funding.
I think the PSF can still stay true to their mission (protecting the language, growing the community) while utilizing the funds promised by these enshittified companies.
And I'm not saying this due to blind hope, or just because I happen to think well of the people in charge.
I think back particularly to PyCon US last year, where the invited guest, Cory Doctorow, presented his keynote directly after the sponsorship presentations from Google, et al. Sure, there was cognitive dissonance, but the organizers knew full well that Doctorow would not hold back in delivering his message eviscerating said sponsors.
Even so, I understand that it does take a certain amount of faith in leadership to believe that some sort of ethical stance can remain intact, in spite of these seemingly incongruous facts.
In my view, Anthropic's investment in the Python ecosystem does not absolve them of their responsibility to us, their customers at large, or the countless creators/writers/artists still left begging for scraps from the table.
Luddite
I recently saw some pushback against the anti-AI folks, condescendingly using the term luddite as part of their argument.
This seems strange, considering that the luddite movement was always a pro-worker movement.
We are past the point where we could have reduced the irreversible harm these tech companies have created through their reckless, nonconsensual pillaging.
If we are to live in this future of unbound productivity wherein mostly the privileged (read: white men) will benefit, then at least embrace the luddites in their fight against digital imperialism.
As I was finishing this, I noticed this statement by Yarn Spinner.
TL;DR: AI companies make tools for hurting people and we don’t want to support that.
What I find somewhat amusing is that in the last entry of their FAQ, they answer the question, "Are you zealots or luddites who just hate AI?"
While they claim that they are not, I beg to differ.
This kind of statement is luddite through and through. And even if you dislike all the anti-AI hype, you should at least honor the pro-worker spirit behind it.
If not, we'll be quibbling amongst ourselves while we watch oligarchs and fascists control the narrative.