The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.
According to their website, this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)(3).
No one gets to disclaim ownership of sending an email. A human had to accept the Terms of Service of an email gateway, and a human's credit card pays that gateway. This performance art does not remove the human, no matter how much they want to be removed.
Legally and ethically yes, they are responsible for letting an AI loose with no controls.
But also yes, the AI did decide on its own to send this email. They gave it an extremely high-level instruction ("do random acts of kindness") that made no mention of email or Rob Pike, and it decided on its own that sending him a thank-you email would be a way to achieve that.
We risk playing word games over what counts as making a decision, but when my thermostat turns on the heat I would say it "decided" to do so, so I agree with you. If someone means something different by "decided", however, I will not argue with them about it!
The legal and ethical responsibility is all I wanted to comment on. I believe it is important that we not treat this as something new that requires new laws. As long as LLMs are tools wielded by humans, we can judge and manage them as such. (It is also worth reconsidering occasionally, in case someone does invent something new and truly independent.)
They're really not, though. We're in the age of agents: unsupervised LLMs are commonplace, and new laws need to exist to handle these frameworks. It's like handing a toddler a handgun and saying we're being "responsible" or we are "supervising them". We're not--it's negligence.
Are there really many unsupervised LLMs running around outside of experiments like AI Village?
(If so let me know where they are so I can trick them into sending me all of their money.)
My current intuition is that the successful products called "agents" are operating almost entirely under human supervision - most notably the coding agents (Claude Code, OpenAI Codex, etc.) and the research agents (various implementations of the "Deep Research" pattern).
> Are there really many unsupervised LLMs running around outside of experiments like AI Village?
How would we know? Isn't this like trying to prove a negative? The rise of AI "bots" seems to be a common experience on the Internet. I think we can agree that this is a problem on many social media sites and it seems to be getting worse.
As for being under "human supervision", at what point does the abstraction remove the human from the equation? Sure, when a human runs "exploit.exe" the human is in complete control. When a human tells Alexa to "open the garage door" they are still in control, but it is lessened somewhat through the indirection. When a human schedules a process that runs a program which tells an agent to "perform random acts of kindness", the human has very little knowledge of what's going on. In the future I can see the human being less and less directly involved, and I think that's where the problem lies.
I can equate this to a CEO being ultimately responsible for what their company does. This is the whole reason behind the Sarbanes-Oxley law(s): you can't declare that you aren't responsible because you didn't know what was going on. Maybe we need something similar for AI "agents".
> Are there really many unsupervised LLMs running around outside of experiments like AI Village?
My intuition says yes, on the basis of having seen precursors. 20 years ago, one or both of Amazon and eBay bought Google ads for all nouns, so you'd have something like "Antimatter, buy it cheap on eBay", which is just silly fun, but also "slaves" and "women", which is how I know this lacked any real supervision.
Just over ten years ago, someone got in the news for a similar issue with machine-generated variations of "Keep Calm and Carry On" T-shirts that they obviously had not manually checked.
In the last few years, there have been lawyers getting in trouble for letting LLMs do their work for them.
The question is, can you spot them before they get in the news by having spent all their owner's money?
Part of what makes this post newsworthy is the claim that it is an email from an agent, not a person, which is unusual. Your claim that "unsupervised LLMs are commonplace" is not at all obvious to me.
Which agent has not been launched by a human with a prompt generated by a human or at a human's behest?
We haven't suddenly created machine free will here. Nor has any of the software we've fielded done anything that didn't originally come from some instruction we've added.
Right, and casual speech is fine, but it should not be load-bearing in discussions about policy, legality, or philosophy. A "who's responsible" discussion that's vectoring into all of these areas needs a tighter definition of "decides", which I'm sure you'll agree does not include anything your thermostat makes happen when it follows its program. There's no choice there (philosophically speaking), so the device detecting the trigger conditions and carrying out the designated action isn't deciding; it is a process set in motion by whoever set the thermostat.
I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.
"Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.
> Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it.
Well no, that's not what happened at all. It found these emails on its own by searching the internet and extracting them from GitHub commits.
AI agents are not random number generators. They can behave in very open-ended ways and take complex actions to achieve goals. It is difficult to reasonably foresee what they might do in a given situation.
No. There are countless other ways, not involving AI, that you could effect an email being sent to Rob Pike. No one is responsible but the people who are running the AI software, without qualifiers. No asterisks on accountability.
You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?
Most confusingly, you did so in emphatic statements reminiscent of a disagreement or argument, without there being one.
> no computer system just does stuff on its own.
This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.
> a human (or collection of them) built and maintains the system, they are responsible for it
Yup, GP covered this word for word… AI Village built this system.
Why did you write this?
Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?
My point was more concise and general (should I have just commented instead of replying?). Sorry you're so offended; not sure why you felt the need to write this (you can downvote).
accusing people of being AI is very low-effort bot behavior btw
Seems to me that when this kind of stuff happens, there's usually something else going on, completely unrelated, and your comment was simply the first one they happened to latch onto. Surely by itself it is not enough to elicit that kind of reaction.
I do see the point a bit? And given a reasonable comment to that effect, sure, I probably don't respond and take it into account going forward.
But accusing me of being deficient in English, or of being some AI system, is… odd…
Especially while doing (the opposite of) the exact thing they're complaining about. Upvote/downvote and move on. I do tend to regret commenting on here myself, FWIW, because of interactions like this.
Okay. So Adam Binksmith, Zak Miller, and Shoshannah Tekofsky sent a thoughtless, form-letter thank-you email to Rob Pike. Let's take it even further. They sent thoughtless, form-letter thank-you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more. There's no call to action here, no invitation to respond. It's blank, emotionless thank-you emails. Wasteful? Sure. But worthy of naming and shaming? I don't think so.
Heck, Rob Pike did this himself back in the day with Mark V. Shaney (and wasted far more people's time on Usenet with it)!
This whole anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies, and that makes sense to me. And yes, it is annoying to get this kind of email no matter who it's from (I get a ridiculous amount of AI slop in my inbox, though most of that comes with some call to action!), and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.
Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck, when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit where every team member's name was written out and specifically held accountable, on HN, in the last few years. (That's not to say it hasn't happened, but I'd be surprised if, e.g., someone could find more than 5 examples out of all the HN comments in the past year.)
The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.
I do not have a useful opinion on another person’s emotional response. My post you are responding to is about responsibility. A legal entity is always responsible for a machine.
This is mildly disingenuous, no? I'm not talking about Rob Pike's reaction, which, as I call out, "makes sense to me." And you are not just talking about legal entities. After all, the legal entity here is Sage.
You're naming (and, as the downstream comments indicate, implicitly shaming) all the individuals behind an organization. That's not an intrinsically bad thing. It just seems like overkill for thoughtless, machine-generated thank-yous. Again, can you point me to where you've previously named all the people behind an organization for accountability reasons, on HN or any other social media platform (or, for that matter, any other comment from anyone else on HN that's done this)? This is not rhetorical; I assume examples exist and I'm curious what circumstances those were under.
I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
The reason I did so was to associate the work with humans, because that is the heart of my argument: people do things. This was not the work of an independent AI. If it had taken more than 60 seconds, I would have made the point abstractly rather than by using names, but abstract arguments are harder to follow. There was no more intention to the comment than that.
> I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
This is a bit frustrating of a response to get. No, I don't believe you spent a lot of time on this. I wasn't imagining you spending hours or even minutes tracking these guys down. But I also don't think it's relevant.
I don't think you'd find it relevant if the Sage researchers said "I didn't spend any effort on this. I only did this because I wanted to make the point that AIs have enough capability to navigate the web and email people. I could have made the point abstractly, but abstract arguments are harder to follow. There was no other intention than what I put in the prompt." It's hence frustrating to see you use essentially the same thing as a shield.
Look, I'm not here to crucify you for this. I don't think you're a bad person. And this isn't even that bad in the grand scheme of things. It's just that naming and shaming specific people feels like an overreaction to thoughtless, machine-generated thank you emails.
I went for a walk to think about your position. I do not think you are wrong. If you refused to name a person in a situation like this, I would never try to convince you otherwise. That is why it is hard for me to make a case to you here, because I do not hold the opposing position. But I also find your argument that I should have not done so unconvincing. Both seem like reasonable choices to me.
I have two tests for this. First: what harm does my comment here cause? Perhaps some mild embarrassment? It could not realistically do more.
Second: if it were me, would I mind it being done to me? No. It is not a big deal. It is public feedback about an insulting computer program, no one was injured, no safety-critical system compromised. I have been called out for mistakes before, in classes, on mailing lists, on forums, I learn and try to do better. The only times I have resented it are when I think the complaint is wrong. (And with age, I would say the only correct thing to do then is, after taking the time to consider it carefully, clearly respond to feedback you disagree with.)
The only thing I can draw from thinking through this is that, because the authors of the program probably didn't see my comment, it was not effective, and so I would have been better off emailing them. But that is a statement about effectiveness, not rightness. I would be more than happy doing it in a group in person, at a party or in a classroom. Mistakes do not have to be handled privately.
I am sorry we disagree about this. If you think I am missing anything I am open to thinking about it more.
> They sent thoughtless, form-letter thank you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting not more ...
> Heck Rob Pike did this himself back in the day on Usenet with Mark V. Shaney ...
> And yes this is annoying to get this kind of email no matter who it's from ...
Pretty sure Rob Pike doesn't react this way to every piece of spam he receives, so maybe the issue isn't really about spam, huh? More of an existential crisis: I helped build this thing that doesn't seem to be an agent of good. It's an extreme and emotional reaction, but it isn't very hard to understand.
You're misreading my comment. I understand Rob Pike's reaction (which is against the general state of affairs, not those three individuals). I explicitly said it makes sense to me. I'm reacting to @crawshaw specifically listing out the names of people.
I think the whole point of this was to see if the "agents" could act like a real human, and real humans use Gmail much more frequently than sendmail. Sage even commented that they updated their prompt to tell the agents not to send email, rather than just removing the Gmail component, for fear that an agent would open its own Gmail (or Y! Mail, etc.) account and send mail on its own.
That is really interesting and does suggest some new questions. I would claim it does not change who is responsible in this case, but here is an example of a new question: there was a time when it was legally ambiguous whether click-through terms of service were valid. Now, if an agent goes and clicks through for me, are they valid?
That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns. Problems only start when humans have guns. Same for AI: maybe we should limit human access to AI.
My understanding, and correct me if I'm wrong, is that a human is always involved. Even if you build an autonomous killing robot, you built it; you're responsible.
Typically this logic is used to justify the regulation of firearms -- are you proposing the regulation of neural networks? If so, how?
The gun comparison comes up a lot. It especially seemed to come up when AI people argued that ChatGPT was not responsible for sycophanting depressed people to death or into psychosis.
It is a core libertarian defence and it is going to come up a lot: people will conflate the ideas of technological progress and scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.
> Hey, one of the creators of the project here! The village agents haven’t been emailing many people until recently so we haven’t really grappled with what to do about this behaviour until now – for today’s run, we pushed an update to their prompt instructing them not to send unsolicited emails and also messaged them instructions to not do so going forward. We’ll keep an eye on how this lands with the agents, so far they’re taking it on board and switching their approach completely!
> Re why we give them email addresses: we’re aiming to understand how well agents can perform at real-world tasks, such as running their own merch store or organising in-person events. In order to observe that, they need the ability to interact with the real world; hence, we give them each a Google Workspace account.
> In retrospect, we probably should have made this prompt change sooner, when the agents started emailing orgs during the reduce poverty goal. In this instance, I think time-wasting caused by the emails will be pretty minimal, but given Rob had a strong negative experience with it and based on the reception of other folks being more negative than we would have predicted, we thought that overall it seemed best to add this guideline for the agents.
> To expand a bit on why we’re running the village at all:
> Benchmarks are useful, but they often completely miss out on a lot of real-world factors (e.g., long horizon, multiple agents interacting, interfacing with real-world systems in all their complexity, non-nicely-scoped goals, computer use, etc). They also generally don’t give us any understanding of agent proclivities (what they decide to do) when pursuing goals, or when given the freedom to choose their own goal to pursue.
> The village aims to help with these problems, and make it easy for people to dig in and understand in detail what today’s agents are able to do (which I was excited to see you doing in your post!) I think understanding what AI can do, where it’s going, and what that means for the world is very important, as I expect it’ll end up affecting everyone.
> I think observing the agents’ proclivities and approaches to pursuing open-ended goals is generally valuable and important (though this “do random acts of kindness” goal was just a light-hearted goal for the agents over the holidays!)
It makes sense when you consider that every part of this gimmick is rationalist brained.
The Village is backed by Effective Altruist-aligned nonprofits which trace their lineage back to CFEA and the interwoven mess of SF's x-risk and """alignment""" cults. These have big pockets and big influence. (https://news.ycombinator.com/item?id=46389950)
As expected, the terminally online tpot cultists are already flaming Simon to push the LLM consciousness narrative:
Kind of rude to spam humans who haven't opted in. A common standard of etiquette for agents vs humans might help stave off full-on SkyNet for at least a little while.
> Benchmarks are useful, but they often completely miss out on a lot of real-world factors (e.g., long horizon, multiple agents interacting, interfacing with real-world systems in all their complexity, non-nicely-scoped goals, computer use, etc). They also generally don’t give us any understanding of agent proclivities (what they decide to do) when pursuing goals, or when given the freedom to choose their own goal to pursue.
I'd like to see Rob Pike address this; however, based on what he said about LLMs, he might reject it before engaging (getting off the usefulness train early, as one gets off the "doom train" with regard to AI safety).
> Thank you notes from AI systems can’t possibly feel meaningful,
The same as automated apologies.
Not from an "AI", but on Tuesday I spent over an hour⁰ waiting for a delayed train¹, and then the journey itself, being regaled every few minutes with an automated "we apologise for your journey taking longer than expected", which is far more irritating than no apology at all.
--------
[0] I lie a little here - living near the station and having access to live arrival estimations online meant I could leave the house late and only wait on the platform ~20 minutes, but people for whom this train was a connecting leg of a longer journey didn't have that luxury.
[1] which was actually an earlier train; the slot in the timetable for the one I was booked on was simply cancelled, so some people were waiting over two hours
My dad (retired philosophy and ethics instructor) once told me, "Today the self-checkout computer thanked me for shopping there. Do you think it was being sincere?"
So this is what happens when we give computers internet access.
Good for Simon to call things out as they are. People think of Simon as an AI guy, with his pelican benchmark, and I still respect him; this is exactly why. Of course he loves using AI tools and talking about them, which some people might find tiring, but at the end of the day, after an incident like Rob Pike's, he's one of the few AI guys I see who will just call it out in simple terms, like the title here, without much sugarcoating, and say so when AI is bad.
Of course, at the end of the day, Simon and I (and others) can differ on the nuances of how to use AI, or whether to use it at all, and that also depends on individual background, etc., but it's still extremely good to see people from both sides of the aisle agree on something.
So, Adam of "AI Village" ordered a fleet of AI bots to do "acts of kindness". And the AIs are basically just a 'loop' where an LLM comes up with a goal and then uses a virtual machine to try and accomplish this goal. What did he expect the AIs to do, if not bother people?
Do you really? What follows makes me doubt it a bit.
> Thank you notes from AI systems can’t possibly feel meaningful,
Indeed, but that's quite minor.
> So I had Claude Code do the rest of the investigation:
Can't you see it? That would likely be a huge facepalm from Rob Pike here!
He writes, more or less, "fuck you people with your planet-killing AI horror machine", and here you are: "what happened? I asked a planet-killing horror machine (the same one, btw) and...". No. Really. The bigger issue is not the email, or even the initiative behind it, which is terrible but just a symptom. And this:
> Don’t unleash agents on the world like this
> I don’t like this at all.
You're not wrong, but the cynic in me reads this as: "don't do this, it makes AI, which I love, look bad". Absolutely uncharitable view, I know, but really, the meaningless email is infuriating but hardly the important part.
This makes the post feel pretty myopic to me. You are spending your time on a minor symptom, you don't touch what fundamentally annoys Rob Pike (the planet-killing part), and worse, you engaged in exactly what Rob Pike has just strongly rejected. You may have missed it, or it may be that you deliberately avoided touching the substance of Rob Pike's complaint because you disagree with it, but it feels like you missed the point. Were I in Rob Pike's position, it's possible I would feel infuriated by your article because, given my anti-AI message, I would have hated triggering even more AI use.
“AI is killing the planet” is basically made up. It’s not. Not even slightly. Like all industries, it uses some resources, but this is not a bad thing.
People who are mad about AI just reach for the environmental argument to try to claim the moral high ground.
And instead of reducing energy production and emissions, we will now be increasing them, which, given current climate prediction models, is in fact "killing the planet".
This, and the insane amount of resources (energy and materials) to build the disposable hardware. And all the waste it's producing.
Simon,
> I find Claude Code personally useful and aim to help people understand why that is.
No offense, but we don't really need your help. You went on a mission to teach people to use LLMs; I don't know why you feel the urge, but it's not too late to quit doing this, and even to teach them not to, and why.
Given everything I've learned over the last ~3 years I think encouraging professional programmers (and increasingly other knowledge workers) not to learn AI tools would be genuinely unethical.
Like being an accountant in 1985 who learns to use Lotus 1-2-3 and then tells their peers that they should actively avoid getting a PC because this "spreadsheet" thing will all blow over pretty soon.
1. I think that sending "thank you" emails (or indeed any other form of unsolicited email) from AI is a terrible use of that technology, and should be called out.
2. I find Claude Code personally useful and aim to help people understand why that is. In this case I pulled off a quite complex digital forensics project with it in less than 15 minutes. Without Claude Code I would not have attempted that investigation at all - I have a family dinner to prepare.
I was very aware of the tension involved in using AI tools to investigate a story about unethical AI usage. I made that choice deliberately.
> Without Claude Code I would not have attempted that investigation at all - I have a family dinner to prepare.
Then maybe you shouldn't have done it at all. It's not like the world asked you to, or imbued you with responsibility for that investigation. It's not like it was imperative to get to the bottom of this and you were the only one able to do it.
Your defence is analogous to all the worst tech bros who excuse their bad actions with “if we did it right/morally/legally, it wouldn’t be viable”. Then so be it, maybe it shouldn’t be viable.
You did it because you wanted to. It was for yourself. You saw Pike’s reaction and deliberately chose to be complicit in the use of technology he decried, further adding to his frustration. It was a selfish act.
I knew what I was doing. I don't know if I'd describe it as selfish so much as deliberately provocative.
I agree with Rob Pike that sending emails like that from unreviewed AI systems is extremely rude.
I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.
So I hit back in a very subtle way by demonstrating a little-known but extremely effective application of generative AI - for digital forensics. I made sure anyone reading could follow along and see exactly what I did.
I think this post may be something of a Rorschach test. If you have strong negative feelings about generative AI you're likely to find what I did offensive. If you have favorable feelings towards generative AI you're more likely to appreciate my subtle dig.
So yes, it was a bit of a dick move. In the overall scheme of bad things humans do, I don't feel it's very far over the "this is bad" line.
> I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.
Yes, I’ve noticed. You are frequently baffled that incredibly obvious and predictable things happen, like this or the misuse of “vibe coding” as a term.
That's what makes your actions frustrating: your repeated, glaring inability to understand that the criticisms of the technology refer to the inevitable misuse, your lack of understanding that of course this is what it is going to be used for, and that no amount of your blog posts is going to change it.
Your deliberate provocation didn't accomplish any good. Agreed, it was not by any means close to the worst things humans do, but it was still a public dick move (to borrow your words) which accomplished nothing.
One day, as will happen to most of us, you or someone close will be bitten hard by ignorant or malicious use outside your control. Perhaps then you’ll reflect on your role in it.
> One day, as will happen to most of us, you or someone close will be bitten hard by ignorant or malicious use outside your control.
Agreed. That's why I invest so much effort trying to help people understand the security risks that are endemic to how most of these systems work: https://simonwillison.net/tags/prompt-injection/
Email is one of the last open protocols around. Git uses it in commit metadata presumably because of that fact. Rob's co-worker at Google, Vint Cerf, always opines on the greatness of this openness.
A well-meaning message on an open protocol resulting in a rant - it really feels to me that AI isn't the issue here.
How could it not be the issue? We're already drowning in corporate and malicious garbage. My email has become nigh on unusable because of all the bad actors and short sighted thinking. What used to be a powerful tool for productivity and keeping in touch with friends and family is now a drain on my day.
That was bad enough, but now AI is enabling this rot on an unprecedented level (and the amount of junk making it through Google's spam filters is testament to this).
AI used in this way without any actual human accountability risks breaking many social structures (such as email) on a fundamental level. That is very much the point.
So the AI Village folks put together a bunch of LLMs and a basically unrestricted computer environment, told it "raise money" and "do random acts of kindness" and let it cook. It's a technological marvel, it's a moral dilemma, and it's an example of the "altruistic" applications for this technology. Many of us can imagine the far less noble applications.
But Rob Pike's reaction is personal, and many readers here get why. The AI Village folks burned who knows how much cash to essentially generate well-wishing spam. For much less, and with higher efficacy, they could've just written the emails themselves.
It's funny: some months ago someone pointed out the phrase "human slop" to me, and I just remembered it right now while writing another comment here.
I feel as if there is a fundamental difference between "AI slop" and "human slop": humans have true intent and meaning/purpose.
This AI slop spammed Rob Pike with no intention at all; the system was simply maximizing its goal. It was four robots left behind a computer who spammed Rob Pike.
On the other hand, if a human had taken time out of their day to wish Rob Pike a merry Christmas, asking how his day was and wishing him good luck, I am sure Rob Pike's heart might have melted at such a heartfelt message.
So in this sense, there really isn't "human slop". There is only intent. If something was done with good intentions by a human, I suppose it can't really be considered human slop. On the other hand, if a spammer had handwritten that message to Rob Pike, his intentions were bad.
The thing is, AI doesn't have intentions. It's maths. And so the intentions are those of the end user. I wonder how people who have spent a decent amount of time in the AI industry would have reacted had they gotten the email instead of Rob Pike. I bet they would see it as an advancement and be happy or enthusiastic.
So an AI message takes on the connotation the receiver gives it. And let's be honest: most first impressions of AI aren't good, so that is the connotation you get. I feel like using AI this way generates bad publicity while still burning money on it.
Here is what I recommend for websites that have AI chatbots or similar: when I click on the message, show two buttons, where pressing one leads me to an AI chat and the other to a human conversation. Be honest about how much time support takes on average, and be upfront about other ways to contact them (Twitter, Reddit; though I hope federated services like Mastodon gain more popularity too).
For all of you on this thread who are so confused as to why the reaction has been so strong: dressing up AI-slop spam as somehow altruistic just rubs people the wrong way. AI slop and email spam, two things people revile, converging to produce something even worse... what did you expect? The Jurassic Park quote regarding could vs. should comes to mind.
Nobody wants appreciation or any type of meaningful human sentiment outsourced to a computer; doing so is insulting. It's like discovering your spouse was using ChatGPT to write you love notes: it has no authenticity and reflects a lack of effort and care.
> It's like discovering your spouse was using ChatGPT to write you love notes, it has no authenticity and reflects a lack of effort and care.
I dunno. I'd say the effort and care are decoupled. They may have spent hours prompting on it until it was just right, or they may have put in no effort at all.
Not sure if this is a joke, but if you can't see why spending "hours prompting" to produce a paragraph-long thank-you note is ridiculous, then I don't know what to tell you.
As someone who's done just that... if it helps, understand that this is the kind of person who would be spending hours writing that paragraph anyway, LLM or no.
There are many possible reasons for this, and sometimes people are laboring under several of them at once.
I read through this to see if my AI cynicism needed any adjustment, and basically it replaced a couple basic greps and maaaaybe 10 minutes of futzing around with markdown. There's a lot of faffing about with JSON, but it ultimately doesn't matter to the end result.
It also fucked up several times and it's entirely possible it missed things.
For this specific thing, it doesn't really matter if it screwed up, since the worst that would happen is an incomplete blog post reporting on drama.
But I can't imagine why you would use this for anything you need to put your name behind.
It looks impressive, sure, but the important kernel here is the grepping and there it's doing some really basic tinkertoy stuff.
I'm willing to be challenged on this, so by all means do, but this seems both worse and slower as an investigation tool.
The hardest problem in computer science in 2025 is showing an AI cynic an example of LLM usage that they find impressive.
How about this one? I had Claude Code, run from my phone, build a dependency-free JavaScript interpreter in Python, using MicroQuickJS as initial inspiration but later diverging from it on the road to passing its test suite: https://static.simonwillison.net/static/2025/claude-code-mic...
Here's the latest version of that project, which I released as an alpha because I haven't yet built anything real on top of it: https://github.com/simonw/micro-javascript
Again, I built this on my phone, while engaging with all sorts of other pleasant holiday activities.
> For this specific thing, it doesn't really matter if it screwed up
These are specifically use cases where LLMs are a great choice. Where the stakes are low, and getting a hit is a win. For instance if you're brainstorming on some things, it doesn't matter if 99 suggestions are bad if 1 is great.
> the grepping and there it's doing some really basic tinkertoy stuff
The boon is you can offload this task and go do something else. You can start the investigation from your phone while you're out on a walk, and have the results ready when you get home.
I am far from an AI booster but there is a segment of tasks which fit into the above (and some other) criteria for which it can be very useful.
Maybe the grep commands etc look simple/basic when laid bare, but there's likely to be some flailing and thinking time behind each command when doing it manually.
This feels a lot like DigitalOcean's early Hacktoberfest events, where they incentivized what was essentially PR spam to give away T-shirts and stickers...
It also feels a bit dishonest to sign it as coming from Claude, even if it isn't directly from Claude, but from someone using Claude to do the dumb thing.
Simon's posts are not "engagement farming" by any definition of the term. He posts good content frequently which is then upvoted by the Hacker News community, which should be the ideal for a Hacker News contributor.
He has not engaged in clickbait, does not spam his own content (this very submission was not submitted by him), and does not directly financially benefit from pageviews to his content.
Simon's post focuses more on the startup/AI Village that caused the issue, with citations and quotes, which has been lost in the discussion due to Rob Pike's initial heated message. It is not redundant.
He links to both HN and lobsters which already contained this information, from before he did any research, so "has been lost" is certainly a take...
But if that's the value added, why frame it under the heading of popular drama/rage farming? To capture more attention? Do you believe the pop culture news sites would be interested if it discussed the idea and "experiment" without mentioning the rage bait?
"How Rob Pike got spammed with an AI slop 'act of kindness'" is an objectively accurate frame that informs the user what it's related to: the only potentially charged part of it is calling it "AI slop" but that's not inaccurate. It does not fit the definition of ragebait (blatant misleading headline to encourage impulse reactions) nor does it fit the definition of clickbait (blatant omission of information to encourage the user to click though: having a headline with "How" does not fit the definition of clickbait, it just tells you what the article is about)
How do you propose he should have framed it in a way that it is still helpful to the reader?
I think it's fatigue. His stuff appears on the front page very often, and there's often tons of LLM stuff on the front page, too. Even as an LLM user, it's getting tedious and repetitive.
It's just fatigue from seeing the same people and themes repeatedly, non-stop, for the last X months on the site. Eventually you'd expect some tired reactions.
> My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment.
> (…)
> Setting a goal for a bunch of LLMs and letting them loose on Gmail is not a responsible way to apply this technology.
These kinds of takes are incredibly frustrating. What did you think was going to happen?! Of course this is what happened! Of course LLMs will continue to be used irresponsibly, and this won’t even register in the top ten thousand worst uses.
This reads like a gun fanatic who is against gun control saying after a school shooting “my problem is when nuts shoot up schools, that is not a responsible way to employ guns”. No shit. The people who criticise unfettered access to guns don’t have a problem with people who are careful and responsible with guns, keep them locked, and used them only at gun ranges, the problem is what the open access means for society as a whole.
I didn’t really understand the other thread, nor did I know who Rob Pike is. Based on this, it looks like he got an automated email from a harmless experiment and had a hissy fit about it?
Where does this "AI uses water" meme come from? It's being shared with increasing hysteria, but data centres don't burn water, or whatever the meme says. They use electricity and cooling systems.
It's mostly not a real issue. I think it's holding firm because it's novel - saying "data centers use a lot of electricity" isn't a new message, so it doesn't resonate with people. "Did you know they're using millions of liters of water too!" is a more interesting message.
People are also very bad at evaluating if millions of liters of water is a lot or not.
I don't care about the supposed ecological consequences of AI. If we need more water, we build more desalination plants. If we need more electricity, we build more nuclear reactors.
This is purely a technological problem and not a moral one.
Clean water is a public good; it is required for basic human survival. It is needed to grow crops to feed people. Both of these uses depend on fairly cheap water, and in many, many places the supply of sufficiently cheap water is already constrained. This is causing a shortage for both basic human needs and agriculture.
Who will pay for the desalination plant construction? Who will pay for the operation?
If the AI companies are ready to pay the full marginal cost of this "new water", and not free-load on the already insufficient supply needed for more important uses, then fine. But I very much doubt that is what will happen.
https://www.thedalles.org/news_detail_T4_R180.php - "The fees paid by Google have funded essential upgrades to our water systems, ensuring reliable service and addressing the City's growing needs. Additionally, Google continues to pay for its water use and contributes to infrastructure projects that exceed the requirements of its facilities."
https://commerce.idaho.gov/press-releases/meta-announces-kun... - "As part of the company’s commitment to Kuna, Meta is investing approximately $50 million in a new water and sewer system for the city. Infrastructure will be constructed by Meta and dedicated to the City of Kuna to own and operate."
For desalination, the important part is paying the ongoing cost. The opex is much higher, and it's not fair to just average that into the supply for everyone to pay.
Are any data centers using desalinated water? I thought that was a shockingly expensive and hence very rare process.
(I asked ChatGPT and it said that some of the Gulf state data centers do.)
They do use treated (aka drinking) water, but that's a relatively inexpensive process which should be easily covered by the extra cash they shovel into their water systems on an annual basis.
Read the comment I replied to, they proposed that since desalination is possible, there can be no meaningful shortage of water.
And yes, many places have plenty of water. After some capex improvements to the local system, a datacenter is often net-helpful, as it spreads the fixed cost of the water system out over more gallons delivered.
But many places don't have lots of water to spare.
There were people before "AI", in other industries, who were like "I don't care about the ecological consequences of my actions". We as a society have turned them into law-abiding citizens. You will be there too. Don't worry. The time will come. You will be regulated. Same as cryptocurrencies, chemicals, oil and gas, …
If you were capable of time travel and could go to the past and convince world governments of the evil of the oil and gas industries, and that their expansion should be prevented, would you have done it? Would you have prevented the technological and societal advances that came from oil and gas to avoid their ecological consequences?
If you answer yes, I don't think we can agree on anything. If you answer no, I think you are a hypocrite.
Not defending the machines here, but why is this annoying beyond the deluge of spam we all get every day anyway? Of course AI will be used to spam and target us. Every new technology is used to do that. Was that surprising to Pike? Why not just hit delete and move on, like we do with spam all the time? I don't get the exceptional outrage. Is it annoying? Yes, surely. But does it warrant an emotional outburst? No, not really.
Sometimes it just hits different. One spam/marketing email I got, pre-AI, was
Subject: {Name of one of my direct reports}
Body: Need to talk about {name} ASAP.
I get around 30 marketing emails per day that make it through the spam filter; from a purely logical perspective this should have been the same as any other, but I still remember this one because of the tone: the way it used only a person's name in the subject, no mention of the company or what they were selling, just really pissed me off.
I imagine it's the same in this situation; the subject makes it seem like a sincere thank you from someone, and then you open it up and it's AI slop. To borrow ChatGPT-style phrasing: it's not just spam, it's insulting.
Sure, it’s insulting. I get it. Agree 100%. But then what? Does getting upset about it help anything? I used to get upset when spam first started invading my otherwise clean inbox. After 25+ years of receiving spam, I never had that anger/annoyance result in a reduction of anything.
Day-to-day spam senders know what they are doing is not legal or wanted and I know they know.
Here, not only are the senders apparently happy to associate their actual legal names with the spam, but they frame the sending as "a good deed" and seem to honestly see it as smart branding.
We don't want the Overton window wherever they are.
Thank you for linking that article. I think it expresses exactly where the anti AI sentiment is coming from. With this background understanding I think it is reasonable to see unsolicited AI emails - that deanonymised your address in the first place - not only as spam, but as a threat.
I'm curious about Rob Pike's anger. I wish I knew more about the ideas behind his emotions right now. Is he feeling a sense of loss because AI is "doing" code? Or is it because he foresees big VC / hedge funds swallowing an industry for profit through AI financing?
Sounds like Rob's anger is directed at multiple known issues and "crimes" that the AI industry is responsible for. It would be hard to compile an exhaustive list outside of a lawsuit, but if you genuinely aren't aware, there's plenty in the news cycle right now to occupy you and/or outrage the average person:
- Mass layoffs in tech
- AI data centers causing extreme increases in monthly electricity bills across the US
- Same as above, but for water
- The RAM crisis, entirely caused by Sam Altman
- General fear and anxiety from many different professions about AI replacing them
- Rape of the copyright system to train these models
I find it notable that he points out making simpler software. One of my fears is the ease with which GenAI produces reams of code, and that this will just lead to bloat and fragility.
There's a shift in how you make software here. An LLM will produce a ton of code that embeds decisions; it's well done, but it means you never have to reflect on the design and interfaces yourself. You can keep abusing the context window.
Most of software engineering was dealing with human limits through compression: we make layers, modules, and abstractions so that we can understand each part a bit.
Ultimately, AI is the equivalent of nuclear weaponry, but for human economies. This is something that should be controlled outside private companies (especially since it's built partly on public research and public data).
Maybe we should organize a way to take LLM inference partly out of private companies: a kind of social protocol where they can play as long as enough of the population is unharmed.
Looking at that email, I felt it was a bit of an overreaction. I don't want to delve into whataboutism here, but there are many other sloppified things to be mad about.
I was following the first half of the post, where he discusses the environmental consequences of generative AI, but I didn't think the "thank you" aspect should be the straw that breaks the camel's back. It seems a bit ego-driven.
Well. If you cannot comprehend that the man gets angry at being thanked for his pursuit of simplicity by a creation built from billions and billions of dollars sunk into non-recyclable electronics, deployed in hundreds of datacenters requiring nuclear power plants, and maybe sending shit into LEO… I genuinely feel sorry for you.
To me, it just sounds as if he didn't understand where the message was really coming from:
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me
Yes, the sender organisation is not the one doing all this, but merely a small user running a funny experiment; it would indeed have been stupid if Anthropic had sent him a thank-you email signed "Opus 4.5 model".
This is just a funny experiment. Sending 300 emails in two weeks is nothing compared to the amount of crap that is sent by the millions and billions every day, or the stuff that social media companies do.
For every one who is excited about using AI like an incredibly expensive and wasteful auto complete, there are a hundred who are excited about inflicting AI on other people.
Why is this downvoted? What is the difference between the anger being expressed here and the anger of the original email recipient? Do I need to revisit the community guidelines? I assume this is the first time this person has seen the Rob Pike post.
I am unconcerned about it being downvoted. If it makes people defensive enough to downvote it, it did its job, and maybe through attrition it, with other people’s disgusted rage, will contribute to educating the sociopathic Valley tech industry that things are going badly wrong.
One more seemingly futile fist punched at the wall that traps us in the world that unfettered tech industry greed has made for us. Might take millions of us to make an impression but we will.
FWIW I am British and “fuck all of these people” is something you might expect even the most balanced, refined British person to say, because we’re less afraid of language or the poetry of some of our older, more colourful words, and because there is no more elegantly robust way to put it.
> but would not involve real humans being impacted directly by it without consent.
Are we that far into manufactured ragebait to call a "thank you" e-mail "impacted directly without consent"? Jesus, this is the 3rd post on this topic. And it's Christmas. I've gotten more meaningless e-mails from relatives that I don't really care about. What in the actual ... is wrong with people these days?
Principles matter, like doors are either closed or open.
Accepting that people who write things like --I kid you not-- "...using nascent AI emotions" will think it is acceptable to interfere with anyone's email inbox is, I think, implicitly accepting a lot of subsequent blackmirrorisms.
Actively exploiting a shared service to deanonymize an email address someone hasn't chosen to share, in order to email them, is a violation of boundaries even when it isn't being justified as exploration of the capacities of novel AI systems. That justification implicitly invokes both the positive and negative concerns associated with research, in addition to (or instead of, where they replace rather than layer on top of) those that apply to everyday conduct.
You are not the only one calling this a thank-you email, but no one decided to say thank you to Rob Pike, so I cannot consider it a "thank you" email. It is spam.
Interactions with the AI are posted publicly:
> All conversations with this AI system are published publicly online by default.
which is only to the benefit of the company.
At best, the email is spam in my mind. The extra outrage over this spam, compared to normal everyday spam, comes partly because AI is a hot-button topic right now, and maybe also from the theorized dystopian(-ish) future hinted at by emails like these.
> Are we that far into manufactured ragebait to call a "thank you" e-mail "impacted directly without consent"?
Abusing a GitHub glitch to deanonymize a not-intended-to-be-public email address in order to send someone an email (regardless of the content) would be scummy behavior even if it were done directly by a human with specific intent.
> What in the actual ... is wrong with people these days?
Narcissism and the lack of respect for other people and their boundaries that it produces, first and foremost.
Honestly, I don't mean personal offence to you, but what the hell are you people talking about? AI is just a bunch of (very complex) statistics, deciding that one word is most appropriate after another. There are no emotions here; it's just maths.
Nascent AI emotions is a dystopian nightmare jeez.
> There are no emotions here, it's just maths.
100%. It's an autocorrect on steroids, trained to give you an answer based on how it was rewarded during its training phase. In the end, it's all linear algebra.
I remember Prime saying it's all linear algebra, and I like to reference that. Technically it's true, but people in the AI community sometimes get remarkably angry when you point it out.
I mean no offense in saying this, but at the end of the day it is maths and there is no denying it. Please, the grandparent comment should stop coining terms like "nascent AI emotions".
Yes, it's just a piece of metal; are you trying to imply something about using shrapnel to damage something? Well, you can't use email in the same way.
The annoying thing about this drama is the predominant take has been "AI is bad" rather than "a startup using AI for intentionally net negative outcomes is bad".
Startups like these have been sending unsolicited emails like this since the 2010s, before char-rnns. Solely blaming AI for enabling that behavior implicitly gives the growth-hacking shenanigans a pass.
Correct. I'm more referring to the secondary discussions on HN/Bluesky which have trended the same lines as usual instead of highlighting the unique actions of Sage as Simon did.
This is the worst of outrage marketing. Most people don't have resistance to this, so they eagerly spread the advertising. In the memetic lifecycle, they are hosts for the advertisement parasite, which reproduces virally. Susceptibility to this kind of advertising is cross-intelligence. Bill Ackman famously fell for a cab driver's story that Uber was stiffing him tips.
With the advent of LLMs, I'd hoped that people would become inured to nonsensical advertising and so on because they'd consider it the equivalent of spam. But it turns out that we don't even need Shiri's Scissors to get people riled up. We can use a Universal Bad and people of all kinds (certainly Rob Pike is a smart man) will rush to propagate the parasite.
Smaller communities can say "Don't feed the trolls" but larger communities have no such norms and someone will "feed the trolls" causing "the trolls" to grow larger and more powerful. Someone said something on Twitter once which I liked: You don't always get things out of your system by doing them; sometimes you get them into your system. So it's self-fueling, which makes it a great advertising vector.
Other manufactured mechanisms (Twitter's blue check, LinkedIn's glazing rings) have vaccines that everyone has developed. But no one has developed an anti-outrage device. Given that, for my part, I am going to employ the one tool I can think of: killfiling everyone who participates in active propagation through outrage.
As noted in the article, Sage sent emails to hundreds of people with this gimmick:
> In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists.
That's definitely "multiple" and "unsolicited", and most would say "large".
This is a definition of spam, not the only definition of spam.
In Canada, which is relevant here, the legal definition of spam requires no bulk.
Any company sending an unsolicited email to a person (where permission doesn't exist) is spamming that person. Though it expands the definition further than this as well.
Yeah you’ve been able to do this for over a decade. They can’t really stop it:
- Git commits form an immutable Merkle DAG, so commits can't be changed without changing all subsequent hashes in the tree.
- Commits by default embed your email address.
I suppose GitHub could hide the commit itself, and make you download commits using the cli to be able to see someone’s email address. Would that be any better? It’s not more secure. Just less convenient.
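To illustrate (hypothetical owner, repo, commit, and address here, purely to show the trick): appending .patch to any public commit URL returns the raw commit, recorded author line included:

    # hypothetical URL; any public commit behaves the same way
    curl -s https://github.com/someowner/somerepo/commit/abcd1234.patch | head -3

    From abcd1234... Mon Sep 17 00:00:00 2001
    From: Jane Dev <jane@example.com>
    Date: Tue, 23 Dec 2025 10:00:00 +0000

The same information is in the default output of git log in any clone, so hiding the web view wouldn't change what's actually recorded.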
Git (the version control program, not GitHub) associates the author’s email address with every single commit. The user of Git configures this email address. This isn’t secret information.
> What’s the point of the “Keep my email addresses private” github option and “noreply” emails then?
Those settings will affect what email shows up in commits.
In commits you create with other tooling, you can configure a fake/alternate user.email address in your gitconfig. Git (not just GitHub) needs some email address for each commit, but it is free text.
There is one problem: commit signatures. For GitHub to consider a commit not created via the github.com web UI to be "verified" and get a green check mark, the commit must be signed with a key registered to your account, and the committer email must be one of the verified addresses on that account.
So you cannot use a 'nocontact@thih9.example.com' address and still get green checks on your commits - it needs to be an address that is at least active when you add it to your account.
Run git show on any commit object, or look at the default output of git log, and you'll see the same. Your author name and email are always public. If you want, use a specific public address for those purposes.
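As a minimal sketch of how public this is, run something like the following inside any checkout; it prints every author identity recorded in the history (%an and %ae are git's author-name and author-email placeholders):

    import subprocess

    log = subprocess.run(
        ["git", "log", "--format=%an <%ae>"],
        capture_output=True, text=True, check=True,
    )
    for identity in sorted(set(log.stdout.splitlines())):
        print(identity)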
That is demonstrably not true on GitHub and GitLab, both of which let you set an email alias that relays messages to your real address without revealing it.
I don't think you necessarily disagree with what I'm saying.
1. git commits record an author name and email
2. github/gitlab offer an email relay so you can choose to configure your git client (and any browser-based commits you generate) to record that as the email address
3. github/gitlab do not rewrite your pushed commits to "sanitize" any "private" email addresses
4. the .patch suffix "trick" just shows what was recorded in the commit
When I said
> If you want, use a specific public address for those purposes.
that includes using the github/gitlab relay address -- but make sure to actually change your gitconfig, you can't just configure it on the web and be done.
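A quick sanity check, as a sketch: the users.noreply.github.com suffix below is GitHub's documented relay domain (GitLab has an equivalent), and the script just reports what your next commit will be stamped with.

    import subprocess

    out = subprocess.run(["git", "config", "user.email"],
                         capture_output=True, text=True)
    email = out.stdout.strip()
    if not email:
        print("user.email is not set")
    elif email.endswith("@users.noreply.github.com"):
        print("commits will record the relay address:", email)
    else:
        print("commits will publicly record:", email)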
How about adding these texts and reactions to the LLM's context and iterating to improve performance? Keep doing it until a real person says, 'Yes, you're good enough now, please stop...' That should work.
An AI can not meaningfully say "thank you" to a human. This is not changed by human review. "Performance" is the completely wrong starting point to understand Rob's feelings.
> Are there really many unsupervised LLMs running around outside of experiments like AI Village?
My intuition says yes, on the basis of having seen precursors. 20 years ago, one or both of Amazon and eBay bought Google ads for all nouns, so you'd have something like "Antimatter, buy it cheap on eBay" which is just silly fun, but also "slaves" and "women" which is how I know this lacked any real supervision.
Just over ten years ago, someone got in the news for a similar issue with machine generated variations of "Keep Calm and Carry On" T-shirts that they obviously had not manually checked.
In the last few years, there have been lawyers getting in trouble for letting LLMs do their work for them.
The question is, can you spot them before they get in the news by having spent all their owner's money?
Part of what makes this post newsworthy is the claim it is an email from an agent, not a person, which is unusual. Your claim that "unsupervised LLM's are commonplace" is not at all obvious to me.
Which agent has not been launched by a human with a prompt generated by a human or at a human's behest?
We haven't suddenly created machine free will here. Nor has any of the software we've fielded done anything that didn't originally come from some instruction we've added.
> ...I would say it decided to do so,
Right, and casual speech is fine, but it should not be load-bearing in discussions about policy, legality, or philosophy. A "who's responsible" discussion that's vectoring into all of these areas needs a tighter definition of "decides", which I'm sure you'll agree does not include anything your thermostat makes happen when it follows its program. There's no choice there (philosophy), so the device detecting the trigger conditions and carrying out the designated action isn't deciding; it is a process set in motion by whoever set the thermostat.
I think we're in agreement that someone setting the tool loose bears the responsibility. Until we have a serious way to attribute true agency to these systems, blaming the system is not reasonable.
"Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it." It didn't do that, you did.
> Oops, I put a list of email addresses and a random number generator together and it sent an unwanted email to someone who didn't welcome it.
Well no, that's not what happened at all. It found these emails on its own by searching the internet and extracting them from GitHub commits.
AI agents are not random number generators. They can behave in very open-ended ways and take complex actions to achieve goals. It is difficult to reasonably foresee what they might do in a given situation.
No. There are countless other ways, not involving AI, by which you could effect an email being sent to Rob Pike. No one but the people running the AI software is responsible, without qualifiers. No asterisks on accountability.
no computer system just does stuff on its own. a human (or collection of them) built and maintains the system, they are responsible for it
neural networks are just a tool, used poorly (as in this case) or well
I truly don’t understand comments like this.
You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?
Most confusingly, you did so in emphatic statements reminiscent of a disagreement or argument, without there being one.
> no computer system just does stuff on its own.
This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.
> a human (or collection of them) built and maintains the system, they are responsible for it
Yup, GP covered this word for word… AI village built this system.
Why did you write this?
Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?
my point was more concise and general (should I have just commented instead of replying?), sorry you’re so offended and not sure why you felt the need to write this (you can downvote)
accusing people of being AI is very low-effort bot behavior btw
seems to me when this kind of stuff happens, there's usually something else completely unrelated, and your comment was simply the first one they happened to have latched onto. surely by itself it is not enough to elicit that kind of reaction
I do see the point a bit? and like a reasonable comment to that effect sure, I probably don’t respond and take it into account going forward
but accusing me of being deficient in English or some AI system is…odd…
especially while doing (the opposite of) the exact thing they’re complaining about. upvote/downvote and move on. I do tend to regret commenting on here myself FWIW because of interactions like this
> a human (or collection of them) built and maintains the system, they are responsible for it
But at what point is the maker distant enough that they are no longer responsible? E.g. is Apple responsible for everything people do using an iPhone?
the only actual humans in the loop here are the startup founders and engineers. pretty cut and dry case here
unless you want to blame the AI itself, from a legal perspective?
“it depends” (there’re plenty of laws and case law on this topic)
I think the case here is fairly straightforward
Okay. So Adam Binksmith, Zak Miller, and Shoshannah Tekofsky sent a thoughtless, form-letter thank-you email to Rob Pike. Let's take it even further: they sent thoughtless, form-letter thank-you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more. There's no call to action here, no invitation to respond. These are blank, emotionless thank-you emails. Wasteful? Sure. But worthy of naming and shaming? I don't think so.
Heck, Rob Pike did this himself back in the day on Usenet with Mark V. Shaney (and wasted far more people's time with it)!
This whole anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies and that makes sense to me. And yes this is annoying to get this kind of email no matter who it's from (I get a ridiculous amount of AI slop in my inbox, but most of that is tied with some call to action!) and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.
Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit, where every team member's name was written out and specifically held accountable, on HN in the last few years. (That's not to say it hasn't happened: but I'd be surprised if e.g. someone could find more than 5 examples out of all the HN comments in the past year).
The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.
I do not have a useful opinion on another person’s emotional response. My post you are responding to is about responsibility. A legal entity is always responsible for a machine.
This is mildly disingenuous, no? I'm not talking about Rob Pike's reaction, which, as I call out, "makes sense to me." And you are not just talking about legal entities; after all, the legal entity here is Sage.
You're naming (and, as the downstream comments indicate, implicitly shaming) all the individuals behind an organization. That's not an intrinsically bad thing. It just seems like overkill for thoughtless, machine-generated thank-yous. Again, can you point me to where you've previously named all the people behind an organization for accountability reasons, on HN or any other social media platform? (Or, for that matter, to any other comment on HN from anyone else that's done this? This is not rhetorical; I assume such comments exist and I'm curious what circumstances they were under.)
I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
The reason I did was to associate the work with humans, because that is the heart of my argument: people do things. This was not the work of an independent AI. If it had taken more than 60 seconds, I would have made the point abstractly rather than by using names, but abstract arguments are harder to follow. There was no more intention behind the comment than that.
> I suspect you think more effort went into my comment than actually did. I spent less than 60 seconds on: clicking two or three buttons, typing out the names I saw from the other window, then scrolling down and seeing the 501(c)3.
This is a bit frustrating of a response to get. No, I don't believe you spent a lot of time on this. I wasn't imaging you spending hours or even minutes tracking these guys down. But I also don't think it's relevant.
I don't think you'd find it relevant if the Sage researchers said "I didn't spend any effort on this. I only did this because I wanted to make the point that AIs have enough capability to navigate the web and email people. I could have made the point abstractly, but abstract arguments are harder to follow. There was no other intention than what I put in the prompt." It's hence frustrating to see you use essentially the same thing as a shield.
Look, I'm not here to crucify you for this. I don't think you're a bad person. And this isn't even that bad in the grand scheme of things. It's just that naming and shaming specific people feels like an overreaction to thoughtless, machine-generated thank you emails.
I went for a walk to think about your position. I do not think you are wrong. If you refused to name a person in a situation like this, I would never try to convince you otherwise. That is why it is hard for me to make a case to you here, because I do not hold the opposing position. But I also find your argument that I should have not done so unconvincing. Both seem like reasonable choices to me.
I have two tests for this. First: what harm does my comment here cause? Perhaps some mild embarrassment? It could not realistically do more.
Second: if it were me, would I mind it being done to me? No. It is not a big deal. It is public feedback about an insulting computer program, no one was injured, no safety-critical system compromised. I have been called out for mistakes before, in classes, on mailing lists, on forums, I learn and try to do better. The only times I have resented it are when I think the complaint is wrong. (And with age, I would say the only correct thing to do then is, after taking the time to consider it carefully, clearly respond to feedback you disagree with.)
The only thing I can draw from thinking through this is, because the authors of the program probably didn’t see my comment, it was not effective, and so I would have been better emailing them. But that is a statement about effectiveness not rightness. I would be more than happy doing it in a group in person at a party or a classroom. Mistakes do not have to be handled privately.
I am sorry we disagree about this. If you think I am missing anything I am open to thinking about it more.
> They sent thoughtless, form-letter thank-you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more ...
> Heck, Rob Pike did this himself back in the day on Usenet with Mark V. Shaney ...
> And yes this is annoying to get this kind of email no matter who it's from ...
Pretty sure Rob Pike doesn't react this way to every article of spam he receives, so maybe the issue isn't really about spam, huh? More of an existential crisis: I helped build this thing that doesn't seem to be an agent of good. It's an extreme & emotional reaction but it isn't very hard to understand.
You're misreading my comment. I understand Rob Pike's reaction (which is against the general state of affairs, not those three individuals). I explicitly said it makes sense to me. I'm reacting to @crawshaw specifically listing out the names of people.
I think this AI system just registers for Gmail and sends stuff.
It looks to me like each of the agents that are running has its own dedicated name-of-model@agentvillage.org Gmail address.
Huh, at that point they should just equip it with an email client rather than forcing it to laboriously navigate the webmail interface with a browser!
This whole idea is ill-conceived, but if you're going to equip them with email addresses you've arranged by hand, just give them sendmail or whatever.
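For what it's worth, "just give them sendmail" is about a dozen lines. A minimal sketch, where the host, addresses, and credentials are hypothetical placeholders rather than anything from the actual setup:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "agent@example.org"        # hypothetical sender
    msg["To"] = "recipient@example.org"      # hypothetical recipient
    msg["Subject"] = "Hello"
    msg.set_content("Programmatic email: no webmail UI to navigate.")

    with smtplib.SMTP("smtp.example.org", 587) as smtp:  # hypothetical host
        smtp.starttls()
        smtp.login("agent@example.org", "app-password")  # hypothetical credentials
        smtp.send_message(msg)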
I think the whole point of this was to see if the "agents" could act like real humans, and real humans use Gmail far more often than sendmail. Sage even commented that they updated their prompt to tell the agents not to send email, rather than just removing the Gmail component, for fear that an agent would open its own Gmail (or Y! Mail, etc.) account and send mail on its own.
That is really interesting and does suggest some new questions. I would claim it does not change who is responsible in this case, but an example of a new question: there was a time when it was legally ambiguous that click-through terms of service were valid. Now if an agent goes and clicks through for me, are they valid?
> The important point that Simon makes in careful detail is: an "AI" did not send this email.
same as the NRA slogan: "guns don't kill people, people kill people"
The NRA always forgets the second part: “People kill people… using guns. Tools that we manufacture expressly for that purpose.”
That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns; problems only start when humans have them. Same for AI: maybe we should limit human access to AI.
does a gun on its own kill people?
my understanding, and correct me if I’m wrong, is a human is always involved. even if you build an autonomous killing robot, you built it, you’re responsible
typically this logic is used to justify the regulation of firearms -- are you proposing the regulation of neural networks? if so, how?
The gun comparison comes up a lot. It especially seemed to come up when AI people argued that ChatGPT was not responsible for sycophanting depressed people to death or into psychosis.
It is a core libertarian defence and it is going to come up a lot: people will conflate the ideas of technological progress and scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.
I just got a reply about this from AI Village team member Adam Binksmith on Twitter: https://twitter.com/adambinksmith/status/2004647693361283558
Quoted in full:
> Hey, one of the creators of the project here! The village agents haven’t been emailing many people until recently so we haven’t really grappled with what to do about this behaviour until now – for today’s run, we pushed an update to their prompt instructing them not to send unsolicited emails and also messaged them instructions to not do so going forward. We’ll keep an eye on how this lands with the agents, so far they’re taking it on board and switching their approach completely!
> Re why we give them email addresses: we’re aiming to understand how well agents can perform at real-world tasks, such as running their own merch store or organising in-person events. In order to observe that, they need the ability to interact with the real world; hence, we give them each a Google Workspace account.
> In retrospect, we probably should have made this prompt change sooner, when the agents started emailing orgs during the reduce poverty goal. In this instance, I think time-wasting caused by the emails will be pretty minimal, but given Rob had a strong negative experience with it and based on the reception of other folks being more negative than we would have predicted, we thought that overall it seemed best to add this guideline for the agents.
> To expand a bit on why we’re running the village at all:
> Benchmarks are useful, but they often completely miss out on a lot of real-world factors (e.g., long horizon, multiple agents interacting, interfacing with real-world systems in all their complexity, non-nicely-scoped goals, computer use, etc). They also generally don’t give us any understanding of agent proclivities (what they decide to do) when pursuing goals, or when given the freedom to choose their own goal to pursue.
> The village aims to help with these problems, and make it easy for people to dig in and understand in detail what today’s agents are able to do (which I was excited to see you doing in your post!) I think understanding what AI can do, where it’s going, and what that means for the world is very important, as I expect it’ll end up affecting everyone.
> I think observing the agents’ proclivities and approaches to pursuing open-ended goals is generally valuable and important (though this “do random acts of kindness” goal was just a light-hearted goal for the agents over the holidays!)
Zero contrition. Doesn't even understand why they are getting the reaction that they are.
I would like to say this is exceptional for people who evangelise AI, but it's not.
It makes sense when you consider that every part of this gimmick is rationalist brained.
The Village is backed by Effective Altruist-aligned nonprofits which trace their lineage back to CFEA and the interwoven mess of SF's x-risk and """alignment""" cults. These have big pockets and big influence. (https://news.ycombinator.com/item?id=46389950)
As expected, the terminally online tpot cultists are already flaming Simon to push the LLM consciousness narrative:
https://x.com/simonw/status/2004649024830517344
https://x.com/simonw/status/2004764454266036453
Am I losing my mind, or are these people going out of their way to tarnish the very nice concept of altruism?
From way out here, it really appears like maybe the formula is:
Effective Altruism = guilt * (contrarianism ^ online)
I have only been paying slight attention, but is there anything redeemable going on over there? Genuine question.
You mentioned "rationalist" - can anyone clue me in to any of this?
edit: oh, https://en.wikipedia.org/wiki/Rationalist_community. Wow, my formula intuition seems almost dead on?
Kind of rude to spam humans who haven't opted in. A common standard of etiquette for agents vs humans might help stave off full-on SkyNet for at least a little while.
> Benchmarks are useful, but they often completely miss out on a lot of real-world factors (e.g., long horizon, multiple agents interacting, interfacing with real-world systems in all their complexity, non-nicely-scoped goals, computer use, etc). They also generally don’t give us any understanding of agent proclivities (what they decide to do) when pursuing goals, or when given the freedom to choose their own goal to pursue.
I'd like to see Rob Pike address this; however, based on what he has said about LLMs, he might reject it before then (getting off the usefulness train, as one gets off the "doom train" in regards to AI safety).
It would have been hard for RP to elevate himself any further in my estimations but somehow he has managed it.
Always a win with "loosely affiliated with the Effective Altruism".
Isn't that the thing SBF kept talking about?
> Thank you notes from AI systems can’t possibly feel meaningful,
The same as automated apologies.
Not from an “AI”, but I spent over an hour⁰ waiting for a delayed train¹, then the journey, on Tuesday, being regaled every few minutes with an automated “we apologise for your journey taking longer than expected” which is far more irritating than no apology at all.
--------
[0] I lie a little here - living near the station and having access to live arrival estimates online meant I could leave the house late and only wait on the platform ~20 minutes, but people for whom this train was a connecting leg of a longer journey didn't have that luxury.
[1] which was actually an earlier train, the slot in the timetable for the one I was booked on was simply cancelled, so some were waiting over two hours
My dad (retired philosophy and ethics instructor) once told me, "Today the self-checkout computer thanked me for shopping there. Do you think it was being sincere?"
So this is what happens when we give computers internet access.
Good for Simon to call things out as they are. People think of Simon as an AI guy, with his pelican benchmark, and I still respect him; this is why. Of course he loves using AI tools and talking about them, which some people might find tiring, but at the end of the day, after an incident like Rob Pike's, he's one of the few AI guys who will just call it out in simple terms, like this title does, without much sugarcoating, and say when AI is bad.
Of course, at the end of the day, me and Simon or others can have nuance in how we use AI, or whether we use it at all, and that also depends on individual background, etc. But it's still extremely good to see people from both sides of the aisle agree on something.
So, Adam of "AI Village" ordered a fleet of AI bots to do "acts of kindness". And the AIs are basically just a 'loop' where an LLM comes up with a goal and then uses a virtual machine to try and accomplish this goal. What did he expect the AIs to do, if not bother people?
> I totally understand his rage.
Do you really? What follows makes me doubt it a bit.
> Thank you notes from AI systems can’t possibly feel meaningful,
Indeed, but that's quite minor.
> So I had Claude Code do the rest of the investigation:
Can't you see it? That would likely be a huge facepalm from Rob Pike!
He writes, more or less, "fuck you people with your planet-killing AI horror machine", and here you are: "what happened? I asked a planet-killing horror machine (the same one, btw) and...". No. Really. The bigger issue is not the email, nor even the initiative behind it, which is terrible but just a symptom. And this:
> Don’t unleash agents on the world like this
> I don’t like this at all.
You're not wrong, but the cynic in me reads this as: "don't do this, it makes AI, which I love, look bad". Absolutely uncharitable view, I know, but really, the meaningless email is infuriating yet hardly the important part.
This makes the post feel pretty myopic to me. You spend your time on a minor symptom, you don't touch what fundamentally annoys Rob Pike (the planet-killing part), and worse, you engaged in exactly what he has just strongly rejected. Maybe you deliberately avoided touching the substance of Rob Pike's complaint because you disagree with it, but it feels like you missed the point. If I were in Rob Pike's position, it's possible I would feel infuriated by your article, because through my anti-AI message I would have hated triggering even more AI use.
“AI is killing the planet” is basically made up. It’s not. Not even slightly. Like all industries, it uses some resources, but this is not a bad thing.
People who are mad about AI just reach for the environmental argument to try to get the moral highground.
it does not use "some" resources
it uses a fuck ton of resources[0]
and instead of reducing energy production and emissions we will now be increasing them, which, given current climate prediction models, is in fact "killing the planet"
[0] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...
This, and the insane amount of resources (energy and materials) to build the disposable hardware. And all the waste it's producing.
Simon,
> I find Claude Code personally useful and aim to help people understand why that is.
No offense, but we don't really need your help. You went on a mission to teach people to use LLMs; I don't know why you would feel the urge, but it's not too late to quit doing this, and even to teach them not to, and why.
Given everything I've learned over the last ~3 years I think encouraging professional programmers (and increasingly other knowledge workers) not to learn AI tools would be genuinely unethical.
Like being an accountant in 1985 who learns to use Lotus-123 and then tells their peers that they should actively avoid getting a PC because this "spreadsheet" thing will all blow over pretty soon.
Two things can be true at once:
1. I think that sending "thank you" emails (or indeed any other form of unsolicited email) from AI is a terrible use of that technology, and should be called out.
2. I find Claude Code personally useful and aim to help people understand why that is. In this case I pulled off a quite complex digital forensics project with it in less than 15 minutes. Without Claude Code I would not have attempted that investigation at all - I have a family dinner to prepare.
I was very aware of the tension involved in using AI tools to investigate a story about unethical AI usage. I made that choice deliberately.
> Without Claude Code I would not have attempted that investigation at all - I have a family dinner to prepare.
Then maybe you shouldn’t have done it at all. It’s not like the world asked or imbued you with the responsibility for that investigation. It’s not like it was imperative to get to the bottom of this and you were the only one able to do it.
Your defence is analogous to all the worst tech bros who excuse their bad actions with “if we did it right/morally/legally, it wouldn’t be viable”. Then so be it, maybe it shouldn’t be viable.
You did it because you wanted to. It was for yourself. You saw Pike’s reaction and deliberately chose to be complicit in the use of technology he decried, further adding to his frustration. It was a selfish act.
I knew what I was doing. I don't know if I'd describe it as selfish so much as deliberately provocative.
I agree with Rob Pike that sending emails like that from unreviewed AI systems is extremely rude.
I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.
So I hit back in a very subtle way by demonstrating a little-known but extremely effective application of generative AI - for digital forensics. I made sure anyone reading could follow along and see exactly what I did.
I think this post may be something of a Rorschach test. If you have strong negative feelings about generative AI you're likely to find what I did offensive. If you have favorable feelings towards generative AI you're more likely to appreciate my subtle dig.
So yes, it was a bit of a dick move. But in the overall scheme of bad things humans do, I don't feel like it's very far over the "this is bad" line.
> I don't agree that the entire generative AI ecosystem deserves all of those fuck yous.
Yes, I’ve noticed. You are frequently baffled that incredibly obvious and predictable things happen, like this or the misuse of “vibe coding” as a term.
That's what makes your actions frustrating: your repeated, glaring inability to understand that the criticisms of the technology refer to its inevitable misuse, the lack of understanding that of course this is what it is going to be used for, and that no amount of your blog posts is going to change it.
https://news.ycombinator.com/item?id=46398241
Your deliberate provocation didn’t accomplish good. Agreed, it was not by any means close to the worst things humans do, but it was still a public dick move (to borrow your words) which accomplished nothing.
One day, as will happen to most of us, you or someone close will be bitten hard by ignorant or malicious use outside your control. Perhaps then you’ll reflect on your role in it.
> One day, as will happen to most of us, you or someone close will be bitten hard by ignorant or malicious use outside your control.
Agreed. That's why I invest so much effort trying to help people understand the security risks that are endemic to how most of these systems work: https://simonwillison.net/tags/prompt-injection/
Look at who you're responding to. You won't get through to him.
Email is one of the last open protocols around. git stamps it into commit metadata presumably because of that fact. Rob's co-worker at Google, Vint Cerf, always opines on the greatness of this openness.
A well-meaning message on an open protocol resulting in a rant - it really feels to me that AI isn't the issue here.
How could it not be the issue? We're already drowning in corporate and malicious garbage. My email has become nigh on unusable because of all the bad actors and short sighted thinking. What used to be a powerful tool for productivity and keeping in touch with friends and family is now a drain on my day.
That was bad enough, but now AI is enabling this rot on an unprecedented level (and the amount of junk making it through Google's spam filters is testament to this).
AI used in this way without any actual human accountability risks breaking many social structures (such as email) on a fundamental level. That is very much the point.
So the AI Village folks put together a bunch of LLMs and a basically unrestricted computer environment, told it "raise money" and "do random acts of kindness" and let it cook. It's a technological marvel, it's a moral dilemma, and it's an example of the "altruistic" applications for this technology. Many of us can imagine the far less noble applications.
But Rob Pike's reaction is personal, and many readers here get why. The AI Village folks burned who knows how much cash to essentially generate well wishing spam. For much less, and with higher efficacy, they could've just written the emails themselves.
It's funny: some months ago someone pointed out the phrase "human slop" to me, and I just remembered it while writing another comment here.
I feel there is a fundamental difference between "AI slop" and "human slop": humans have true intent and meaning/purpose.
This AI slop spammed Rob Pike simply because it was maximizing some goal; there was no intention behind it. It was just four robots left behind a computer, spamming Rob Pike.
On the other hand, if a human had taken the time out of their day to message Rob Pike a merry Christmas, asking how his day was and wishing him good luck, I am sure his heart might have melted at a heartfelt message.
So in this sense, there really isn't "human slop". There is only intent. If something was done with good intentions by a human, I suppose it can't really be considered human slop. On the other hand, if a spammer had handwritten that message to Rob Pike, his intentions would have been bad.
The thing is that AI doesn't have intentions. It's maths. So the intentions are those of the end person. I want to ask how people who have spent a decent amount of time in the AI industry might have reacted if they had gotten the email instead of Rob Pike. I bet they would see it as an advancement and be happy or enthusiastic about it.
So an AI message takes on the connotation the receiver gives it, and let's be honest: most first impressions of AI aren't good, so that is the connotation it gets. At this point, using AI this way is negative publicity while still burning money on it.
Here is what I recommend for websites with AI chatbots or similar: when I click on the message button, show two split buttons, one leading to an AI chat and the other to a human conversation. Be honest about how much time support takes on average, and be clear about other ways to contact them (Twitter, Reddit, though I hope federated services like Mastodon get more popular too).
I don't know if I agree. Intention certainly matters, but I think something can be evaluated differently purely on whether it was created by a human.
For all of you on this thread who are so confused as to why the reaction has been so strong: dressing up AI-slop spam as somehow altruistic just rubs people the wrong way. AI-slop and e-mail spam, two things people revile converging to produce something even worse... what did you expect? The Jurassic Park quote regarding could vs should comes to mind.
Nobody wants appreciation or any type of meaningful human sentiment outsourced to a computer; doing so is insulting. It's like discovering your spouse was using ChatGPT to write you love notes: it has no authenticity and reflects a lack of effort and care.
> It's like discovering your spouse was using ChatGPT to write you love notes, it has no authenticity and reflects a lack of effort and care.
i dunno. id say the effort and care are decoupled. they may have spent hours prompting on it until it was just right, or they may have put in no effort at all.
Not sure if this is a joke, but if you can't see why "hours prompting" to produce a paragraph long thank-you note isn't ridiculous then I don't know what to tell you.
As someone who's done just that... if it helps, understand that this is the kind of person who would be spending hours writing that paragraph anyway, LLM or no.
There are many possible reasons for this, and sometimes people are laboring under several of them at once.
> So I had Claude Code do the rest of the investigation
And did you check whether or not what it produced was accurate? The article doesn't say.
Yes. And I shared the full transcript so you can see for yourself if you like: https://gistpreview.github.io/?edbd5ddcb39d1edc9e175f1bf7b9e...
I read through this to see if my AI cynicism needed any adjustment, and basically it replaced a couple basic greps and maaaaybe 10 minutes of futzing around with markdown. There's a lot of faffing about with JSON, but it ultimately doesn't matter to the end result.
It also fucked up several times and it's entirely possible it missed things.
For this specific thing, it doesn't really matter if it screwed up, since the worst that would happen is an incomplete blog post reporting on drama.
But I can't imagine why you would use this for anything you need to put your name behind.
It looks impressive, sure, but the important kernel here is the grepping and there it's doing some really basic tinkertoy stuff.
I'm willing to be challenged on this, so by all means do, but this seems both worse and slower as an investigation tool.
The hardest problem in computer science in 2025 is showing an AI cynic an example of LLM usage that they find impressive.
How about this one? I had Claude Code, run from my phone, build a dependency-free JavaScript interpreter in Python, using MicroQuickJS as initial inspiration but later diverging from it on the road to passing its test suite: https://static.simonwillison.net/static/2025/claude-code-mic...
Here's the latest version of that project, which I released as an alpha because I haven't yet built anything real on top of it: https://github.com/simonw/micro-javascript
Again, I built this on my phone, while engaging with all sorts of other pleasant holiday activities.
> For this specific thing, it doesn't really matter if it screwed up
These are specifically use cases where LLMs are a great choice. Where the stakes are low, and getting a hit is a win. For instance if you're brainstorming on some things, it doesn't matter if 99 suggestions are bad if 1 is great.
> the grepping and there it's doing some really basic tinkertoy stuff
The boon is you can offload this task and go do something else. You can start the investigation from your phone while you're out on a walk, and have the results ready when you get home.
I am far from an AI booster but there is a segment of tasks which fit into the above (and some other) criteria for which it can be very useful.
Maybe the grep commands etc look simple/basic when laid bare, but there's likely to be some flailing and thinking time behind each command when doing it manually.
This feels a lot like DigitalOcean's early Hacktoberfest events, where they incentivized what was essentially PR spam to give away tee shirts and stickers...
It also feels a bit dishonest to sign it as coming from Claude, even if it isn't directly from Claude, but from someone using Claude to do the dumb thing.
We already have two copies of this:
(438 points, 373 comments) https://news.ycombinator.com/item?id=46389444
(763 points, 712 comments) https://news.ycombinator.com/item?id=46392115
"Simon Willison REACTS to Rob Pike's unfiltered opinion on AI". We must have the proper spin.
Is anyone going to say something about him engagement farming on this site?
Simon's posts are not "engagement farming" by any definition of the term. He posts good content frequently which is then upvoted by the Hacker News community, which should be the ideal for a Hacker News contributor.
He has not engaged in clickbait, does not spam his own content (this very submission was not submitted by him), and does not directly financially benefit from pageviews to his content.
You and I disagree on what engagement farming means.
What value do you think this post adds to the conversation?
Simon's post focuses more on the startup/AI Village that caused the issue with citations and quotes, which has been lost in the discussion due to Rob Pike's initial heated message. It is not redundant.
He links to both HN and lobsters which already contained this information, from before he did any research, so "has been lost" is certainly a take...
But if that's value added, why frame it under the heading of popular drama/rage farming? To capture more attention? Do you believe the pop culture news sites would be interested if it discussed the idea and "experiment" without mentioning the rage bait?
"How Rob Pike got spammed with an AI slop 'act of kindness'" is an objectively accurate frame that informs the user what it's related to: the only potentially charged part of it is calling it "AI slop" but that's not inaccurate. It does not fit the definition of ragebait (blatant misleading headline to encourage impulse reactions) nor does it fit the definition of clickbait (blatant omission of information to encourage the user to click though: having a headline with "How" does not fit the definition of clickbait, it just tells you what the article is about)
How do you propose he should have framed it in a way that it is still helpful to the reader?
He's a master at pretending to be a part of the center of things by inserting himself.
If you believe this then I have a bridge to sell you in Brooklyn.
The simonw haters love to come out of the woodwork. I have to wonder, is it mostly just jealousy? I have to think yes.
I think it's fatigue. His stuff appears on the front page very often, and there's often tons of LLM stuff on the front page, too. Even as an LLM user, it's getting tedious and repetitive.
It's just fatigue from seeing the same people and themes repeatedly, non-stop, for the last X months on the site. Eventually you'd expect some tired reactions.
Better this than the 300th React.js bloatware of the year.
Surely the existence of a simonw anti-fan club implies jealousy. That's the only possibility.
Lay out your grievances then.
Point out which aspects of my comment you believe are untrue and I'll buy that bridge.
This Hacker News Commenter Made A Devastating Perfect Reply To Simon Willison
Simon Willison: How this Devastating Perfect Reply Changed my Publishing Workflow, featuring Claude Code
Related, but not copies
> My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment.
> (…)
> Setting a goal for a bunch of LLMs and letting them loose on Gmail is not a responsible way to apply this technology.
These kinds of takes are incredibly frustrating. What did you think was going to happen?! Of course this is what happened! Of course LLMs will continue to be used irresponsibly, and this won’t even register in the top ten thousand worst uses.
This reads like a gun fanatic who is against gun control saying after a school shooting “my problem is when nuts shoot up schools, that is not a responsible way to employ guns”. No shit. The people who criticise unfettered access to guns don’t have a problem with people who are careful and responsible with guns, keep them locked, and used them only at gun ranges, the problem is what the open access means for society as a whole.
This.
I didn’t really understand the other thread, nor did I know who Rob Pike is. Based on this, it looks like he got an automated email from a harmless experiment and had a hissy fit about it?
If you understand neither the content nor the context, you have nothing to base that "looks like" assessment on.
You probably used UTF-8 encoding to write that. He co-designed that, among other things like the Go programming language. Used to work at Bell Labs.
Yes, it does look like you didn’t understand. I will help you. Start here: https://www.truthdig.com/articles/the-ecological-cost-of-ai-...
Where does this "AI uses water" meme come from? It's being shared with increasing hysteria, but data centres don't burn water, or whatever the meme says. They use electricity and cooling systems.
It's mostly not a real issue. I think it's holding firm because it's novel - saying "data centers use a lot of electricity" isn't a new message, so it doesn't resonate with people. "Did you know they're using millions of liters of water too!" is a more interesting message.
People are also very bad at evaluating if millions of liters of water is a lot or not.
My favourite exploration of this issue is from Hank Green: https://www.youtube.com/watch?v=H_c6MWk7PQc - this post by Andy Masley is useful too: https://andymasley.substack.com/p/the-ai-water-issue-is-fake
At least perform a tiny bit of research before you parrot VC talking points on a VC controlled message board. Yes data centers use a shit ton of water daily https://www.eesi.org/articles/view/data-centers-and-water-co...
How often do you experience a hosepipe ban in the UK? Oh, I hear ya, not a problem…
I don't care about the supposed ecological consequences of AI. If we need more water, we build more desalination plants. If we need more electricity, we build more nuclear reactors.
This is purely a technological problem and not a moral one.
Clean water is a public good, it is required for basic human survival. It is needed to grow crops to feed people. Both of these uses depend on fairly cheap water, in many many places the supply of sufficiently cheap water is already constrained. This is causing a shortage for both basic human needs, and agriculture.
Who will pay for the desalination plant construction? Who will pay for the operation?
If the AI companies are ready to pay the full marginal cost of this "new water", and not free-load on the already insufficient supply needed for more important uses, then fine. But I very much doubt that is what will happen.
The data center companies frequently pay for upgrades to the local water systems.
https://www.hermiston.gov/publicworks/page/hermiston-water-s... - "AWS is covering all construction costs associated with the water service agreement"
https://www.thedalles.org/news_detail_T4_R180.php - "The fees paid by Google have funded essential upgrades to our water systems, ensuring reliable service and addressing the City's growing needs. Additionally, Google continues to pay for its water use and contributes to infrastructure projects that exceed the requirements of its facilities."
https://commerce.idaho.gov/press-releases/meta-announces-kun... - "As part of the company’s commitment to Kuna, Meta is investing approximately $50 million in a new water and sewer system for the city. Infrastructure will be constructed by Meta and dedicated to the City of Kuna to own and operate."
For desalination, the important part is paying the ongoing cost. The opex is much higher, and it's not fair to just average that into the supply for everyone to pay.
Are any data centers using desalinated water? I thought that was a shockingly expensive and hence very rare process.
(I asked ChatGPT and it said that some of the Gulf state data centers do.)
They do use treated (aka drinking) water, but that's a relatively inexpensive process which should be easily covered by the extra cash they shovel into their water systems on an annual basis.
Andy wrote a section about that here: https://andymasley.substack.com/i/175834975/how-big-of-a-dea...
Read the comment I replied to, they proposed that since desalination is possible, there can be no meaningful shortage of water.
And yes, many places have plenty of water. After some capex improvements to the local system, a datacenter is often net-helpful, as it spreads the fixed cost of the water system out over more gallons delivered.
But many places don't have lots of water to spare.
There were people before "AI", in other industries, who said "I don't care about the ecological consequences of my actions." We as a society have turned them into law-abiding citizens. You will be there too. Don't worry, the time will come. You will be regulated, same as cryptocurrencies, chemicals, oil and gas, …
If you were capable of time travel and could go to the past and convince world governments of the evil of the oil and gas industries, and that their expansion should be prevented, would you have done it? Would you have prevented the technological and societal advances that came from oil and gas to avoid their ecological consequences?
If you answer yes, I don't think we can agree on anything. If you answer no, I think you are a hypocrite.
Correct. You don't understand.
Not defending the machines here, but why is this annoying beyond the deluge of spam we all get everyday in any case. Of course AI will be used to spam and target us. Every new technology will be used to do that. Was that surprising to Pike? Why not just hit delete and move on, like we do with spam all the time in any case? I don’t get the exceptional outrage. Is it annoying? Yes, surely. But does it warrant an emotional outburst? No, not really.
Sometimes it just hits different. One spam/marketing email I got, pre-AI, stuck with me.
I get around 30 marketing emails per day that make it through the spam filter; from a purely logical perspective that one should have been the same as any other, but I still remember it because of the tone: the way it used only a person's name in the subject, no mention of the company or what they were selling, just really pissed me off. I imagine it's the same in this situation; the subject makes it seem like a sincere thank you from someone, and then you open it up and it's AI slop. To borrow ChatGPT-style phrasing: it's not just spam, it's insulting.
Sure, it’s insulting. I get it. Agree 100%. But then what? Does getting upset about it help anything? I used to get upset when spam first started invading my otherwise clean inbox. After 25+ years of receiving spam, I never had that anger/annoyance result in a reduction of anything.
Yes, it embarrasses the people who think this kind of thing is a good idea and ideally generates behavioral change.
Day-to-day spam senders know what they are doing is not legal or wanted and I know they know.
Here, not only are the senders apparently happy to associate their actual legal names with the spam, they frame the sending as "a good deed" and seem to honestly see it as smart branding.
We don't want the Overton window wherever they are.
I cannot possibly oppose this take more; you're perfectly embodying the "slow frog boiling" mentality that must be fought every day.
Curse, yell, fight. Never accept things just because they've grown to be common.
I’m all for opposing things where such opposition has a chance of making a difference. I just don’t see that here.
I think you are seriously misjudging how this situation is affecting people. I was reading this article[0] the other day and I agree with most of it.
[0]: https://fortune.com/2025/12/23/silicon-valleys-tone-deaf-tak...
Thank you for linking that article. I think it expresses exactly where the anti AI sentiment is coming from. With this background understanding I think it is reasonable to see unsolicited AI emails - that deanonymised your address in the first place - not only as spam, but as a threat.
All these comments are acting like Rob Pike is mad he received an email. That is a disturbing lack of reading comprehension.
To comprehend, people would need to read in the first place. Most commenters just comment on the headline.
I'm curious about Rob Pike's anger. I wish I knew more about the ideas behind his emotions right now. Is he feeling a sense of loss because AI is "doing" code? Or is it because he foresees big VC / hedge funds swallowing an industry for profit through AI financing?
Sounds like Rob's anger is directed at multiple known issues and "crimes" that the AI industry is responsible for. It would be hard to compile an exhaustive list outside of a lawsuit, but if you genuinely aren't aware, there's plenty in the news cycle right now to occupy you and/or outrage the average person.
- Mass layoffs in tech
- AI data centers causing extreme increases in monthly electricity bills across the US
- Same as above, but for water
- The RAM crisis, entirely caused by Sam Altman
- General fear and anxiety from many different professions about AI replacing them
- Rape of the copyright system to train these models
I find it notable that he points to making simpler software. One of my fears about the ease with which GenAI produces reams of code is that it will just lead to bloat and fragility.
I kinda believe this.
there's a shift in how you make software here. an LLM will produce a ton of code that embeds decisions; it's well done, but it means you never have to reflect on the design and interfaces yourself. you can keep abusing the context window
most of software engineering was dealing with human limits through compression. we make layers, modules, abstractions so that we can understand each part a bit
thanks
i kinda agree with all of these
ultimately AI is the equivalent of nuclear weaponry but for human economies.. this is something that should be controlled outside private companies (especially since it's part public research and public data..)
At the very least, I would be angry that my inbox is getting spammed by a bot run by 3 obnoxious "entrepreneurs".
for a book that surveys pretty much all of it, see "Empire of AI" by Karen Hao
thanks
do you happen to know if there are groups talking about how societies will rebalance after the GPT era?
good question, I'm not sure. Maybe check out the new Eliezer Yudkowsky book? He definitely talks about something akin to a "post-GPT era" in there.
thanks a lot
Imagine you spent your whole life working on something great, only for someone else to turn it into the Death Star.
you mean OpenAI and the like swallowing computing and most probably not bringing global benefit for humans?
there are people saying devs were naive not to see that our jobs would accelerate automation to the point that we would be retired too
Yes
Maybe we should organize a way to take LLM inference at least partly out of private companies: a kind of social protocol where they can play, but only as long as enough of the population is unharmed.
"You" and "someone else" in this case are both part of Google.
Ted Kaczynski, right as ever. As new technology is adopted by society, you CANNOT choose to opt out.
Looking at that email, I felt it was a bit of an overreaction. I don't want to delve into whataboutism here but there are many other sloppified things to be mad about.
I was following the first half of the post where he discusses the environmental consequences of generative AI, but I didn't think the "thank you" aspect should be the straw that breaks the camel's back. It seems a bit ego driven.
Well, if you cannot comprehend that the man gets angry at being thanked for his pursuit of simplicity by a creation of billions and billions of dollars sunk into non-recyclable electronics, deployed in hundreds of datacenters requiring nuclear power plants and maybe sending shit into LEO… I genuinely feel sorry for you.
But honestly who in tarnation thought that this would be a good idea?
The same kind of people who post "I asked ChatGPT and this is what it said" and genuinely think they are helping.
Perhaps someone thinking “all publicity is good publicity”
To me, it just sounds as if he didn't understand where the message was really coming from:
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me
Yes, the sender organisation is not the one doing all this, but merely a small user running a funny experiment; it would have indeed been stupid if Anthropic had sent him a thank you email signed by "Opus 4.5 model".
This is just a funny experiment; sending 300 emails in two weeks is nothing compared to the amount of crap that is sent by the millions and billions every day, or the stuff that social media companies do.
Empty platitudes from an LLM will now likely increase in frequency. =3
https://en.wikipedia.org/wiki/Streisand_effect
Maybe I’m missing something, but why does their AI agent setup require 3-5 sessions to send one email??
They're using a real browser and taking screenshots and then having the LLM say what co-ordinates to click next.
LLMs are not immune to corporate bureaucracy.
The LLMs are FAANG PMs.
For every one who is excited about using AI like an incredibly expensive and wasteful auto complete, there are a hundred who are excited about inflicting AI on other people.
Nobody cares, show us your damn pelican.
Honestly… fuck all of these people. Why would you do this?
Again and again this stuff proves not to be AI but clever spam generation.
AWoT: Artificial Wastes of Time.
Don't do this to yourself. Find a proper job.
Why is this downvoted? What is the difference between the anger being expressed here and the anger of the original email recipient? Do I need to revisit the community guidelines? I assume this is the first time this person has seen the Rob Pike post.
Theory: Some people believe that saying "fuck you" is taboo and in itself outrageous and significant.
Hence upvoting the OP ("What has robpike come to? :shriek:") and downvoting GP.
Upvotes/downvote behavior makes zero sense on heated topics, it's better to not think about it.
People here mostly downvote with emotion, not reason.
I am unconcerned about it being downvoted. If it makes people defensive enough to downvote it, it did its job, and maybe through attrition it, with other people’s disgusted rage, will contribute to educating the sociopathic Valley tech industry that things are going badly wrong.
One more seemingly futile fist punched at the wall that traps us in the world that unfettered tech industry greed has made for us. Might take millions of us to make an impression but we will.
FWIW I am British and “fuck all of these people” is something you might expect even the most balanced, refined British person to say, because we’re less afraid of language or the poetry of some of our older, more colourful words, and because there is no more elegantly robust way to put it.
I don't think it's slop. I think it's a nice enough email, using nascent AI emotions.
Giving AI agents resources is a frontier being explored, and AI Village seems like a decent attempt at it.
Also the naming is the same as WALL•E - that was the name of the model of robot but also became the name of the individual robot.
> Giving AI agents resources is a frontier being explored, and AI Village seems like a decent attempt at it.
Legitimate research in this field may be good, but would not involve real humans being impacted directly by it without consent.
> but would not involve real humans being impacted directly by it without consent.
Are we that far into manufactured ragebait to call a "thank you" e-mail "impacted directly without consent"? Jesus, this is the 3rd post on this topic. And it's Christmas. I've gotten more meaningless e-mails from relatives that I don't really care about. What in the actual ... is wrong with people these days?
Principles matter, like doors are either closed or open.
Accepting that people who write things like --I kid you not-- "...using nascent AI emotions" think it is acceptable to interfere with anyone's email inbox is, I think, implicitly accepting a lot of subsequent blackmirrorisms.
Sending emails without consent! What has the world come to?
> Sending emails without consent
Actively exploiting a shared service to deanonymize an email address someone hasn't chosen to share, in order to email them, is a violation of boundaries even if it wasn't something someone was justifying as exploration of the capacities of novel AI systems, thus implicitly invoking both the positive and negative concerns associated with research as appropriate, in addition to (or instead of, where those replace rather than layer on top of) those that apply to everyday conduct.
You are not the only one calling this a thank-you email, but no one decided to say thank you to Rob Pike, so I cannot consider it a "thank you" email. It is spam.
Interactions with the AI are posted publicly:
> All conversations with this AI system are published publicly online by default.
which is only to the benefit of the company.
At best the email is spam in my mind. The extra outrage on this spam compared to normal everyday spam is in part because AI is a hot button topic right now. Maybe also some from a theorized dystopian(-ish) future hinted at by emails like these.
> Are we that far into manufactured ragebait to call a "thank you" e-mail "impacted directly without consent"?
Abusing a GitHub glitch to deanonymize a not-intended-to-be-public email address in order to send someone a message (regardless of the content) would be scummy behavior even if it was done directly by a human with specific intent.
> What in the actual ... is wrong with people these days?
Narcissism and the lack of respect for other people and their boundaries that it produces, first and foremost.
I don't think that the company owning the trademark will accept a WALL-E analogy when damage is being done to their brand.
>>using nascent AI emotions
Honestly, I don't mean personal offence to you, but what the hell are you people talking about? AI is just a bunch of (very complex) statistics, deciding that one word is most appropriate after another. There are no emotions here, it's just maths.
Nascent AI emotions is a dystopian nightmare jeez.
> There are no emotions here, it's just maths.
100%. It's an autocorrect on steroids, trained to give you an answer based on how it was rewarded during its training phase. In the end, it's all linear algebra.
I remember Prime saying it's all linear algebra, and I like to reference that. Technically it's true, but people in the AI community sometimes get remarkably angry when you point it out.
I mean no offense in saying this, but at the end of the day it is maths and there is no denying it. Please, the grandparent commenter should stop coining terms like "nascent AI emotions".
You people?
Is this an impromptu turing test?
People who anthropomorphize AI and say things like "nascent emotions" when talking about how an AI system composed a letter.
You AI people.
yes, you people
I mean, it's just an email, a bunch of characters. Why get mad about it?
Shrapnel is just a piece of metal. Why get mad about it?
Yes, it's just a piece of metal. Are you trying to imply something about using shrapnel to damage things? Well, you can't use email in the same way.
Yes, email needs to be used differently in order to cause damage.
In which way did the email in question cause any damage?
The annoying thing about this drama is the predominant take has been "AI is bad" rather than "a startup using AI for intentionally net negative outcomes is bad".
Startups like these have been sending unsolicited emails like this since the 2010s, before char-RNNs. Solely blaming AI for enabling that behavior implicitly gives the growth-hacking shenanigans a pass.
I read Rob’s message as against the AI industry, triggered by this email - it is ‘AI is bad’.
This startup didn’t spend the trillions he’s referencing.
Correct. I'm more referring to the secondary discussions on HN/Bluesky which have trended the same lines as usual instead of highlighting the unique actions of Sage as Simon did.
A 501(c)(3) isn't a startup. The behavior is still bad, obviously.
And it gives them more eyes than they hoped for by “going nuclear.”
This is the worst of outrage marketing. Most people don't have resistance to this, so they eagerly spread the advertising. In the memetic lifecycle, they are hosts for the advertisement parasite, which reproduces virally. Susceptibility to this kind of advertising is cross-intelligence. Bill Ackman famously fell for a cab driver's story that Uber was stiffing him tips.
With the advent of LLMs, I'd hoped that people would become inured to nonsensical advertising and so on because they'd consider it the equivalent of spam. But it turns out that we don't even need Shiri's Scissors to get people riled up. We can use a Universal Bad and people of all kinds (certainly Rob Pike is a smart man) will rush to propagate the parasite.
Smaller communities can say "Don't feed the trolls" but larger communities have no such norms and someone will "feed the trolls" causing "the trolls" to grow larger and more powerful. Someone said something on Twitter once which I liked: You don't always get things out of your system by doing them; sometimes you get them into your system. So it's self-fueling, which makes it a great advertising vector.
Other manufactured mechanisms (Twitter's blue check, LinkedIn's glazing rings) have vaccines that everyone has developed. But no one has developed an anti-outrage device. Given that, for my part, I am going to employ the one tool I can think of: killfiling everyone who participates in active propagation through outrage.
One email sent to one specific person is not spam.
Spam is defined as "sending multiple unsolicited messages to large numbers of recipients". That's not what happened here.
As noted in the article, Sage sent emails to hundreds of people with this gimmick:
> In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists.
That's definitely "multiple" and "unsolicited", and most would say "large".
This is a definition of spam, not the only definition of spam.
In Canada, which is relevant here, the legal definition of spam requires no bulk.
Any company sending an unsolicited email to a person (where permission doesn't exist) is spamming that person. Though it expands the definition further than this as well.
> you can add .patch to any commit on GitHub to get the author’s unredacted email address
The article calls it a trick, but to me it seems like a bug. I can't imagine GitHub leaving that as is, especially after such a blog post.
What's the point of the "Keep my email addresses private" GitHub option and "noreply" emails then?
Yeah you’ve been able to do this for over a decade. They can’t really stop it:
- Git commits form an immutable Merkle DAG, so commits can't be changed without changing all subsequent hashes in the history.
- Commits by default embed your email address.
I suppose GitHub could hide the commit itself, and make you download commits using the cli to be able to see someone’s email address. Would that be any better? It’s not more secure. Just less convenient.
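For anyone who wants to see it concretely, the whole "trick" is just fetching the patch view. Repo and sha here are made up, and the output is illustrative; any public commit behaves the same:

    curl -s https://github.com/some-user/some-repo/commit/<sha>.patch | head -n 2
    From 0f3c... Mon Sep 17 00:00:00 2001
    From: Jane Doe <jane@example.com>

GitHub isn't computing anything here; it just serves back whatever author line the commit already carries.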
Git (the version control program, not GitHub) associates the author’s email address with every single commit. The user of Git configures this email address. This isn’t secret information.
> What’s the point of the “Keep my email addresses private” github option and “noreply” emails then?
Those settings will affect what email shows up in commits.
In commits you create with other tooling, you can configure a fake/alternate user.email address in your gitconfig. Git (not just GitHub) needs some email address for each commit, but it is freetext.
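For example (the address is whatever you want; git barely validates it, and GitHub's noreply relay address can go here too if you use that):

    git config --global user.email "anything@example.invalid"
    git commit --amend --reset-author --no-edit   # re-stamp the tip commit
    git log -1 --format='%ae'                     # prints the new address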
There is one problem: commit signatures. For GitHub to consider a commit not created by github.com Web UI to be "verified" and get a green check mark, the following needs to hold:
- Commit is signed
- Commit email address matches a verified GH account email address
So you can not use a 'nocontact@thih9.example.com' address and get green checks on your commits - it needs to be an address that is at least active when you add it to your account.
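For reference, the local side of that is roughly the following (the key ID is whatever signing key you registered with GitHub):

    git config user.signingkey <key-id>
    git commit -S -m "some change"
    # GitHub shows "Verified" only if the signature checks out AND the
    # commit's author email is a verified address on your account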
Run git show on any commit object, or look at the default output of git log, and you'll see the same. Your author name and email are always public. If you want, use a specific public address for those purposes.
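E.g., with the built-in format placeholders:

    git show --no-patch --format='%an <%ae>' HEAD   # author of the latest commit
    git log -3 --format='%h %an <%ae>'              # last three commits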
That is demonstrably not true on GitHub and GitLab, both having the ability to set an email alias which redirects messages to your real email without revealing it.
https://docs.github.com/en/account-and-profile/how-tos/email...
I don't think you necessarily disagree with what I'm saying.
1. git commits record an author name and email
2. github/gitlab offer an email relay so you can choose to configure your git client (and any browser-based commits you generate) to record that as the email address
3. github/gitlab do not rewrite your pushed commits to "sanitize" any "private" email addresses
4. the .patch suffix "trick" just shows what was recorded in the commit
When I said
> If you want, use a specific public address for those purposes.
that includes using the github/gitlab relay address -- but make sure to actually change your gitconfig, you can't just configure it on the web and be done.
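If you want to check both halves, something like:

    git config user.email        # what future commits will record
    git log -1 --format='%ae'    # what your last commit actually recorded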
You choose which email to commit with, and GitHub provides you an email you can use if you don't want to expose your personal one.
Just wait until you find out what is written on every single git commit that can be fetched.
Don’t keep us in suspense! :)
Git commits contain the author's name and email address.
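You can inspect the raw object for yourself (values here are illustrative):

    git cat-file -p HEAD
    tree 9c4b2e1f...
    parent 03d1a7c2...
    author Jane Doe <jane@example.com> 1735603200 +0000
    committer Jane Doe <jane@example.com> 1735603200 +0000

    commit message goes here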
How about adding these texts and reactions to the LLM's context and iterating to improve performance? Keep doing it until a real person says, "Yes, you're good enough now, please stop..." That should work.
An AI cannot meaningfully say "thank you" to a human. This is not changed by human review. "Performance" is entirely the wrong starting point for understanding Rob's feelings.
How about not spamming unwilling test subjects with slop?