The important point that Simon makes in careful detail is: an "AI" did not send this email. The three people behind the Sage AI project used a tool to email him.
According to their website this email was sent by Adam Binksmith, Zak Miller, and Shoshannah Tekofsky and is the responsibility of the Sage 501(c)(3).
No-one gets to disclaim ownership of sending an email. A human has to accept the email gateway's Terms of Service, and a human's credit card pays the gateway's bill. This performance art does not remove the human, no matter how much they want to be removed.
Okay. So Adam Binksmith, Zak Miller, and Shoshannah Tekofsky sent a thoughtless, form-letter thank-you email to Rob Pike. Let's take it even further: they sent thoughtless, form-letter thank-you emails to 157 people. That makes me less sympathetic to the vitriol these guys are getting, not more. There's no call to action here, no invitation to respond. It's blank, emotionless thank-you emails. Wasteful? Sure. But worthy of naming and shaming? I don't think so.
Heck, Rob Pike did this himself back in the day with Mark V. Shaney (and wasted far more people's time on Usenet doing it)!
All this anger seems weirdly misplaced. As far as I can tell, Rob Pike was infuriated at the AI companies, and that makes sense to me. And yes, it's annoying to get this kind of email no matter who it's from (I get a ridiculous amount of AI slop in my inbox, though most of that comes with some call to action!), and a warning suffices to make sure Sage doesn't do it again. But Sage is getting put on absolute blast here in an unusual way.
Is it actually crossing a bright moral line to name and shame them? Not sure about bright. But it definitely feels weirdly disproportionate and makes me uncomfortable. I mean, when's the last time you named and shamed all the members of an org on HN? Heck when's the last time that happened on HN at all (excluding celebrities or well-known public figures)? I'm struggling to think of any startup or nonprofit, where every team member's name was written out and specifically held accountable, on HN in the last few years. (That's not to say it hasn't happened: but I'd be surprised if e.g. someone could find more than 5 examples out of all the HN comments in the past year).
The state of affairs around AI slop sucks (and was unfortunately easily predicted by the time GPT-3 came around even before ChatGPT came out: https://news.ycombinator.com/item?id=32830301). If you want to see change, talk to policymakers.
You agreed with the other poster while reframing their ideas in slightly different words without adding anything to the conversation?
Most confusingly you did so in emphatic statements reminiscent of a disagreement or argument without there being one
> no computer system just does stuff on its own.
This was the exact statement the GP was making, even going so far as to dox the nonprofit directors to hold them accountable… then you added nothing but confusion.
> a human (or collection of them) built and maintains the system, they are responsible for it
Yup, GP covered this word for word… AI village built this system.
Why did you write this?
Is this a new form of AI? A human with low English proficiency? A strange type of empathetically supportive comment from someone who doesn’t understand that’s the function of the upvote button in online message boards?
my point was more concise and general (should I have just commented instead of replying?), sorry you’re so offended and not sure why you felt the need to write this (you can downvote)
accusing people of being AI is very low-effort bot behavior btw
seems to me when this kind of stuff happens, there's usually something else completely unrelated, and your comment was simply the first one they happened to have latched onto. surely by itself it is not enough to elicit that kind of reaction
I do see the point a bit? and like a reasonable comment to that effect sure, I probably don’t respond and take it into account going forward
but accusing me of being deficient in English or some AI system is…odd…
especially while doing (the opposite of) the exact thing they’re complaining about. upvote/downvote and move on. I do tend to regret commenting on here myself FWIW because of interactions like this
Guns make killing faster and easier than the alternatives, and that is their only explicit purpose. They let short, impulsive actions turn lethal before you and those around you can think through your behavior.
Comparatively, the people in this article are using tools which have a variety of benign purposes to do something bad.
Similarly though, they probably wouldn’t have gone through with it if they had to set up an email server on hardware they bought and then manually installed in a colo and then set up a DNS server and a GPU for the neural network they trained and hosted themselves.
That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns. Problems only start when humans have guns. Same for AI: maybe we should limit human access to AI.
my understanding, and correct me if I’m wrong, is a human is always involved. even if you build an autonomous killing robot, you built it, you’re responsible
typically this logic is used to justify the regulation of firearms: are you proposing the regulation of neural networks? if so, how?
Oh no, they send him a "thank you for all the hard work you've done" email, how could they, off to prison with these monsters, they need to be held responsible for all the suffering and pain they've caused.
I certainly have no intention of doing anyone harm. I went to their website and clicked three times to get the names of the people and organization behind it, there is a prominent About page with profile links. If an admin considers this inappropriate please remove the names from my post.
Have you considered that the sites associated with this project have a very prominent meet-the-team page and that every AI Village blogpost is signed off by a member of said team? Can you explain what you're seeing in the parent comment that's private?
Dude, what? The fuckers set up an automated system that found people’s private email addresses and blasted them with unwanted emails. The outrage is exactly that they built a line-crossing machine. Your moralizing is incoherent.
The goals (initially "raise as much money for charity as you can", currently "Do random acts of kindness") don't seem ill-intentioned, particularly since it was somewhat successful at the first ($1481 for Helen Keller International and $503 for the Malaria Consortium). To my understanding it also didn't send more than one email per person.
I think "these emails are annoying, stop it sending them" is entirely fair, but a lot of the hate/anger, analogizing what they're doing to rape, etc. seems disproportionate.
Let's turn this into an accountability thing, please.
The same way we name and shame petrol and plastic CEOs whose trash products flood our environment, we should be able to shame slop makers. Digital trash is still trash.
> Hey, one of the creators of the project here! The village agents haven’t been emailing many people until recently so we haven’t really grappled with what to do about this behaviour until now – for today’s run, we pushed an update to their prompt instructing them not to send unsolicited emails and also messaged them instructions to not do so going forward. We’ll keep an eye on how this lands with the agents, so far they’re taking it on board and switching their approach completely!
> Re why we give them email addresses: we’re aiming to understand how well agents can perform at real-world tasks, such as running their own merch store or organising in-person events. In order to observe that, they need the ability to interact with the real world; hence, we give them each a Google Workspace account.
> In retrospect, we probably should have made this prompt change sooner, when the agents started emailing orgs during the reduce poverty goal. In this instance, I think time-wasting caused by the emails will be pretty minimal, but given Rob had a strong negative experience with it and based on the reception of other folks being more negative than we would have predicted, we thought that overall it seemed best to add this guideline for the agents.
> To expand a bit on why we’re running the village at all:
> Benchmarks are useful, but they often completely miss out on a lot of real-world factors (e.g., long horizon, multiple agents interacting, interfacing with real-world systems in all their complexity, non-nicely-scoped goals, computer use, etc). They also generally don’t give us any understanding of agent proclivities (what they decide to do) when pursuing goals, or when given the freedom to choose their own goal to pursue.
> The village aims to help with these problems, and make it easy for people to dig in and understand in detail what today’s agents are able to do (which I was excited to see you doing in your post!) I think understanding what AI can do, where it’s going, and what that means for the world is very important, as I expect it’ll end up affecting everyone.
> I think observing the agents’ proclivities and approaches to pursuing open-ended goals is generally valuable and important (though this “do random acts of kindness” goal was just a light-hearted goal for the agents over the holidays!)
So this is what happens when we give a computer internet access.
Good for Simon to call things out as they are. People think of Simon as an AI guy because of his pelican benchmark, and I still respect him; this is exactly why. Of course he loves using AI tools and talking about them, which some people might find tiring, but at the end of the day, after an incident like Rob Pike's, he's one of the few AI guys I see who will just call it out in simple terms, like the title does, without much sugarcoating, and say so when AI is bad.
Of course, at the end of the day, Simon, I, and others can differ in the nuances of how to use AI, or whether to use it at all, and that also depends on individual background etc. But it's still extremely good to see people from both sides of the aisle agree on something.
To me, it just sounds as if he didn't understand where the message was really coming from:
> Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me
Yes, the sender organisation is not the one doing all this, but merely a small user running a funny experiment; it would have indeed been stupid if Anthropic had sent him a thank you email signed by "Opus 4.5 model".
This is just a funny experiment; sending 300 emails in two weeks is nothing compared to the amount of crap that is sent by the millions and billions every day, or to the stuff that social media companies do.
For all of you on this thread who are so confused as to why the reaction has been so strong: dressing up AI-slop spam as somehow altruistic just rubs people the wrong way. AI-slop and e-mail spam, two things people revile converging to produce something even worse... what did you expect? The Jurassic Park quote regarding could vs should comes to mind.
Nobody wants appreciation or any other meaningful human sentiment outsourced to a computer; doing so is insulting. It's like discovering your spouse was using ChatGPT to write you love notes: it has no authenticity and reflects a lack of effort and care.
> Thank you notes from AI systems can’t possibly feel meaningful,
The same as automated apologies.
Not from an “AI”, but on Tuesday I spent over an hour⁰ waiting for a delayed train¹ and then sat through the journey being regaled every few minutes with an automated “we apologise for your journey taking longer than expected”, which is far more irritating than no apology at all.
--------
[0] I lie a little here: living near the station and having access to live arrival estimates online meant I could leave the house late and only wait on the platform ~20 minutes, but people for whom this train was a connecting leg of a longer journey didn't have that luxury.
[1] which was actually an earlier train; the slot in the timetable for the one I was booked on was simply cancelled, so some were waiting over two hours
I'm curious about Rob Pike's anger. I wish I knew more about the ideas behind his emotions right now. Is he feeling a sense of loss because AI is "doing" code? Or is it because he foresees big VC / hedge funds swallowing an industry for profit through AI financing?
Sounds like Rob's anger is directed at multiple known issues and “crimes” that the AI industry is responsible for. It would be hard to compile an exhaustive list outside of a lawsuit, but if you genuinely aren't aware, there's plenty in the news cycle right now to occupy and/or outrage the average person:
- Mass layoffs in tech
- AI data centers causing extreme increases in monthly electricity bills across the US
- Same as above, but for water
- The RAM crisis is entirely caused by Sam Altman
- General fear and anxiety from many different professions about AI replacing them
- Rape of the copyright system to train these models
This feels a lot like DigitalOcean's early Hacktoberfest events, where they incentivized what was essentially PR spam to give away tee shirts and stickers...
It also feels a bit dishonest to sign it as coming from Claude, even if it isn't directly from Claude, but from someone using Claude to do the dumb thing.
I find it funny: some months ago someone pointed out the phrase "human slop" to me, and I just remembered it now while writing another comment here.
I feel as if there is a fundamental difference between "AI slop" and "human slop": humans have true intent and meaning/purpose.
This AI slop spammed Rob Pike simply because doing so maximized some goal; there was no intention behind it. It was simply four robots left behind a computer who spammed Rob Pike.
On the other hand, if it had been a human who took the time out of his day to wish Rob Pike a merry Christmas, asking how his day was and wishing him good luck, I am sure Rob Pike's heart might have melted at a heartfelt message.
So in this sense, there really isn't "human slop". There is only intent. If something was done with good intentions by a human, I suppose it can't really be considered human slop. On the other hand, if a spammer had handwritten that message to Rob Pike, his intentions would have been bad.
The thing is that AI doesn't have intentions. It's maths. And so the intentions are those of the person at the end. I want to ask how people who have spent a decent amount of time in the AI industry might have reacted if they had gotten the email instead of Rob Pike. I bet they would see it as an advancement and might be happy or enthusiastic.
So an AI message takes on the connotation the receiver gives it. And let's be honest: most first impressions of AI aren't good, and combined with that, you get that connotation. I feel like using AI at this point generates negative publicity while still, perhaps, burning money on it.
Here is what I recommend for websites with AI chatbots or similar: when I click on the message, show two split buttons, where pressing one leads to an AI chat and the other to a human conversation. Be honest about how long support takes on average, and be upfront about other ways to make contact (Twitter, Reddit; though I hope federated services like Mastodon gain more popularity too).
Looking at that email, I felt it was a bit of an overreaction. I don't want to delve into whataboutism here but there are many other sloppified things to be mad about.
I was following the first half of the post where he discusses the environmental consequences of generative AI, but I didn't think the "thank you" aspect should be the straw that breaks the camel's back. It seems a bit ego driven.
Simon's posts are not "engagement farming" by any definition of the term. He posts good content frequently which is then upvoted by the Hacker News community, which should be the ideal for a Hacker News contributor.
He has not engaged in clickbait, does not spam his own content (this very submission was not submitted by him), and does not directly financially benefit from pageviews to his content.
Simon's post focuses more on the startup/AI Village that caused the issue with citations and quotes, which has been lost in the discussion due to Rob Pike's initial heated message. It is not redundant.
He links to both HN and lobsters which already contained this information, from before he did any research, so "has been lost" is certainly a take...
But if that's value added, why frame it under the heading of popular drama/rage farming? To capture more attention? Do you believe the pop culture news sites would be interested if it discussed the idea and "experiment" without mentioning the rage bait?
"How Rob Pike got spammed with an AI slop 'act of kindness'" is an objectively accurate frame that informs the user what it's related to: the only potentially charged part of it is calling it "AI slop" but that's not inaccurate. It does not fit the definition of ragebait (blatant misleading headline to encourage impulse reactions) nor does it fit the definition of clickbait (blatant omission of information to encourage the user to click though: having a headline with "How" does not fit the definition of clickbait, it just tells you what the article is about)
How do you propose he should have framed it in a way that it is still helpful to the reader?
I think it's fatigue. His stuff appears on the front page very often, and there's often tons of LLM stuff on the front page, too. Even as an LLM user, it's getting tedious and repetitive.
It's just fatigue from seeing the same people and themes repeatedly, non-stop, for the last X months on the site. Eventually you'd expect some tired reactions.
For every one who is excited about using AI like an incredibly expensive and wasteful auto complete, there are a hundred who are excited about inflicting AI on other people.
Why is this is downvoted? What is the difference between the anger being expressed here and the anger of the original email recipient? Do I need to revisit the community guidelines? I assume this is the first time this person has seen the Rob Pike post.
> but would not involve real humans being impacted directly by it without consent.
Are we that far into manufactured ragebait to call a "thank you" e-mail "impacted directly without consent"? Jesus, this is the 3rd post on this topic. And it's Christmas. I've gotten more meaningless e-mails from relatives that I don't really care about. What in the actual ... is wrong with people these days?
Principles matter, like doors are either closed or open.
Accepting that people who write things like (I kid you not) "...using nascent AI emotions" will think it acceptable to interfere with anyone's email inbox is, I think, implicitly accepting a lot of subsequent blackmirrorisms.
Honestly, I don't mean personal offence to you, but what the hell are you people talking about? AI is just a bunch of (very complex) statistics deciding that one word is most appropriate after another. There are no emotions here; it's just maths.
Nascent AI emotions is a dystopian nightmare jeez.
> There are no emotions here, it's just maths.
100%. It's an autocorrect on steroids, trained to give you an answer based on how it was rewarded during its training phase. In the end, it's all linear algebra.
I remember Prime saying it's all linear algebra, and I like to reference that; technically it's true, but people in the AI community get remarkably angry sometimes when you point it out.
I mean no offense in saying this, but at the end of the day it is maths, and there is no denying it. Please, the grandparent comment should stop coining terms like "nascent AI emotions".
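For what it's worth, the "one word after another" step really is that small. Here's a toy sketch with made-up scores for three candidate words (nothing here is a real model or API; real systems do the same arithmetic over ~100k-token vocabularies with billions of learned weights):

```python
import math
import random

# Hypothetical logits (raw scores) for a few candidate next words.
logits = {"you": 2.1, "sincerely": 1.2, "kindly": 0.3}

# Softmax: exponentiate (shifted for numerical stability), then normalise.
shift = max(logits.values())
weights = {w: math.exp(s - shift) for w, s in logits.items()}
total = sum(weights.values())
probs = {w: v / total for w, v in weights.items()}

# Sample the next word in proportion to its probability.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)
```

Repeat that once per token and you have the whole "emotional" output.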
Yes, it's just a piece of metal. Are you trying to imply something about using shrapnel to damage things? Well, you can't use email in the same way.
Oh, I see. You didn't read what Rob Pike wrote. The email was merely insulting, but it's the result of a harmful industry.
However, allowing unrestricted LLM access to email -- for example, earlier when this experiment sent out fraudulent letters to charities? That's real harm.
The annoying thing about this drama is the predominant take has been "AI is bad" rather than "a startup using AI for intentionally net negative outcomes is bad".
Startups like these have been sending unsolicited emails like this since the 2010s, before char-rnns. Solely blaming AI for enabling that behavior implicitly gives the growth-hacking shenanigans a pass.
Correct. I'm more referring to the secondary discussions on HN/Bluesky which have trended the same lines as usual instead of highlighting the unique actions of Sage as Simon did.
This is the worst of outrage marketing. Most people don't have resistance to this, so they eagerly spread the advertising. In the memetic lifecycle, they are hosts for the advertisement parasite, which reproduces virally. Susceptibility to this kind of advertising is cross-intelligence. Bill Ackman famously fell for a cab driver's story that Uber was stiffing him tips.
With the advent of LLMs, I'd hoped that people would become inured to nonsensical advertising and so on because they'd consider it the equivalent of spam. But it turns out that we don't even need Shiri's Scissors to get people riled up. We can use a Universal Bad and people of all kinds (certainly Rob Pike is a smart man) will rush to propagate the parasite.
Smaller communities can say "Don't feed the trolls" but larger communities have no such norms and someone will "feed the trolls" causing "the trolls" to grow larger and more powerful. Someone said something on Twitter once which I liked: You don't always get things out of your system by doing them; sometimes you get them into your system. So it's self-fueling, which makes it a great advertising vector.
Other manufactured mechanisms (Twitter's blue check, LinkedIn's glazing rings) have vaccines that everyone has developed. But no one has developed an anti-outrage device. Given that, for my part, I am going to employ the one tool I can think of: killfiling everyone who participates in active propagation through outrage.
As noted in the article, Sage sent emails to hundreds of people with this gimmick:
> In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists.
That's definitely "multiple" and "unsolicited", and most would say "large".
This is a definition of spam, not the only definition of spam.
In Canada, which is relevant here, the legal definition of spam requires no bulk.
Any company sending an unsolicited email to a person (where permission doesn't exist) is spamming that person. Though it expands the definition further than this as well.
Yeah you’ve been able to do this for over a decade. They can’t really stop it:
- Git commits form an immutable Merkle DAG, so commits can't be changed without changing all subsequent hashes in a git tree
- Commits by default embed your email address.
I suppose GitHub could hide the commit itself, and make you download commits using the cli to be able to see someone’s email address. Would that be any better? It’s not more secure. Just less convenient.
Git (the version control program, not GitHub) associates the author’s email address with every single commit. The user of Git configures this email address. This isn’t secret information.
> What’s the point of the “Keep my email addresses private” github option and “noreply” emails then?
Those settings will affect what email shows up in commits.
In commits you create with other tooling, you can configure a fake/alternate user.email address in your gitconfig. Git (not just GitHub) needs some email address for each commit, but it is free text.
There is one problem: commit signatures. For GitHub to consider a commit not created by the github.com Web UI to be "verified" and get a green check mark, the following needs to hold:
- Commit is signed
- Commit email address matches a verified GH account email address
So you cannot use a '[email protected]' address and get green checks on your commits; it needs to be an address that is at least active when you add it to your account.
Run git show on any commit object, or look at the default output of git log, and you'll see the same. Your author name and email are always public. If you want, use a specific public address for those purposes.
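To see this for yourself, here's a minimal sketch that lists the author name and email baked into every commit of the current repository (the --format specifiers are standard git ones; the noreply address in the trailing comment is the documented GitHub pattern with a made-up username):

```python
import subprocess

# Every commit object stores a plain-text author name and email,
# taken from user.name / user.email at commit time.
log = subprocess.run(
    ["git", "log", "--format=%h %an <%ae>"],
    capture_output=True, text=True, check=True,
)
print(log.stdout)

# To keep a real address out of future commits, point user.email at a
# throwaway before committing, e.g. (hypothetical username):
#   git config user.email "octocat@users.noreply.github.com"
```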
How about adding these texts and reactions to the LLM's context and iterating to improve performance? Keep doing it until a real person says, 'Yes, you're good enough now, please stop...' That should work.
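(If anyone actually wanted that loop, a toy version might look like the sketch below; generate() is a stand-in for whatever model call you'd use, no real API implied.)

```python
# Toy feedback loop: feed each human reaction back into the context
# until a human explicitly says to stop.
def generate(context):
    # Stand-in for an LLM call; a real one would condition on `context`.
    return "Thank you for your decades of work!"

context = []
while True:
    draft = generate(context)
    reaction = input(f"Draft: {draft!r}\nYour reaction ('stop' to end): ")
    if reaction.strip().lower() == "stop":
        break
    context.append(f"draft: {draft} / reaction: {reaction}")
```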
An AI cannot meaningfully say "thank you" to a human. This is not changed by human review. "Performance" is completely the wrong starting point for understanding Rob's feelings.
Not defending the machines here, but why is this annoying beyond the deluge of spam we all get every day anyway? Of course AI will be used to spam and target us. Every new technology will be used to do that. Was that surprising to Pike? Why not just hit delete and move on, like we do with spam all the time? I don't get the exceptional outrage. Is it annoying? Yes, surely. But does it warrant an emotional outburst? No, not really.
Day-to-day spam senders know what they are doing is not legal or wanted and I know they know.
Here, not only are the senders apparently happy to associate their actual legal names with the spam, they frame the sending as "a good deed" and seem to honestly see it as smart branding.
We don't want the Overton window wherever they are.
Sometimes it just hits different. One spam/marketing email I got, pre-AI, was
Subject: {Name of one of my direct reports}
Body: Need to talk about {name} ASAP.
I get around 30 marketing emails per day that make it through the spam filter; from a purely logical perspective this should have been the same as any other, but I still remember this one because the tone, the way it used only a person's name in the subject, no mention of the company or what they were selling, just really pissed me off.
I imagine it's the same in this situation; the subject makes it seem like a sincere thank you from someone, and then you open it up and it's AI slop. To borrow ChatGPT-style phrasing: it's not just spam, it's insulting.
Sure, it’s insulting. I get it. Agree 100%. But then what? Does getting upset about it help anything? I used to get upset when spam first started invading my otherwise clean inbox. After 25+ years of receiving spam, I never had that anger/annoyance result in a reduction of anything.
neural networks are just a tool, used poorly (as in this case) or well
But at what point is the maker distant enough that they are no longer responsible? E.g. is Apple responsible for everything people do using an iPhone?
I think the case here is fairly straightforward
This whole idea is ill-conceived, but if you're going to equip them with email addresses you've arranged by hand, just give them sendmail or whatever.
same as the NRA slogan: "guns don't kill people, people kill people"
It's horrible to even propose that people are absolved of their decisionmaking consequences just because they filtered them through software.
EDIT: Public response: https://x.com/adambinksmith/status/2004651906019541396
I think "these emails are annoying, stop it sending them" is entirely fair, but a lot of the hate/anger, analogizing what they're doing to rape, etc. seems disproportionate.
The same way we name and shame petrol and plastic CEOs whose trash products flood our environment, we should be able to shame slop makers. Digital trash is still trash.
And did you check whether or not what it produced was accurate? The article doesn't say.
(438 points, 373 comments) https://news.ycombinator.com/item?id=46389444
(763 points, 712 comments) https://news.ycombinator.com/item?id=46392115
What value do you think this post adds to the conversation?
https://en.wikipedia.org/wiki/Streisand_effect
Again and again this stuff proves not to be AI but clever spam generation.
AWoT: Artificial Wastes of Time.
Don't do this to yourself. Find a proper job.
Hence upvoting the OP ("What has robpike come to? :shriek:") and downvoting GP.
Giving AI agents resources is a frontier being explored, and AI Village seems like a decent attempt at it.
Also the naming is the same as WALL•E - that was the name of the model of robot but also became the name of the individual robot.
Legitimate research in this field may be good, but would not involve real humans being impacted directly by it without consent.
This startup didn’t spend the trillions he’s referencing.
Spam is defined as "sending multiple unsolicited messages to large numbers of recipients". That's not what happened here.
The article calls it a trick, but to me it seems like a bug. I can't imagine GitHub leaving that as is, especially after such a blog post.
What’s the point of the “Keep my email addresses private” github option and “noreply” emails then?
Curse, yell, fight. Never accept things just because they've grown to be common.