I’ve encountered an even more nightmarish version of this recently: AI-generated tickets. Basically dumping the output of “write a detailed product spec for a clinical trial data collection pipeline” into a Jira ticket and handing it off.
It doesn’t match any of our internal product design and adds tons of extraneous features. When I brought this up with said PM, they basically responded that these inaccuracies should just be brought up in the sprint review and by “partnering” with the engineering team. AI etiquette is something we’ll all have to learn in the coming years.
This.
In my case I do write tickets with an LLM from time to time, but it's always after a long exploratory session with Claude Code, where I go back and forth checking possibilities and gathering data, and then I tell it to create a ticket from the info gathered so far. Even then I tend to edit it, because I don't like the style or it adds some useless data that I want to remove.
As someone who maintains open source projects, I can assure you that this has been a problem for about a year or so. But I reckon it took a bit longer for people to start doing this at work as well.
Yes. My Jira tickets used to be almost empty, but all of it was useful info. Now, my Jira tickets are way too long. The amount of useful info has also gone down.
Talk about an AI induced productivity increase ...
Same thing with PR descriptions. The signal-to-noise ratio has completely flipped. Before, a short PR description meant the dev was lazy. Now, a long detailed one might just mean they hit generate description and didn't even read it. The length went up, the usefulness went down, and the reader has no way to tell which kind they're looking at.
AI etiquette is a great term. AI is useful in general but some patterns of AI usage are annoying. Especially if the other side spent 10 seconds on something and expects you to treat it seriously.
Currently it's a bit of a wild west, but eventually we'll need to figure out the correct set of rules of how to use AI.
I've heard a great thing recently, more or less: if all you're doing is writing prompts, maybe you're not needed anymore. Stand behind the intent, own the output and understand it, and then maybe it makes sense. A sloppy prompt plus copy/paste doesn't bring value and will be treated as such. As with anything in life, outcome is usually proportional to the effort put in.
As a senior engineer, I am getting extremely tired of reviewing AI slop. At work I recently decided that I just had to build a POC project from scratch. I spent 2 weeks reviewing the code, logging the process, and building toy examples to make my argument clear that some (actually most) parts were not working.
The funny thing is that I know my manager got this “working” within a week with Claude. I had to spend 2 weeks with 4 JIRA tasks, many commits for toy examples, and three reports.
I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.
I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.
And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.
> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.
Isn't it obvious? If I'd wanted to see an AI response to my question, I'd have asked it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I was served a Big Mac at a sit-down restaurant I'd be properly upset.
> If I'm asking humans, I want to see human responses
I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y"
Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're being worthy of someone else's time and attention.
And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).
> It shouldn't matter as long as it addresses your ask, yet it does.
If the LLM output is concise and efficient I don’t actually care that it’s LLM output.
My problem is that much of the LLM prose feels like someone took their half-baked idea and asked the LLM to put a veneer of quality writing on top of it. Then you waste your time reading it to parse out the half-baked idea hiding among the wall of text.
If a person has a shitty idea that sounds good, they start writing about it. If they exercise some care in their writing, the act of writing itself is enough to make them realize that their idea is shitty.
By the way, it happens to me all the time! Even just on HN, I’ve bailed halfway through writing a comment because I realized that I didn’t know what I was talking about, lol.
But an LLM will gladly take that shitty idea and expand it into a very plausible article/message/post that seems reasonable if you don’t think very critically about it. And it’ll be done with such a seemingly high level of care that you’d assume any human author would’ve been fact-checking themselves the whole time.
So it forces the reader to think even more critically, rather than letting our subconscious try to judge authenticity of the writer through the language they use.
For example, when someone says “my WiFi is broken” while referring to the fact that their computer is dead, we can quickly judge them as “not an expert at computers”. But if they say “my M.2 drive has gone bad”, we inherently assume they have some understanding. When the first person uses LLMs to write, they sound as informed as the second person, even if they are completely clueless and wrong.
In my case, it's because it doesn't address my ask, which is why I didn't ask an ai in the first place. The only person I know who does sloppypasta is my brother in law. I know he means well, but when I ask his opinion I want the perspective of someone in his demographic. If a generic ai response met my needs, I wouldn't be asking him.
> It shouldn't matter as long as it addresses your ask
But it doesn't? I'm more than capable of using Google and chatgpt myself. If I was looking for a machine generated answer to my question I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans that have subjective experiences that an LLM cannot.
Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.
As an example of this, I am currently comparing two different models of Android e-readers from a Chinese brand where the tech specs are all published but there aren't a lot of good comparative reviews. Plus, specs like battery capacity are close to the same mAh, but for e-readers especially, Android optimization/drivers/etc. make a gigantic difference to real battery life.
So I have been Googling for "Reader X vs Reader Y review"(/comparison/etc) hoping to find Reddit comments or non-spam blog posts from people who actually own both to compare screen and battery life. I found a reddit thread comparing them directly and lo and behold the first comment is someone saying "I own both but honestly you could just ask ChatGPT for this". Fortunately a couple other people responded...
When I ask Gemini or ChatGPT, all I get is regurgitation of the tech specs (that are all mostly identical) plus summarized SEO spam reviews (that were probably written by another LLM based on those same tech specs) and it's totally unhelpful. So for this, I absolutely do NOT want an OpenClaw bot to respond as if they've physically used the devices and it would be actively enraging to learn a "helpful" comment "answering" the question was actually just an LLM impersonator.
I think it is reasonable, yes, but I don’t think it’s ever been reasonable to expect reasonableness on the internet. We have a difficult enough time showing each other decency.
> shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y”
The people copy-pasting slop almost never excerpt the relevant response. As a result, you get non-concise text you have to triple check. This is functionally useless to the point of being fine to skip.
>I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y". Because it's probably not actually about the content but the sense of connection.
It's also about the content. Generic slop I can get on demand from an LLM myself, vs a novel insight.
> People want to feel like they're connecting to people. That they're being worthy of someone else's time and attention
They are achieving the exact opposite. I don't connect with the person who sends me slop. And they send me content that is a waste of my time and attention, because I have to vet it. Why would I trust someone - how can I ever connect with them - when the only thing I know about them is they take shortcuts?
I am really into this approach of AI being used as a user-agent.
In particular, I've been thinking a lot about educational content, and what I'd love to ask educational providers for is not AI-generated content, but rather carefully human-built curricula offered in a structured manner, which my own AI could then use to create dynamic content for me.
> The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was.
Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points.
Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time.
Everybody wants to use LLMs to produce things and absolutely nobody wants to consume the things that LLMs produce and this is the fundamental reason this is all going to collapse unless we find a way for producers to pay consumers to consume their LLM output.
Gotta disagree. I've found several great new YouTube channels that clearly use AI for everything but the script writing. I assume it's an enthusiastic and smart niche expert who lacks the charisma to make videos in addition to doing the research. I'm very glad AI is filling in those people's weak spots.
How would you know it’s an enthusiastic and smart expert creating the content you’re consuming? Do you have the subject matter expertise to judge that?
The odds are far higher it’s somebody who knows very little about anything but wants to make money from the gullible.
>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
The problem is that getting an AI to answer a question is trivial. If I wanted to know what an AI has to say about the topic, I would just ask it myself. Sending AI output has, as the author writes, the same connotation as sending a LMGTFY link. It does not provide me any value at all; I know how to write a question to an AI, just as I know how to use Google.
>I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI.
Which is irrelevant. TFA is talking about personal communication (and the examples are from a business setting).
And their concern is not the mere quality or lack thereof, but also its origin, and this is something new.
>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
No, many of us hate "our AI" content too, and wouldn't impose it on other people, the same way we wouldn't fling shit at them.
I am sorry, but in what way is everyone letting the "We've been creating bait content for a long time" comment slide?
Did you even read the article? It is about person-to-person interactions. The three examples were:
* Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AIslop)
* Someone being asked for their expertise and responding (but it's generic and misfitting AIslop)
* Someone comes with a problem thesis looking for help (but it's generic and misfitting AIslop)
The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a google link.
The first one would have been impossible before AI, because the person would have had to write the unhelpful response themselves, and they wouldn't have found the words at that length. You could ignore them or pick it apart easily. The last one would have been impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message.
What kind of workplace hellscape do you work in where people posting low-effort bait on SLACK was the norm? The premise of this reply is entirely nonsensical.
I don't think that "it's more of the same" is a good way to think about it. The internet contained a lot of low-quality content, but even low-quality content used to be fairly expensive and time-consuming to produce. Further, you could immediately discern bottom-of-the-barrel content-farmed nonsense by the writing style alone. Now, LLMs make it practically free to generate unlimited amounts of slop that drowns out human-written stuff, and they can imitate the style hints we used to depend on for quick screening.
Yet how are the alternative ways of thinking about it better? Spending your time angry about what others can do? In any era, that’s a poor life philosophy.
The problem is the same as it has always been: figure out how to use your time and attention effectively.
Is it possible to be critical without being angry? Are the only options here misplaced ire or total quiescent fatalism? Does the site here even seem excessively angry?
Strategic, directed anger is an important component of using your time effectively. It sends a clear signal that certain kinds of behavior are unacceptable and people who'd like continued access to your time had best not engage in them. You shouldn't go around yelling at people every time you get a bit frustrated, but you should and I do express anger when someone signs their name to LLM-generated Slack responses.
I acknowledge that those likely to copypaste slop aren't likely to find this article themselves, but I built the page to be shared or guide discussions around etiquette like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors.
I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.
(the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text)
Good point: RTFM and a wall of slop are two ways of telling someone that responding to them is not worth your time, and both are ruder and more time-consuming than simply saying nothing. Explaining the culture of RTFM, i.e. "if there was any way you could possibly have found the answer otherwise, you should never have asked the question", to non-tech friends usually results in disbelief.
But the slop-wall is even worse, as it wastes the questioner's time in figuring out that they're just getting slop. At least RTFM is efficient.
I think you will find you will get farther by offloading this unpleasantness to an AI and open sourcing it rather than teaching etiquette to the internet, a place not known for its decency.
There’s a certain very satisfying force to turning something into a static website that you can point people at. The Internet equivalent of “don’t make me tap the sign”; especially in an era of AI-slop.
Talking about bait, good job getting 42 responses on hacker news! Your opinions are controversial enough to draw out people who need to correct them, yet genuine enough to not be passed off as a troll and downvoted.
> The internet was not a bastion of high quality content or discourse pre-AI.
I have read thousands upon thousands of pages of AI-related discourse, watched hundreds of videos since 2022, maybe even a thousand now on it. NEVER at any point in time did people opine for the "high quality" internet of before. They opined for the imperfect HUMAN internet of before. We are now seeing once pristine, curated corners of the internet being infected with sloppypasta.
This is quite a broad brush to paint the internet with. It's like saying The Earth is not a bastion of warzones/peaceful places to live. That is HIGHLY dependent on location.
Even before AI, the human social internet was loaded with bots and disingenuous actors. You want the imperfect human internet that is also pristine and curated. I've been socializing on the internet since 1994, and I feel fairly confident in sharing that this never existed, except in nostalgia.
If that's what you're pining for, you're going to have to find a highly protected part of the internet that is walled off from untrusted actors. However, that's always been the solution, and AI doesn't change that.
Oh, I 100% acknowledge the site itself was LLM generated. I'm not a web designer, so I needed a lot of help making a visually appealing site, even if that design language is at this point LLM trope.
However, the essay and the guidelines were all human-written!
by "human-written" do you mean you just used LLM to help the grammar and spelling and formatting and to think up some use cases but its entirely "my own words"?
Hits you in the first row of buttons with the classic gen-AI slop "Why It Matters".
So trace* through ninerealmlabs and ahgraber and sure enough:
I used AI:
- to help build this website.
- to help generate examples of sloppypasta based on my original guidance
- to proofread and review the human-written copy to provide a critical review
- to improve my arguments and ensure clarity.
Kudos for being forthright.
---
* Turns out clicking "Open Source" bottom right gets there faster!
I talked myself in circles on that "why it matters" heading but ultimately couldn't come up with a better one. "The problem" has similar ai-slop feel, and "the rant" // "the rules" didn't really evoke the feeling I wanted.
I would suggest following the rules of the submission that has your username attached to it, e.g. not passing off slop as your own words. That is the grand irony this sub-thread is about. We see this so often. There's even a new HN guideline about it. Most do not care; they just want the answers given to them. The defences that you might give are the same as the defences from those who use it for their coding jobs.
It's not difficult to create a visually appealing website. You don't have to be a designer. Many of us here aren't designers and have beautiful sites. Have you tried doing it yourself?
What bullshit essentially misrepresents is neither the state of affairs to which it refers nor the beliefs of the speaker concerning that state of affairs. Those are what lies misrepresent, by virtue of being false. Since bullshit need not be false, it differs from lies in its misrepresentational intent. The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.
Also related - Gish-gallop
During a typical Gish gallop, the galloper confronts an opponent with a rapid series of specious arguments, half-truths, misrepresentations, and outright lies, making it impossible for the opponent to refute all of them within the format of the debate.[2] Each point raised by the Gish galloper takes considerably longer to refute than to assert. The technique wastes an opponent's time and may cast doubt on the opponent's debating ability for an audience unfamiliar with the technique, especially if no independent fact-checking is involved, or if the audience has limited knowledge of the topics.[3]
This article's proposal for stopping sloppypasta is to convince the people who do it to stop doing it, but I am more interested in what someone who receives sloppypasta can do.
How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?
I've never done that so far, because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.
Yeah it's tough. I tend to take the path of just responding with one line to their wall of text. What are they going to do, send a second wall of text?
If I tell someone literally "What value do you have if you're just acting as a pipe to the AI?", I'm pretty sure my manager will schedule a quick 1:1 to ask me why I'm telling peers that they have no value.
Your manager should then have a meeting with those coworkers too, or their manager(s). Depending on whether the company's leadership position is "AI at all costs", they may reconsider if they realise blind trust in AI is creating problems.
I wrote this intending it to be directly sharable and/or to provide a framework for how to have that discussion, kind of like a nohello.net or dontasktoask.com.
I've found success having sidebar conversations with the colleague (e.g., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior.
It may also be useful to see if you can propose or contribute to a broader policy on appropriate AI use/contribution with AI, and leverage that policy as the conversation justification?
I've had some luck pointing out where the AI is wrong in their sloppypasta, delicate as one can. Avoiding shame or embarrassment can be a powerful motivator.
The most interesting incident for me is having someone take our Discourse thread, paste it into AI to validate their feelings being hurt (it took a follow-up prompt to go full sycophancy), and then post the response back, which lambasted me. The mods handled that one before I was aware, but I then did the same thing, giving different prompts and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.
Ooh, I saw a very similar situation. User went on AI and asked "Which user was disrespectful first" to dunk on another.
The person being targeted just prompted the same AI with "Which user has thin skin" and instantly the AI turned on the other person. Then the moderators got involved and told the first guy to stop using AI as a genital pleaser.
Talking with middle managers in Fortune 100 companies, I often get "send us the documents so we can make a decision". It used to be that we carefully wrote things and no one would read them. Now we send 3000 pages of AI crap to make sure no one reads it, and then we get approved to start working. Not great, but the old situation was worse: no one would read anything, and they'd ask you to read it to them on a conference call with 36 people. Now that does not happen anymore.
A lot of middle management is reading documents from those below them, giving feedback to improve the clarity of the doc, and then providing their own thoughts and comments on it.
This is one role that I can't tell if it's completely useless in an AI powered world, or if that's basically what we all end up doing, reviewing and commenting on the work versus actually making it.
Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and have written this rant to explain why it's rude, with some guidelines for what to do instead.
sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.
I'm glad that the term "slop" really caught on. It's such a succinct way to describe the phenomenon, and at the same time it's so malleable. Sloppypasta, Microslop, Workslop, Ensloppification, etc.
I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe with the oracle, the latter is people tired of having to look something up for others.
I would say LMAAFY is like LMGTFY, whereas sloppypasta is more like pasting a list of search results without vetting them. That is, there are two phases to this phenomenon: query and results.
When you must remind someone to “think” when using a technology, because the path of least resistance is to not think… it feels like the technology isn’t really helping.
They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.
They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.
Dealing with people who copy-paste unread slop into emails is probably not a huge issue for most of us. There's much more slop out there masquerading as blog posts, HN comments, etc. It's not a huge issue yet, but there have definitely been times when I found myself midway through reading something and realized it's just an LLM wasting my time.
I'm starting to be reminded of Neal Stephenson's "Diamond Age". He described a future in which people walked around with a nearly invisible defensive army of nanobots surrounding them whose job it was to counter the offensive nanobot swarms of their enemies. Characters in this novel would go about their business while an unseen nanobot war took place in the air around them.
We're rapidly reaching the point where we will need AI to defend us from AI. i.e. We will soon need agents filtering all that we read and removing slop, just so we can preserve our time and attention for things that are human and real.
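If you wanted to prototype such a filtering agent today, even a crude phrase-based heuristic catches a surprising amount of unedited LLM output. A minimal sketch in Python; the marker list and the 0.4 cutoff are illustrative guesses on my part, not a vetted classifier:

    import re

    # Phrases that show up disproportionately in unedited LLM output.
    # Both this list and the threshold below are guesses for illustration.
    SLOP_MARKERS = [
        r"\bdelve\b",
        r"\bit'?s not (?:just )?\w+[^.]{0,40}, it'?s\b",  # "it's not X, it's Y"
        r"\bever-evolving\b",
        r"\btapestry\b",
        r"\bwhy it matters\b",
    ]

    def slop_score(text: str) -> float:
        """Fraction of marker patterns that appear in the text."""
        hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in SLOP_MARKERS)
        return hits / len(SLOP_MARKERS)

    def filter_feed(messages: list[str], threshold: float = 0.4) -> list[str]:
        """Drop messages whose slop score reaches the threshold."""
        return [m for m in messages if slop_score(m) < threshold]

    feed = [
        "The deploy script breaks on macOS 14; repro steps attached.",
        "Let's delve into the ever-evolving tapestry of synergy. "
        "It's not just a feature, it's a paradigm.",
    ]
    print(filter_feed(feed))  # keeps only the first message

A real defensive agent would obviously need a model behind it rather than a phrase list, but the shape is the same: score, threshold, suppress.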
I can understand why various unscrupulous entities and individuals would use AI to generate "slop" content to drive clicks/karma farm etc. But it's baffling to me when I ask someone a question and they respond saying they asked ChatGPT/Claude/etc. and then just share the full response. They seem to genuinely think this is something I wanted them to do.
I've been thinking about this, what if AI runs autonomously and finds things to criticise that are factually incorrect?
It is easy to do in social media because the context is global but in enterprises it is a bit harder.
Something like "flagged as very likely untrue by AI" is something I would really appreciate.
I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
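For what it's worth, the confidence gating is the easy part. A minimal sketch of what I mean, with the model call stubbed out (judge_claim here is a placeholder for however you'd actually prompt an LLM, and the 0.95 cutoff is arbitrary):

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        likely_untrue: bool
        confidence: float  # judge's self-reported confidence, 0..1

    def judge_claim(claim: str) -> Verdict:
        # Placeholder: a real system would prompt an LLM (ideally several,
        # or one with retrieval) to fact-check the claim and report confidence.
        known_false = {"the earth is flat"}
        return Verdict(claim.lower().strip(" .!") in known_false, 0.99)

    def annotate(claim: str, threshold: float = 0.95) -> str:
        # Only surface the flag when confidence is very high, so the
        # annotation stays rare enough to be trusted.
        v = judge_claim(claim)
        if v.likely_untrue and v.confidence >= threshold:
            return claim + "  [flagged as very likely untrue by AI]"
        return claim

    print(annotate("The Earth is flat."))  # gets flagged
    print(annotate("Water boils at 100C at sea level."))  # passes through

The hard part is making that confidence number mean something; self-reported LLM confidence is notoriously unreliable.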
At the bottom, it references some stuff that came before the widespread use of LLMs. One of them is no hello [0]. I disagree with no hello. If somebody wants to send a message that just says hello, they should go ahead and do that. The way that language works, when someone thinks of something to say, it often comes all at once; the only question is whether to say it or not, and that is the filtering stage. Now, I'm not one to begin my conversations with just a message of "hello" or "hi" more than the average person. I think I do it less than the average person. Yet I was still taken aback by this request. I don't think that people's social instincts should be put aside so easily.
As for "Stop Sloppypasta", it doesn't feel like the content is AI-generated to me, but the presentation of it does. I don't know whether that changes my opinion of the whole thing or just of the presentation. As for the advice in it, it seems good, but it also seems a little brittle, because people can use one LLM session to review things generated in a different LLM session before sending, with some success, and this will only improve, so it's a moving target.
[0] https://nohello.net/
I don’t mind this so much if they don’t know anything about the subject themselves. What bothers me is when they then paste it at domain experts as if it makes them qualified to talk.
If I was a bot I would probably write some perfectly punctuated garbage about how your site is a crucial testament to the ever evolving digital landscape or use big words to delve into the multifaceted tapestry of internet ethics. But honestly your website about stopping sloppy pasta is just so dumb and a complete waste of time. Your acting like somebody writing a fake story with ai is the end of the world or something. Literaly nobody cares if some random article was written by a computer so maybe stop pretending your the heroic saviors of the web. Get a real hobby and stop whining about people using chat bots because its really not that deep bro.
- now the fun part: which AI did I use to write the above?
>"I asked Claude about this! Here's what it said:"
>"ChatGPT says:"
My policy suggestion is that we need to completely allow people quoting ChatGPT. That's legit; it's not a bannable offense, not against any policy.
The author wastes time talking about this case, and even does it first before talking about the much worse case:
>"The sender shares AI output as their own work, with no indication a chatbot wrote it."
This is 100 times worse, and is objective rather than subjective. If the sender admits it's AI when confronted, it kills their reputation; if they don't admit it and it turns out it is AI, it's fraud, a fireable offense.
Putting these 2 categories of AI use together wastes breath and conflates the two; the message will not be clear at all.
What's worse, such a policy actually has the effect of increasing undisclosed AI use. This is a specific case of the general case: banning all AI usage increases unregulated AI usage. Everyone who prohibited employees from using AI in 2024 knows that what you get is undisclosed AI use or content you are not sure is AI written or not. If you give a specific way to use AI, you can add features like auditability, supply chain control, and you can remove any outs from employees and users that do not comply with the policy.
Your ellipsis leaves out the answer to your question. The paragraph is contrasting "ChatGPT says" which is annoying, but transparent (as LMGTFY), with "sloppypasta" which includes no such indicator.
Admittedly, the paragraph is somewhat confusingly written. Also probably written by an LLM.
LLMs are tools. For me (wot had a C64 as one of my first computers) they are seriously close to magic but I understand what a "next token guesser" means.
This reminds me of why I despise certain works/styles of art and artists. I feel cheated if I'm made to spend more time and effort interpreting a work of art than the creator put into it themselves.
I notice that your comment history is all rapid-fire three-paragraph LLM responses. You do appear knowledgeable and respond quickly, but I've just dumped 10 minutes of my life into verifying, parsing, and filtering through your responses.
I can't tell whether you're a person who thought about something. Therefore, I can't tell whether, for example, https://news.ycombinator.com/item?id=47393311 is an analysis I should take seriously (as I might, if it were spoken from experience) or just Markov-chain, Reddit-trained hypothetical fluff.
How can we increase the friction to presumptively exclude you, but provide accommodation if, for example, you're more comfortable in your native language and using the LLM mainly to bring your English writing to a level consistent with your personal expertise?
> I notice that your comment history is all rapid-fire three-paragraph LLM responses
I looked after you said this, and those are all from today, in the last hour. And it is a stark change from their (very short) comment history.
In particular, these two comments are extremely suspicious [0,1]. I think even if they're not LLM generated, it highlights something likely wrong, which paseante themselves states!
>> a long, detailed response in Slack implied the person had spent time thinking
There's 2 minutes between these comments, on different threads (I also noticed they did similar things in a few threads as I typed this out). While the timing is reasonable for the amount of words written, it does not seem adequate for reading the article and/or other comments. Personally, I find that kind of behavior rude, as it enshittifies the social space the rest of us are in [2].
[0] https://news.ycombinator.com/item?id=47392999
[1] https://news.ycombinator.com/item?id=47393012
[2] https://news.ycombinator.com/item?id=47393465
what the hell happened with this account? existed since 2014 and had a few real comments. then all of a sudden, slop deluge (extremely ironic given the thread). if this were to be reddit i would understand - high karma accounts have monetary value. but this is hacker news. genuinely confused as to the motivation here. is the original account owner the one posting these comments?
frankly I'm disappointed in the amount of responses this account is getting on its other comments. i thought this forum was a bit better than average at detecting artificial behaviour. perhaps the internet is already completely dead and i am merely picking thru its bones.
What's interesting is that there are probably people who could spend a year happily working with an AI "coworker" without knowing it was an AI, but then get upset and change their viewpoint after learning the truth.
when a truth is revealed to someone operating under a totally different understanding of a situation, it can be confusing, disorienting and upsetting.
this seems reasonable to me, especially in this transition period where we're navigating ethical and respectful collaboration that involves AI. give people a little grace in this weird new world.
Funny how that works out.
That would give the responder the chance to modify the prompt and perhaps get a better answer from the LLM?
So it does not meet the bare minimum of addressing my ask; the premise of the ask hinges on a discussion with a real person.
Perhaps they did it for the off chance of a good response.
Conversely, if your take is that there's no point being angry and we should just take it in stride, that just emboldens the producers of slop.
I ignore it. But if that isn’t an option, this sort of writing can help you convince someone in power around you it’s okay to ignore it.
(n)amow(?): (not) All my own work ?
Well, cat videos make people happy.
To "opine" is to give an opinion on something.
To "pine" for something is to wish for it, usually in a nostalgic sense.
I get how the two are related and can be confused, especially when you're talking about comments on the web. Just thought I'd clarify.
For me it destroyed the company as an aligned group of people. At the C level, it's just a bazaar of drones throwing AI slop at each other.
https://github.com/ocaml/ocaml/pull/14369#issuecomment-35573...
Happy to take suggestions on this!
LLM usage affects our voices, our selves, our souls and our jobs.
In "your" own words: it's rude.
I'm possibly too jaded / cynical already...
They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...
Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"
You don’t. You keep these arguments handy for ignoring their output until it’s germane.
Pattern rather than person? General team reviews or the like. As long as it's not tech leadership pressing for it...
That's the asymmetry of the problem: Writing with AI delegates the thinking to the reader as well as all the risk for correcting it.
Embrace the tension. Tension is human.
People who previously couldn't put in the effort or quality, are now vomiting tons of slop I'm meant to read and review.
PR descriptions. Documentation. Plans. Etc.
Walls of sprawling text, "relevant files", linked references, unhelpful factoids, subtle inconsistencies and incoherencies.
It's oppressive like 95% humidity on a warm day.
Instead, they'll use an LLM to send a slop response back.
Instant karma!
As for "Stop Sloppypasta", it doesn't feel like the content is AI-generated to me but it feels like the presentation of it is. I don't know whether that changes my opinion of the whole thing or just the presentation. As for the advice in it, it seems good, but it also seems a little bit brittle, because people can use an LLM session to review things generated in a different LLM session before sending with some success, and this will increase and therefore it's a moving target.
0: https://nohello.net/
It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.
When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.
And it shows up the most with people who answer questions in domains they're not a 100% familiar with.
It includes 4 follow-up actions, and I automate check-in messages to see how they are progressing with them.
How?
I'm 55 years old. "slop" is way older than your examples. Try a dictionary, eg: https://dictionary.cambridge.org/dictionary/english/slop
I wish there was a remedy. I block or mute the person when I can.