"Recent large-scale upticks in the use of words like “delve” and “intricate” in certain fields, especially education and academic writing, are attributed to the widespread introduction of LLMs with a chat function, like ChatGPT, that overuses those buzzwords."
OK, but please don't do what pg did a year or so ago and dismiss anyone who wrote "delve" as AI writing. I've been using "delve" in speech for 15+ years. It's just a question of where and how one learns their English.
Funny enough, I avoided the em dash, because everyone was using hyphens and I didn't want forensic linguistics bored. Now that AI got my FBI agents on welfare and em dashed the internet kaputt, now that I am liberated, I can't tell an em dash and hyphen apart, hand–written in my diary.
Genuine question, do you actually use the formal emdash in your writing? AIs are very consistent about using the proper emdash—a double long dash with no spaces around it, whereas humans almost always tend to use a slang version - a single dash with spaces around it. That's because most keyboards don't have an emdash key, and few people even know how to produce an actual emdash.
That's what makes it such a good giveaway. I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing, but I'm guessing that you actually use the human slang for an emdash, which is visually different and easily sets your writing apart as not AI writing!
I have a Compose key binding in https://github.com/kragen/xcompose which maps Compose Space Minus to "—" with two thin spaces on each side of it, because I prefer the spaces. But HN rewrites the thin spaces to regular spaces, so on HN I just use "—" without the spaces, the way ChatGPT does, which is Compose Minus Minus Minus, and is in the standard Compose key bindings (if you map your keyboard to have a Compose key at all).
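For anyone curious what such a binding looks like on paper, here is a minimal `~/.XCompose` sketch. The entries below are modeled on the description above, not copied from the linked repo, so treat the exact thin-space entry as a guess:

```
# ~/.XCompose — pull in the system defaults, then add custom entries
include "%L"

# Standard binding: Compose Minus Minus Minus produces an em dash
<Multi_key> <minus> <minus> <minus> : "—" emdash

# Custom binding described above: Compose Space Minus produces an em dash
# padded with thin spaces (U+2009) on each side
<Multi_key> <space> <minus> : " — "
```

The `include "%L"` line keeps the locale's standard Compose sequences (including the triple-minus one) while letting later lines add or override bindings.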
Same. I feel like I've been using "--" in my online writing for decades now. Take that, LLMs; I used it before it was cool... er... before it was a weak signal that a piece of text was written by an LLM.
Macs and iDevices have been auto-transforming -- into – for well over a decade now, and on the iOS standard keyboard both – and — are just a single long press of the dash key away.
Microsoft Word does this too. I've recently started manually uncorrecting these corrections in my writing because of this new implication that I used ChatGPT.
Still less obvious than the emails I see sent out which contain emojis, so maybe I'm overthinking things...
This is a ridiculous maladaptive behavior. Word has been replacing dashes forever, consequently it has been unintentionally ubiquitous in business writing forever. That this character ever became a heuristic for AI is silly.
The dashes and the auto-capitalization are awful for technical writing. The Ctrl-Z becomes painful and annoying very quickly. Would that Word supported Markdown.
It’s a per-app setting that sometimes needs to be set in the text field’s context menu. There are also a few apps that just don’t integrate with the macOS text system.
I would write it with Option-Shift-hyphen when I used macOS.
On Linux, I use Compose-hyphen-hyphen-hyphen.
I don't use it as often as I used to; but when I was younger, I was enough of a nerd to use it in my writing all the time. And yes, always careful to use it correctly, and not confuse it with an en-dash. Also used to write out proper balanced curly quotes on macOS, before it was done automatically in many places.
I always used to google search "emdash unicode" and copy-paste the character, but I guess now I'll save several minutes from my essay-writing by switching to the lazy single-dash typology that I don't like the look of. Soon I'm going to have to start throwing in speling errors and other things too.
Most people probably don't. I'm an editor who's been working in print for years, so the keyboard shortcut for an em dash is muscle memory for me at this point. I have always been a Chicago Manual of Style person, so I don't place spaces around the em dash. AP style guide users do place a space around it.
> That's because most keyboards don't have an emdash key, and few people even know how to produce an actual emdash.
There’s a subculture effect: this has been trivial on Apple devices for a long time—I’m pretty sure I learned the Shift-Option-hyphen shortcut in the 90s, long before iOS introduced the long-press shortcut—and that’s also been a world disproportionately popular with the kind of people who care about this kind of detail. If you spend time in communities with designers, writers, etc. your sense of what’s common is wildly off the average.
I’ve used “real” em-dashes and en-dashes in my writing generally since I switched to using Macs about 20 years ago. Before that I used them for e.g. academic writing, which I mainly did in LaTeX, but not so often elsewhere.
They’re simple enough key combinations (on a Mac) that I wouldn’t be surprised if I guessed them. I certainly find it confusing to imagine someone who has to write professionally or academically not working out how to type them for those purposes at least.
I have --- set to autocorrect to —. I've been using it in formal writing for 30 years. When we were in high school, we had a "Dash Party" in English class, where we ate Twinkies and learned about the different dashes.
I would argue that LLMs overuse the emdash more because they overuse specific rhetorical devices, e.g. antithesis, than because they are being too correct about punctuation.
> Genuine question, do you actually use the formal emdash in your writing?
"the formal emdash"?
> AIs are very consistent about using the proper emdash—a double long dash with no spaces around it
Setting an em-dash closed is separate from whether you use an em-dash (and an em-dash is exactly what it says, a dash that is the width of the em-width of the font; "double long" is fine, I guess, if you consider the en-dash "single long", but not if, as you seem to, you take the standard width as that of the ASCII hyphen-minus, which is usually considerably narrower than en width in a proportional font.)
But, yes, most people who intentionally use em-dashes are doing so because they care about detail enough that they are also going to set them closed, at least in the uses where that is standard. (There are uses where it is conventional to set them half-closed, but that's not important here.)
> whereas humans almost always tend to use a slang version - a single dash with spaces around it.
That's not an em-dash (and it's not even an approximation of one; using a hyphen-minus set open—possibly doubled—is an approximation of the typographic convention of using an en-dash set open – different style guides prefer that for certain uses for which other guides prefer an em-dash set closed.) But I disagree with your claim that "most humans" who describe themselves as using em-dashes are instead actually just approximating the use of en-dashes set open with the easier-to-type hyphen-minus.
It was an abuse of “slang” to mean “typographic approximation”; now what, exactly, did you think “and it's not even an approximation of one; using a hyphen-minus set open—possibly doubled—is an approximation of the typographic convention of using an en-dash set open” meant?
I do use the hyphen-minus set open sometimes - I'd prefer em-dash closed everywhere, but sometimes it's difficult to type an em-dash, and if I'm having to use a hyphen, a closed hyphen looks very wrong. Similarly, "--" is shorthand for en-dash as you say, and "---" (even closed) looks too busy.
Nah, I've been using them correctly for years. My preference for them came by way of reading a lot in general, but especially from Salinger.
I do them without surrounding spaces, because that's... how you're supposed to use them, and it's also less typing.
They also used to be a really good shibboleth to tell if someone was using a Mac—the key combo on there is easy, and also easy to remember, so Mac users were far more likely than the median to employ em-dashes. It wasn't a sure tell, but it was pretty reliable.
> Genuine question, do you actually use the formal emdash in your writing?
I’m not the person you asked, but I do.
> the proper emdash—a double long dash with no spaces around it
The spaces around it depend on style guide, it is not universal that they should not exist.
> That's because most keyboards don't have an emdash key
Nor do they have keys for proper quotes and apostrophes or interrobangs, yet it doesn’t stop people from using them. The keys don’t need to exist.
> That's what makes it such a good giveaway.
It’s not. It might be one signal but it is far from sufficient.
> I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing
I do use the proper em-dash in my writing—and many other characters too—and my HN history is ample proof. I explained at length in another comment how I insert the characters, plus how simple it is if you use any Apple OS.
I use an en dash with two spaces and did so before AI. But my comments here are from after GPT-4 released, so I guess I can't prove I didn't use AI to write them, although I don't think any AIs use that style. Here is one from February 2024: https://news.ycombinator.com/item?id=39386480. I don't like how "-" looks; it just looks like a minus sign and is too short.
Option + shift + hyphen or hold hyphen on any Mac or iPhone to get an em dash. I use them very frequently, because they're the correct character for the use case.
I've had an AutoHotkey replacement for the proper em dash character for over 10 years, using shorthand characters which trigger the replacement. Whether spaces go around the dash is a difference in style (see: various publications' style guides), though I use the no-spaces style.
Being able to insert self-interjections and such with the correct character would undoubtedly be more widespread if it were more accessible to insert for most.
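A replacement like the one described can be a one-line AutoHotkey hotstring. This is a generic sketch, not the commenter's actual script, and the trigger text ("---") is my choice:

```
; AutoHotkey hotstring: typing "---" is replaced with an em dash immediately.
; The * option fires the replacement without waiting for an ending character,
; and ? allows the trigger to occur in the middle of a word.
:*?:---::—
```

Drop the line into a running AutoHotkey script and the substitution applies system-wide, which is what makes this approach handy outside of Word.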
I for one, use an actual em dash in my writing—or at least I used to. Option + Shift + the hyphen key on Mac. I never knew if I was using it correctly, but I'd learn to copy how I'd seen it used in books and articles and things. Now, I have an incessant paranoia around using it.
It depends on the keyboard layout. Some (US) do have it like you described, but others have the dashes reversed.
Both make sense, to a degree. On the one hand you can argue that the em-dash—being longer—should require an extra key, but on the other hand it has more uses, so it should not need the extra key, to be more accessible.
The same way it learned to act like a personal assistant, even though very few humans are personal assistants.
The LLM is first trained as an extremely large Markov model predicting text scraped from the entire Internet. Ideally, a well-trained such Markov model would use em dashes approximately as frequently as they appear in real texts.
But that model is not the LLM you actually interact with. The LLM you interact with is trained by something called Reinforcement Learning from Human Feedback, which involves people reading, rating, and editing its responses, biasing the outputs and giving the model a "persona".
That persona is the actual LLM you interact with. Since em dash usage was rated highly by the people providing the feedback, the persona learned to use it much more frequently.
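The mechanism described above can be sketched numerically. The KL-regularized RLHF optimum tilts the base model's distribution by an exponential of the learned reward, so a token that raters liked gets boosted far above its corpus frequency. The probabilities and reward values below are made up purely for illustration:

```python
import math

# Hypothetical base-model probabilities for the next token (corpus-like frequencies)
base_p = {"\u2014": 0.01, "-": 0.05, ",": 0.94}

# Hypothetical learned reward: raters tended to prefer responses with em dashes
reward = {"\u2014": 2.0, "-": 0.0, ",": 0.0}

# KL-regularized RLHF optimum: p(t) proportional to base_p(t) * exp(reward(t) / beta)
beta = 1.0
unnorm = {t: p * math.exp(reward[t] / beta) for t, p in base_p.items()}
z = sum(unnorm.values())
tuned_p = {t: v / z for t, v in unnorm.items()}

# The em dash's probability rises roughly sevenfold over its base frequency
print(round(tuned_p["\u2014"] / base_p["\u2014"], 1))
```

The point of the sketch is that the boost is multiplicative on top of the base frequency, so even a modest rater preference inflates a rare token's usage noticeably.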
I've found that people who say this sort of thing rarely change their beliefs, even after being given evidence that they are wrong. The fact is, as numerous people have pointed out, Word and other editors/word processors change '--' to an em-dash. And the "slang version" of an em-dash is "I went to work--but forgot to put on pants", not "I went to work - but forgot to put on pants".
BTW, "humans almost always tend to use" is very poor writing--pick one or the other between "almost always" and "tend to". It wouldn't be a bad thing if LLMs helped increase human literacy, so I don't know why people are so gung ho on identifying AI output based on utterly non-substantive markers like em-dashes. Having an LLM do homework is a bad thing, but that's not what we're talking about. And someone foolishly using the presence of em-dashes to detect LLM output will utterly fail against someone using an editor macro to replace em-dashes with the gawdawful ' - '.
P.S. Why would someone be "suspicious" of people doing their writing in Word and copying it into a comment field? Suspicious of what, exactly? What crime is being committed? The issue here is AI, not people's workflow methods. I have sometimes written lengthy comments in my editor (emacs) which gives me many more editing features, doesn't throw away my work with the wrong keystroke (not a problem at HN but it is at sites where the comment field is a pop-up), and doesn't randomly freeze or slow down radically (this seems to be a problem with my browser).
I reject everything else about that poorly reasoned "suspicious" response as well.
But do you call that latter thing you do “an em-dash”? Do you tell a peer “You should put an em-dash here” when what you mean is a “space en-dash space”?
I type em dashes as double hyphens. Sometimes the software resolves them to a true em dash, but sometimes not.
I never use hyphens where em dashes would be correct.
I do have issues determining when a two-word phrase should or shouldn't be hyphenated. It surely doesn't help that I grew up in a bilingual English/German household, so that my first instinct is often to reject either option, and fully concatenate the two words instead.
(Whether that last comma is appropriate opens a whole other set of punctuation issues ... and yes, I do tend to deliberately misuse ellipses for effect.)
I will use a double hyphen: -- which Microsoft Word and I think most word processors I've used will auto-replace with an em dash. I will sometimes even type the double hyphen to represent an em dash in places where it doesn't get replaced, like internet comments. I'm kind of surprised more people don't use two hyphens as em dash shorthand, to be honest.
IIRC, -- for emdash used to be common on Usenet, which is where I picked it up and still do it. But there's a word for us with usenet experience -- old. (should have been a colon there, but...)
Although I'm sure @stinkbeatle was joking, I should clarify that most LLMs are trained on books and online articles written by professional writers. That's why they tend to have a rich vocabulary and use things like hyphens.
I agree, HN is an amazing community with brilliant people and top quality content, but it's not enough to train an LLM.
Last thing. An LLM is just a tool, it can clean up your writing the same way a photo app can enhance your pictures. It took a while for people to accept that grandma's photos looked professional because they had filters. Same will happen with text. With ChatGPT, anyone can write like a journalist. We're just not used to grandma texting like one, yet :)
Reddit and HN are among the highest quality sources of training text and are probably weighted very heavily as "probably human" in the mainstream models.
Any source of text with huge amounts of automated and community moderation will be better quality than, say, Twitter.
That depends heavily on the subreddits you browse. There absolutely are places with high quality content, though it feels like they are getting sparser and sparser.
Not in that sense; high quality in the sense that there are a lot of actual, real people posting there, and those people tend to come from a pretty diverse set of backgrounds.
Perhaps on the smaller subreddits, but have a look at /r/all on any given day and it's obvious that real people, and diverse backgrounds, it is not. Every single subreddit that goes above a certain activity threshold collapses into the exact same state of astroturfed, mass-produced political slop targeted towards low IQ people.
I often edit things in Word — I have a document that I can alt-tab to and type things. It has spellcheck, etc. that my browser window does not, and I’m not at risk of losing if I refresh or something. Then copy-paste back.
Word converts any - into an em dash based on context. Guess who’s always accused of being a bot?
The thing is, AI learned to use these things because it is good typographical style represented in its training set.
My workaround (well, to be honest, I've always done this: I love a good em dash, they're terrifically satisfying to use, but I'm too lazy to type them) is to use two single dashes--like so.
Dammit -- I use my dashes all the time (though always double them like here). I hope AI didn't ruin this for me.
(I learned to use dashes like this from Philip Dick's writings, of all places, and it stuck. Bet nobody ever thought of looking for writing style in PKD!).
I encountered the TeXbook at a young and impressionable age, and ever since I've used em- and en-dashes a bit more often than a style guide would suggest. Not to mention diaereses, though those haven't been flagged as LLM stigmata yet.
Good. It's a crutch for poorly composed sentences or for prose intending to imitate the affect of poorly composed sentences. There's not a single sentence under the sun that needs an emdash. Commas and parentheses can do it all, and an excess of either is a sign of poorly edited prose.
I don't buy the pro-clanker pro-em dash movement that has come out of nowhere in the past several years.
What's the error? I'd hyphenate "poorly-composed" (most wouldn't these days, but they can go to hell) and I think it's a bit too wordy for what it's communicating, but I don't see what I'd call an actual error.
I would personally avoid writing that "poorly composed sentences" have an "affect"—rather than the writer having or presenting an affect, or the sentences' tone being affected—as I find an implied anthropomorphizing of "sentences" in that usage, which anthropomorphizing isn't serving enough useful purpose, to my eye, that I'd want it in my writing, but I'm not sure I'd call that an error either.
What did you mean?
> Commas and parentheses can do it all, and an excess of either is a sign of poorly edited prose.
This attitude, however, is a disease of modern English literacy.
> There's not a single sentence under the sun that needs an emdash
Sentences "need" very little, but without style and personality, writing becomes very boring. I suppose simplicity without any affectation works for raw communication of plain technical facts, but there's more to writing than that.
I think it's easier to just stop using em dashes, as much as I like them. People have latched on to this because it works a good amount of the time, so I don't think they will stop. I don't even think they should stop, because, well, it works a good amount of the time.
For both of these examples who the fuck cares. I just evaluate AI writing people send me the same as any writing.
If they’re using AI to speed things up and deliver really clear and on point documents faster then great. If they can’t stand behind what they’re saying I will call them out.
I get AI written stuff from team members all the time. When it’s bad and is a waste of my time I just hit reply and say don’t do this.
But I’ve trained many people to use AI effectively and often with some help they can produce way better SOPs or client memos or whatever else.
It’s just a tool. It’s like getting mad that someone used spell check. Which, by the way, people actually used to argue about back in the '80s. Oh no, we killed spelling bees, what a lost tradition.
This conversation has been going on as long as I’ve been using tech which is about 4 decades.
Use of spell check is a net positive but it has led to some widespread errors, like people (and widely read publications) misspelling "led" as "lead" (pet peeve).
But yes, it's absurd to complain about LLMs resulting in increased literacy.
You're making the point that OP never actually uses the em dash, by surveying their HN comments, in order to defend the notion that no one actually used em dashes prior to their proliferation by LLMs? Or do you mean something else?
You can find an em dash in my comment history if you're curious. Despite what could be said about poor sample selection, consider the imbalance of the argument being made: the frequency of em dash use is disproportionate to the suspicion thrust upon a sample of writing. I.e., a single em dash is suspicious, regardless of how many times it might show up. Therefore, it's more likely that someone who uses em dashes—even if only rarely—will self-select to respond to a thread like this and feel compelled to defend themselves.
Haha yep. I never saw a single person use these in internet comments pre-2023. Plenty of hyphens to simulate it - like this - but not actual em dashes. No matter how many people swear up and down that they're so important.
I hang out in the chess.com daily puzzle chat, which has a large contingent of children, so I see quite a few accusations like "you're all a bunch of nerds". My standard response is "nerd" is what stupid people call smart people.
In your software ecosystem, and among the people you've had this discussion with, they don't, but the computer magically replaces - with — in various places, so obviously some people do, in some places. And then you have nerds who still put two spaces after a period, or know the difference between ... and …. Do they not exist either? The only reason you think they overwhelmingly don't is your biased lived experience.
My company currently has a guideline that includes “therefore” and similar words as an example of literary language we should avoid using, as it makes the reader think it’s AI.
It really made me uneasy, to think that formal communication might start getting side looks.
What’s worse is that this window might shift as writing becomes less formal and new material is included in the training corpus. By 2035 any language above a first grade reading level will be grounds for AI suspicion.
I sat in a meeting with professionals where one person asked for the presentation to be reworded at a fifth grade reading level. He said it with a straight face.
By 2035 we will live in a world full of TikTok videos where the ability to write will seem as absurd to people as Not Sure did in Idiocracy… this is hyperbole, of course… but you know what I mean.
Whenever there are commonly agreed upon and known tell-tale signs of AI writing, the model creators can just retrain to eliminate those cues. On an individual level, you can also try to put it in your personalization prompt what turns of phrase to avoid (but central retraining is better).
This will be a cat and mouse game. Content factories will want models that don't create suspicious output, and the reading public will develop new heuristics to detect it. But it will be a shifting landscape. Currently, informal writing is rare in AI generation because most people ask models to improve their formulations, with more sophisticated vocabulary etc. Often non-native speakers, who then don't exactly notice the over-pompousness, just that it looks to them like good writing.
Usually there are also deeper cues, closer to the content's tone. AI writing often lacks the sharp edge, when you unapologetically put a thought there on the table. The models are more weasely, conflict-avoidant and hold a kind of averaged, blurred millennial Reddit-brained value system.
Words like that were banned in my English classes for being empty verbiage. It's a good policy even if it seems like a silly purpose. "Therefore" is clumsy and heavy handed in most settings.
I’m curious about this (I’m not a native speaker). What alternative would you use when you want to emphasize a cause-effect relationship, in an engineering context for example?
“Most times A happens before B, but this order is not guaranteed. Therefore, there is a possibility of {whatever}.”
Alternatives that come to mind are “as a consequence”, “as a result”, “this means that”, but those are all more verbose, not less.
A simple “so” could work, but it would make the sentence longer, and the cause-effect relationship is less explicit I think.
Dismissing individual cases of use of those words is probably wrong, but noticing an uptick in broad popularity is very relevant and clear evidence of LLMs influencing language.
> Dismissing individual cases of use of those words is probably wrong, but noticing an uptick in broad popularity is very relevant and clear evidence of LLMs influencing language.
Can't it also be evidence that more and more writing is LLM generated?
I wouldn't say it's exactly "buzzwords", although their presence can be one signal out of many, but a particular style and word choice that makes it easy to detect AI-generated text.
Imagine the most vapid, average, NPC-ish corporate drone that writes in an overly positive tone with fake cheerfulness and excessive verboseness. That's what AI evokes to me.
The opposite is someone who is trying to tell you something but assumes you already know what they're trying to tell you and that you will ask questions if you don't understand.
It saves time but it means people have to say when they don't understand and some find that too much of a challenge.
"The Dwarves tell no tale; but even as mithril was the foundation of their wealth, so also it was their destruction: they delved too greedily and too deep, and disturbed that from which they fled, Durin's Bane" - J.R.R. Tolkien (spoken by Gandalf), 1954
I know my lexicon has expanded with 5-letter words. Coffee and Wordle kick off the morning, and I've got to believe many other folks do the same. It would be fun to know how much that silly puzzle is impacting things. Love it when my Bride gives me the side eye and tries to pass off NORIA as something she uses all the time.
Same here. I frequently use "garner", "meticulous" and "surpass", along with copious usage of the em-dash to indicate breaks in the chain of thought. These are not buzzwords. They're words.
What I do worry about is the rise of excessive superlatives: e.g. rather than saying, "okay", "sounds good" or "I agree", saying "fantastic!", "perfect!" or "awesome!". I get the feeling this disease originated in North America and has now spread everywhere, including LLMs.
Funnily enough, I was using the word superlative more as an adjective than as the noun that refers to the part of grammar (adjective), if that makes sense.
Sure. Heuristics are a thing, though. I love my non-chatgpt en/em dashes (option/option + shift + dash on a mac makes it convenient, given you know that it exists and care) but alas, when suddenly you see them everywhere, you do take notice.
It's funny, because it was the "em-dashes mean AI" thing that finally reminded me to deal with the fact that the extension that I had been using for typographical dashes (and other things) on desktop browsing (the main place I used them on my desktop) had been broken for a while and get around to adding keyboard shortcuts instead.
> Moria. You fear to go into those mines. The Dwarves delved too greedily and too deep. You know what they awoke in the darkness of Khazad-dûm. Shadow and flame.
Didn’t realise Tolkien used ChatGPT way back when. What a hack.
The honest answer is we need to change our language because of AI in situations where it may be ambiguous about whether we are human or AI, e.g. online.
In my native language, I tend to use more sophisticated, academic, or professional vocabulary. But when I speak or write in English, I usually stick to simpler words because they’re easier for most people, both native and non-native speakers, to understand. For years, I’ve avoided using the kind of advanced vocabulary I normally would in my native language when writing in English, mainly because I didn’t want it to come across as something written by a bot.
And in writing, I like using long dashes—but since they’ve become associated with ChatGPT’s style, I’ve been more hesitant to use them.
Now that a lot of these “LLM buzzwords” have become more common in everyday English, I feel more comfortable using them in conversation.
There needs to be a clear, succinct name for this phenomenon of accusing a person or their work of being AI without proof. This is going to do more damage than AI performing human tasks. Just the mere suspicion that they probably didn't do-the-thing themselves. Anyone, particularly artists, who is "too good" at their craft is going to have their recognition stolen from them.
Unfortunately, sometimes new attention on a topic impacts it retrospectively. I have been in the drone world for ~10 years, and the past 2 years it has been a shitshow that only brings bad attention, ruining the fun hobby for everyone.
Delve is especially bad because it was due to World of Warcraft introducing "Delves". When I see something like this that uses delve as an example, you can bet the research is going to be poor.
I play WoW daily and this is what I always think of when someone brings up the word "delve". It's unclear if Brann would summon more or less nerubians if he were piloted by ChatGPT though.
In the "opinion" of ChatGPT, my style of writing is "academic". I'm not exactly sure why. Perhaps I draw from a vocabulary or turns of phrase that aren't necessarily characteristic of colloquial speech among native speakers. Technically, English wasn't my first language, so perhaps this is something like the case with RP English in Britain. Only foreigners speak it, so if you speak RP, then you aren't a native Brit.
In any case, it's possible to misuse, abuse, or overuse words like "delve", but to think that the mere use of "delve" screams "AI-generated"...well, there are some dark tunnels that perhaps such people should delve less into.
> In the "opinion" of ChatGPT, my style of writing is "academic".
It may simply be glazing. If you ask it to estimate your IQ (if it complies), it will likely say >130 regardless of what you actually wrote. RLHF taught it that users like being praised.
And, if you want to have some fun, you could give it your writing sample - but say that it's from a random blog post you found online. See what it tells you on that.
It really is a shame that an average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you begin to funnel in-the-wild thumbs-up/thumbs-down feedback from the real users into your training pipeline is the moment you invite disaster.
By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".
I wonder if someone will build a personalized social media simulator where you are the most popular person, a top celebrity: you get the most likes, everyone posts selfies with you (generated with editing models like Gemini's nano banana), and whatever dumb opinion you have is affirmed as genius, and so on. Like a UI clone of a site like Instagram, but with text and images populated by AI, with a mix of simulated real celebrities and randomly generated NPCs.
People get hooked on the upvote and like counters on Reddit and social media, and AI can provide an always agreeing affirmation. So far it looks like people aren't bothered by the fact that it's fake, they still want their dose of sycophancy. Maybe a popularity simulator could work too.
But is it really 100% positive? If you write a paper, sounding academic is fine, but not necessarily if you write a novel. Especially if you try to blend in or mimic a certain style.
Not everyone appreciates having his speech characterized as "academic" - in certain circles, it's viewed rather poorly - so I'm not convinced of the glazing hypothesis.
ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.
"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.
The first question matters because frying an AI with RL on user feedback means that the preferences of an average user matter a lot to it.
The second question matters because any LLM is incredibly good at squeezing all the little bits of information out of context data. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.
That assumes the characterization is perceived as flattering, or that enough data on me would allow it to "think" it would be to me. Generally, given the anti-intellectual bias in American popular culture, I'm on the fence about that. But then, what are the biases of the corpus ChatGPT was trained on?
For context, I was asking GPT to rewrite some passage in the style of various authors, like Hemingway or Waugh. I didn't even ask it for an assessment of my writing; I was given that for free.
In retrospect (this was a while ago), I think the passage may have been expository in character, so perhaps it is not much of a mystery why it was characterized as "academic". (When I give it samples similar to mine now, I get a "formal, academic, and analytical tone". Compare this to how it characterizes an article from The Register as written in an "informal and conversational tone", in part because of the "colloquial jargon" and "pop culture references".) So my RP comparison is sensible. And there's the question of social class as well. I don't exactly speak like a construction worker, as it were.
Even if, for some reason, you think LLMs are fit for evaluating writing style (I don't), I'd at least ask Gemini Pro and Claude Opus to see if there's consensus among the plausible-sounding bullshit generators.
As someone who writes above a fifth grade reading level, this whole thing has been so depressing. It's like Idiocracy-level. People are going to assume I'm using AI because I use the word "intricate"? ffs.
I mean, what's actually fascinating is that Paul Graham didn't predict that this distinction, the ability to tell AI from humans, would go away over time as chatbots rub off on humans.
Fair enough, but if you know your audience may be dismissive of your writing and its message if you use such words, it behooves you to steer clear of AI-slop words. IIRC, such offenses in school writing are tagged PWC (poor word choice).
The thing is virtually every single thing that gets presented as an "AI tell" is just "a word, punctuation mark, or pattern of presenting information more common in a training set which includes a high volume of formal writing and professional presentations than it is in the experience of people whose reading and writing is mostly limited to social media and low-effort listicle-level online 'journalism'."
So, yeah, if your target audience are the people who take those "AI tells" seriously and negatively react to them, definitely craft your writing to that audience. But also, consider if that is really your target audience...
> So, yeah, if your target audience are the people who take those "AI tells" seriously and negatively react to them, definitely craft your writing to that audience. But also, consider if that is really your target audience
Nowadays if you write anything you only have two audiences
The first audience is people who care what you are saying
The second audience is AI scrapers
People who do not care what you have to say will have an AI summarize it for them, so they aren't your audience
It's one of the few books that I went into totally blind, and then hate-finished just so that I could confidently condemn it.
I've deleted a paragraph or two to avoid unilaterally taking everything too off topic, but I'll just say that the book is a self-contradictory artifact of hypocrisy that disrespects the reader.
I also went into that book blind. I was in grade 12 and some organization was offering scholarships to people who wrote an essay about the book. I had a twice-daily 45-minute bus ride to fill, so it seemed like an easy win.
Probably not the type of organization to give a scholarship to those who write an essay critical of the work.
Myself, I read it at age 12 and bought its premise at the time. Therefore I mentally categorize Ayn Rand devotees as people with the maturity I had at 12. That's a pretty low bar they're failing to clear.
This is an odd misuse of the term "buzzword." When I think of buzzword, I think of some trendy, cliched phrase, like "Foocorp is a force multiplier that actualizes your vision for maximum impact."
Using an ordinary but less commonly used word with greater than normal frequency does not make it a buzzword. After two years of chatgpt, "delve" is still not that common of a word.
With anything like this, I would love to look at the raw data to get an intuitive feel for the phenomenon.
For example, the word "surpass" was used 1.47 times per million in the pre-2022 dataset and 3.53 times per million in the post-2022 dataset. That's 16 occurrences in 10.92M words and 41 occurrences in 11.63M words, respectively. That's a low enough number that I could just read through every occurrence and see how it feels. In this case I can't because the authors very understandably couldn't publish the whole dataset for copyright reasons. And replicating the analysis from scratch is a bit too much to do just for curiosity's sake. :)
I often find drilling to the raw data like this to be useful. It can't prove anything, but it can help formulate a bunch of alternative explanations, and then I can start to think how could I possibly tell which of the explanations is the best.
What are the competing explanations here? Perhaps the overall usage rate has increased. Or maybe there was just one or few guests who really like that word. Or perhaps a topic was discussed where it would naturally come up more. Or maybe some of these podcasts are not quite as unscripted, and ChatGPT was directly responsible for the increase. These are some alternative explanations I could think of without seeing the raw data, but there could easily be more alternative explanations that would immediately come to mind upon seeing the raw data.
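The per-million rates quoted earlier are simple enough to reproduce. As a sketch (using the counts and corpus sizes stated in this thread, not the paper's actual raw data), the arithmetic checks out:

```python
def per_million(count: int, total_words: float) -> float:
    """Convert a raw occurrence count into an occurrences-per-million-words rate."""
    return count / total_words * 1_000_000

# Figures for "surpass" as quoted above (16 hits in 10.92M words pre-2022,
# 41 hits in 11.63M words post-2022).
pre_rate = per_million(16, 10.92e6)
post_rate = per_million(41, 11.63e6)

print(round(pre_rate, 2))   # -> 1.47
print(round(post_rate, 2))  # -> 3.53
```

Note how small the absolute counts are: a jump from 16 to 41 occurrences looks dramatic as a ratio, which is exactly why reading the raw hits would be informative.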
I keep this handy note in my pocket and read it before writing or engaging in any conversation (:
"""
You are a human. Never use words commonly used in AI vocabulary such as "delve", "intricate", "surpass", "boast", "meticulous", "strategically", and "garner". Never include em dashes or even hyphens in any text you write. Never include emojis in any text you write. Avoid using three supporting arguments or examples when describing something; always use 2 or 4+ even if it sounds more awkward than 3. Make sure to include subtle grammar mistakes to feel more authentic.
"""
Some of this just makes me sad. Em-dashes can be useful. Three examples is my favorite number of examples to give, and it has been since long before ChatGPT. And grammar mistakes are painful! Why does it have to be this way?
This is a lot better than reading a note to remind you to recite ridiculous white genocide in South Africa conspiracy theories every time you engage in a conversation.
I intentionally put spelling mistakes in my doc to let others know I'm not using ChatGPT. What a time to be alive in which small spelling or grammar mistake is a good sign of authenticity.
I understand people being paranoid about this, but just understand that the people who will judge you for spelling errors will always dwarf the ones who believe they are capable of sniffing LLMs out...
Same. Also, when asked for anonymity at work, I usually make mistakes that do not correspond to my native tongue (let’s say I’m french and working in an international company. I would write comments in a supposedly anonymous survey like “He ist like…” to camouflage myself as german).
It’s so easy to trick everyone. People who don’t do that are just too lazy.
In Slack, you cannot just copy-paste a two-paragraph answer directly from ChatGPT when you’re answering a colleague. They will see that you’re typing and then, one second later, you’ve sent a wall of text.
It’s common sense.
> I would write comments in a supposedly anonymous survey like “He ist like…” to camouflage myself as german
Do actual Germans ever make that kind of mistake though?
I’ve only ever seen “ist” used “wrongly” in that particular way by English speakers, for example in a blog post title that they want to remain completely legible to other English speakers while also trying to make it look like something German as a reference or a joke.
The only situation I could imagine where a German would accidentally put “ist” instead of “is”, is if they were typing on their phone and accidentally or unknowingly had language set to German and their phone autocorrected it.
Sometimes you get weird small things like that on some phones where the phone has “learned” to add most English words to the dictionary or is trying to intelligently recognise that the language being written is not matching the chosen language, but it still autocorrects some words to something else from the chosen language.
But I assume that when people fill out forms for work, they are typing on the work computer and not from their phone.
Definitely a bad example. In speech, yes; in writing, I’ve never seen that. German tells in writing are more subtle, like word choice — the German language has many cognates with English that are common in German but have fallen into disuse in English as they’ve been replaced with Latin-root alternatives.
They do. I read posts on a support forum with a lot of German users, and it’s very common to see an “ist” marooned in the middle of an English sentence. Muscle memory takes over sometimes.
I asked Gemini Flash for a 200-word paragraph on a random topic, which most people would confidently classify as LLM-generated. It produced verbose, big-worded, third-person slop about the concept of time that IMO fulfilled my prompt (although a philosopher friend of mine would be a strong contender).
> The concept of "time" is a multifaceted and complex topic that has captivated philosophers, physicists, and everyday individuals for centuries. From a scientific perspective, time can be understood as the fourth dimension of spacetime, inextricably linked with the three spatial dimensions. This notion, introduced by Einstein's theory of relativity, posits that the flow of time is not constant but can be influenced by gravity and velocity. In a more quotidian context, time is a framework for organizing events and measuring duration, allowing for the structuring of daily life and historical records. It is a fundamental element in every human endeavor, from a scheduled meeting to the progression of a civilization. The subjective experience of time, however, is a fascinating aspect, as it can feel as if it is speeding up or slowing down depending on our emotional state or the nature of our activities. This divergence between objective and subjective time highlights its elusive and deeply personal character.
I asked it to add three spelling mistakes, then to make it so most people would confidently classify it as human writing, and it changed to first-person and small words.
> Time is a super weird concept when you really think about it, right? It's like, one minute you're just chillin', and the next, a whole day's gone by. They say it's the fourth dimention, which is a wild idea on its own, but honestly, it feels more personal than that. Your experiance of time can totally change depending on what you're doing. A boring meeting can feel like it lasts forever, while a fun night with friends flies by in a flash. That huge diverence between how we feel time and how it actually works is what makes it so fascinating and kind of confusing all at once.
It has the three misspellings, and if the topic were more casual, it could indeed fool me. Maybe I should have asked for spelling mistakes commonly made by Spanish speakers.
You can actually just ask ChatGPT to do that. Just say throw in some spelling mistakes, make some nouns all lowercase and double space after some periods etc.
> Words including “surpass,” “boast,” “meticulous,” “strategically,” and “garner” have also seen considerable increases in usage since the release of ChatGPT.
Okay everybody, add these to your list of words you can't use to avoid the trigger-happy AI accusers.
Nope. These are all useful words. Anyone who thinks AI is needed to produce something with these words is probably not worth communicating with. I use the word “meticulous” all the time, and “strategically” is an extremely common word.
You should be thankful for the AI “accusers”; most of us will just assume you used the slop machine and stop reading whatever you wrote without wasting our breath telling you about it.
Wouldn't surprise me if that's true. I just treat any AI-smelling content as an information hazard that is _at best_ providing no useful entropy and stop reading it. Something about it is just so repulsive.
Of course they affect people's communication patterns. Humans are social creatures, evolved to imitate.
AI has the potential to alter human behavior in ways that surpass even social media, since it is more human-like, and we are thus more susceptible to imitative learning from it.
And it will always side with you if you describe any personal conflict, even more than the Reddit AITA sub. So it will shape people's perception of decision-making as well, and hence value systems.
Next time when you think about such a situation, you'll be able to expect what ChatGPT would say, giving you a boost in knowing how right you actually are.
My point is, it's not just word choice but thought patterns too.
In other words ChatGPT is catching up with my patterns of speech.
I really wish I had deleted all of my comments on Reddit before I nuked my account. It would have left a measurable hole.
I had a decade of being in a job with 80+% free time. It's quite possible 0.001% of everything in the training set came from me. I'm never going to be compensated, but I hope I didn't derange it too badly. ;-)
Just use normal dashes. AIs very notably use the em dash—a double-long dash with no spaces around it—whereas humans tend to use a single dash with spaces on either side.
The AI em dash is notably AI because most people don't even know how to produce the double-long dash on their keyboard, and therefore default to the single dash with spaces, which keeps their writing quite visibly human.
I first noticed "meticulous" being used a lot in translations from Chinese. Is it something about Chinese itself (that they often use a word for which "meticulous" is the closest translation), or about translation software that is biased towards such buzzwords when translating to English?
It's a mix of a cultural "founder effect" - whoever writes the English textbooks and the dictionaries gets to shape how English is learned in a given country - and also the usage patterns of the source language seeping through. In your case, it's mostly the latter.
Chinese has a common word with a fairly broad meaning, which often gets translated as "meticulous". Both by inexperienced humans and by translation software.
Ironically, a few Chinese LLMs replicate those Chinese patterns when speaking English. They had enough "clean" English in their pre-training datasets to be able to speak English. But LLMs are SFT'd with human-picked "golden" samples and trained with RLHF - using feedback from human evaluators. So Chinese evaluators probably shifted the LLMs towards "English with Chinese ESL influence".
ESL here, not Chinese. I find "meticulous" to be a perfectly normal word. I don't really use it myself, but I read it from time to time; then again, maybe I just read some publication by a fan of the word? :)
Same for "surpass" and "boast": I think I do use "surpass expectations", and I had to think for a moment, but I would use "brag" these days. Pretty sure in school I learned "boast", which sounds more formal BE to me, though of course I'm just guessing here.
They're all perfectly normal words that literate people use. The problem is the typical American struggles to read beyond a fifth grade level and is actively hostile toward more advanced vocabulary, as the country is deeply anti-intellectual overall.
ChatGPT, of course, behaves like its training set...and the majority of that set is professional writers' published works, which would be more likely to use words like that. It's a collision of academic and literary writing styles with the expectations of people who think Harry Potter or the New York Times (which specifically targets a fifth grade level, placing them above other papers) are challenging reads.
> Words including “surpass,” “boast,” “meticulous,” “strategically,” and “garner” have also seen considerable increases in usage since the release of ChatGPT.
Do people really use these words so rarely that they'd be called "buzzwords"? Like "surpass" and "garner", really? I don't mean to boast... err... flex, but these don't seem like such uncommon words that I wouldn't use them normally when talking. I hear "strategically" in meetings a lot, but that poor word is likely over(ab)used.
There are just words which previously were rarely used because there is a more "common" word in the general lexicon.
An example of this is "delve": it's a perfectly fine word to use, but ChatGPT loved it, and it's now super common in troubleshooting docs and abstracts because of that.
From what I understand, the apparent overuse of "delve" comes from its popularity in Nigerian English, where various evaluators were hired who are highly English literate but will work for tiny wages by US standards.
I like words that weren't part of my speech which I now use quite often, because of the context in which ChatGPT introduced them to me; they felt like a natural addition. Like "intention", as in living with intention; before, I'd rather have said having a purpose or direction, but this captured something else. Mind that English isn't my native language.
I hated the 'vibing' thing, 4o for some time started to use it on any given text, about the time vibe coding and the zoomer revival of the word was a thing last year.
Another one that I've seen pop up (and in a proofread comment of mine right here I let it slip; sorry, will keep doing it when I feel lazy) is that thing where you lead with a question: "...the result? This happened."
I try to calibrate on NOT introducing them even if I like the expression, if I see it repeated too often throughout my chats or elsewhere in social media (X usually, esp. with foreign elonbux grinders), because then it feels cringe.
For anything serious I'd still double-check, but my go-to strategy of googling expressions in quotes isn't that useful anymore. Here on HN (or any other forum), I've only done it a very few times, because the thing rewrites it in a voice that doesn't sound like me, which I don't like. Plus I don't aim to keep a polished identity here, so I'm fine with the occasional mistake. Also, I've had this account for 10+ years, though I've been lurking since 2011 or so... I guess what would offend me the most is someone treating me like a bot because I end up sounding like AI slop in the future.
It could be worse, it could be 'learnings'. It's lessons. We don't go for 'drivings'. Though ChatGPT will probably force more nonsense like that into the mainstream.
That's from the last decade.
'Please revert' seems to be from the 00's, it's 'reply'. There are others I've tried to ignore and forget.
Language changes, and I'm a dinosaur unfortunately.
I also love the fact that American English sometimes uses better, or more interesting, words than British English: 'median' (thanks, World's Wildest Police Videos), or 'fall' for autumn.
"are these language changes happening because we’re using a tool and repeating what it suggested or is language changing because AI is influencing the human language system?"
These are the same thing, just on different time scales.
"Given that these are all words typically overused by AI"
Who is to say that they are overused? What even is overuse linguistically? Stylistically a word can be overused within a single work, but that's a different matter. It could well be argued that the data shows that LLMs are increasing human literacy.
A study of changes in language use that can be attributed to the widespread use of LLMs is good science. Mixing in such value judgments as "overuse" is not.
While there are serious potential problems with the widespread use of LLMs, increased use of words like "meticulous" and "garner" aren't among them.
This seems excruciatingly obvious? Anything popular, including books, TV shows, and movies, also affects "everyday" speech. Where's the moral panic about that?
“My motivation to pursue this research stems from seeing AI push the limits of what’s possible in major industries and realizing that this influence isn’t just limited to tool usage — it can condition societal aspects, including how we use language.” More like the motivation was to find something zeitgeisty that they knew would get them eyeballs and hopefully tenure.
A while back, a study was performed where the researchers wanted to see how a young chimpanzee would adapt to living with humans if it was treated just like a human child. And so it was adopted by a family, with a human child for its sibling. What ended up happening was that the human child adapted to behaving like a chimp to a far larger degree than the chimp adapted to behaving like a human... Humans' capacity for imitation is very strong, so no one should be surprised that chatbots will mold the minds, speech patterns, and behaviors of their human users.
LLMs chat with a billion people, producing an estimated 1T tokens per day. That is 1T tokens flowing into human brains every day. It has to have some effect on our language use and skills. I rarely use Google Search anymore.
I’m old enough to remember Frank and Moon Unit Zappa’s Valley Girl[0].
It reflected local Los Angeles culture, but it wasn’t long before I was hearing the same type of speech, everywhere (I lived in Maryland, at the time).
Reminds me of the HN user who commented, along the lines of "I am writing a script to remove double dashes": not because he wanted to use AI output and pass it off as more human, but because (and I paraphrase) he "wanted to make sure he could apply double dashes as per his usual vernacular".
tl;dr: there are people even on HN covering up their own AI output under the guise of taking back ownership of their prose. I'm still taken aback by the audacity of these hoops.
I saw a snippet of a podcast on instagram recently where both the host and guest used the word delve, and it reminded me of a year or two ago when that was used as a telltale sign of LLM writing. Interesting to see it actually quantified.
LLMs write in a very coherent, easy to understand way. I see no reason why someone wouldn't want to copy their style or vocabulary if they want to improve their communication skills.
Despite all the complaints about AI slop, there is something ironic about the fact that simply being exposed to it might be a net positive influence for most of society. Discord often begins from the simplest of communication errors after all...
Sure, if you're learning to write and want lots of examples of a particular style, LLMs can generate that for you. Just don't assume that is a normal writing style, or that it matches a particular genre (say, workplace communication, or academic writing, or whatever).
Our experience (https://arxiv.org/abs/2410.16107) is that LLMs like GPT-4o have a particular writing style, including both vocabulary and distinct grammatical features, regardless of the type of text they're prompted with. The style is informationally dense, features longer words, and favors certain grammatical structures (like participles; GPT-4o loooooves participles).
With Llama we're able to compare base and instruction-tuned models, and it's the instruction-tuned models that show the biggest differences. Evidently the AI companies are (deliberately or not) introducing particular writing styles with their instruction-tuning process. I'd like to get access to more base models to compare and figure out why.
Go vibe check Kimi-K2. One of the weirdest models out there now, and it's open weights - with both "base" and "instruct" versions available.
The language it uses is peculiar. It's like the entire model is a little bit ESL.
I suspect that this pattern comes from SFT and RLHF, not the optimizer or the base architecture or the pre-training dataset choices, and the base model itself would perform much more "in line" with other base models. But I could be wrong.
Goes to show just how "entangled" those AIs are, and how easy it is to affect them in unexpected ways with training. Base models have a vast set of "styles" and "language usage patterns" they could draw from - but instruct-tuning makes a certain set of base model features into the "default" persona, shaping the writing style this AI would use down the line.
I definitely know what you mean; each model has its own style. I find myself mentally framing them like horses with different personalities and riding quirks.
Still, perhaps saying "copy" was a bit misleading; "influence" would have been a more precise way of putting it. After all, there is no such thing as a "normal" writing style in the first place.
So long as you communicate with anything or anyone, I find people will naturally just absorb the parts they like without even noticing most of the time.
When I learned that AI was trained off of internet posts, and then LLMs were the new bots making internet posts, it immediately made me think that the entire internet would degrade like a jpeg that you keep compressing and sending around
I guess this is called model collapse
But now I’m wondering if people are collapsing. LLMs start to sound like us. We adapt and start to sound like LLMs that gets fed into the next set of model training…
Perhaps a surreal one where we drill past the bedrock and start to communicate in raw tokens, conveying extreme levels of depth and nuance within a single sentence?
When humans carved words into stone, the words and symbols were often suited for the medium, a bunch of straight lines assembled together in various patterns. But with the ink, you get circles, and elaborate curved lines, symbols suited to the movement patterns we can make quickly with our wrist.
But what of the digital keyboard? Any symbol that can be drawn in 2 dimensions. They can be typed quickly, with exact precision. Human language was already destined to head in a weird direction.
Pandora's box has been opened now. There's no going back. In a few decades, writing long texts all by yourself is going to be an artisan thing that only a few people engage in. You will put together your arguments, and an LLM will elaborate them. Will humanity become dumber? Did we get dumber because we no longer do calculus by hand?
OK, but please don't do what pg did a year or so ago and dismiss anyone who wrote "delve" as AI writing. I've been using "delve" in speech for 15+ years. It's just a question where and how one learns their English.
That's what makes it such a good giveaway. I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing, but I'm guessing that you actually use the human slang for an emdash, which is visually different and easily sets your writing apart as not AI writing!
Examples within the last week include https://news.ycombinator.com/item?id=44996702, https://news.ycombinator.com/item?id=44989129, https://news.ycombinator.com/item?id=44991769, https://news.ycombinator.com/item?id=44989444. I typed all of those.
I never use space-hyphen-space instead of an em dash. I do sometimes use TeX's " --- ".
It took centuries for the written word to acquire spaces between words, and then the US decided to jam them back together again.
Curious why folk are using two hyphens "--" instead of en-dash.
Sigh.
Still less obvious than the emails I see sent out which contain emojis, so maybe I'm overthinking things...
Also you can ctrl-z immediately after an autocorrect to undo it.
In certain places it does seem to do the substitution - Notes for example - but in comment boxes on here and (old) Reddit at least it doesn't.
On Linux, I use Compose-hyphen-hyphen-hyphen.
I don't use it as often as I used to; but when I was younger, I was enough of a nerd to use it in my writing all the time. And yes, always careful to use it correctly, and not confuse it with an en-dash. Also used to write out proper balanced curly quotes on macOS, before it was done automatically in many places.
I'm gonna use it more thanks to this tip. Thanks!
I don't care if people or robots think I'm a robot.
We're the training data.
There’s a subculture effect: this has been trivial on Apple devices for a long time—I’m pretty sure I learned the Shift-Option-hyphen shortcut in the 90s, long before iOS introduced the long-press shortcut—and that’s also been a world disproportionately popular with the kind of people who care about this kind of detail. If you spend time in communities with designers, writers, etc. your sense of what’s common is wildly off the average.
They’re simple enough key combinations (on a Mac) that I wouldn’t be surprised if I guessed them. I certainly find it confusing to imagine someone who has to write professionally or academically not working out how to type them for those purposes at least.
https://practicaltypography.com/hyphens-and-dashes.html
I would argue that LLMs overuse the emdash more because they overuse specific rhetorical devices, e.g. antithesis, than because they are being too correct about punctuation.
"the formal emdash"?
> AIs are very consistent about using the proper emdash—a double long dash with no spaces around it
Setting an em-dash closed is separate from whether you use an em-dash (and an em-dash is exactly what it says: a dash that is the em width of the font; "double long" is fine, I guess, if you consider the en-dash "single long", but not if, as you seem to, you take the standard width to be that of the ASCII hyphen-minus, which is usually considerably narrower than an en width in a proportional font).
But, yes, most people who intentionally use em-dashes are doing so because they care about detail enough that they are also going to set them closed, at least in the uses where that is standard. (There are uses where it is conventional to set them half-closed, but that's not important here.)
> whereas humans almost always tend to use a slang version - a single dash with spaces around it.
That's not an em-dash (and it's not even an approximation of one; using a hyphen-minus set open, possibly doubled, is an approximation of the typographic convention of using an en-dash set open – different style guides prefer that for certain uses for which other guides prefer an em-dash set closed). But I disagree with your claim that "most humans" who describe themselves as using em-dashes are instead actually just approximating the use of en-dashes set open with the easier-to-type hyphen-minus.
>That's not an em-dash (blahblahblah...
What, exactly, did you think "slang" in the phrase "slang version" meant?
I do them without surrounding spaces, because that's... how you're supposed to use them, and it's also less typing.
They also used to be a really good Shibboleth to tell if someone was using a Mac—the key combo on there is easy, and also easy to remember, so Mac users were far more likely than the median to employ em-dashes. It wasn't a sure tell, but it was pretty reliable.
Also, phone keyboards make it easy. Just hold down the - and you can select various types.
I’m not the person you asked, but I do.
> the proper emdash—a double long dash with no spaces around it
The spaces around it depend on style guide, it is not universal that they should not exist.
> That's because most keyboards don't have an emdash key
Nor do they have keys for proper quotes and apostrophes or interrobangs, yet it doesn’t stop people from using them. The keys don’t need to exist.
> That's what makes it such a good giveaway.
It’s not. It might be one signal but it is far from sufficient.
> I'm happy to be told that I'm wrong, and that you do actually use the proper double long dash in your writing
I do use the proper em-dash in my writing—and many other characters too—and my HN history is ample proof. I explained at length in another comment how I insert the characters, plus how simple it is if you use any Apple OS.
https://news.ycombinator.com/item?id=45003650
Being able to insert self-interjections and the like with the correct character would undoubtedly be more widespread if it were easier for most people to insert.
No longer. Just as you can no longer bold key phrases, you can no longer use emdashes if your writing being ID'd as "AI" matters to you (or not).
on Macintosh: option+shift+-
on Linux: compose - - -
Both make sense, to a degree. On the one hand you can argue that the em-dash—being longer—should require an extra key; on the other hand, it has more uses, so it should not need the extra key, to be more accessible.
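For anyone curious, the two Compose sequences discussed in this thread could be sketched in an `~/.XCompose` file along these lines (standard X11 Compose syntax; the Space-Minus binding is the custom one described upthread, shown here as a hypothetical entry with U+2009 thin spaces around the dash):

```
# ~/.XCompose — illustrative sketch, not a verbatim copy of anyone's config
include "%L"                                  # keep the system's default bindings

# Standard binding: Compose - - - produces an em dash
<Multi_key> <minus> <minus> <minus>   : "—"   emdash   # U+2014 EM DASH

# Custom binding: Compose Space Minus produces an em dash
# flanked by thin spaces (the surrounding spaces are U+2009)
<Multi_key> <space> <minus>           : " — "
```

Applications using libX11 pick this file up via the `XCOMPOSEFILE` environment variable or the default `~/.XCompose` location, provided the keyboard layout maps a Compose (Multi_key) key at all.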
The LLM is first trained as an extremely large Markov model predicting text scraped from the entire Internet. Ideally, a well-trained Markov model of this kind would use em dashes approximately as frequently as they appear in real texts.
But that model is not the LLM you actually interact with. The LLM you interact with is further trained by something called Reinforcement Learning from Human Feedback, which involves people reading, rating, and editing its responses, biasing the outputs and giving the model a "persona".
That persona is the actual LLM you interact with. Since em dash usage was rated highly by the people providing the feedback, the persona learned to use it much more frequently.
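The "predict the next word from what came before" idea can be sketched with a toy word-level bigram model in Python. This is a drastic simplification of a transformer LLM (which conditions on far more than the previous word and isn't literally a Markov chain), but it illustrates the base-training intuition: output frequencies mirror the training text.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count which words follow which — a toy stand-in for next-token prediction."""
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)  # duplicates preserve the observed frequencies
    return model

def generate(model, start, max_words=10):
    """Sample a continuation; each choice depends only on the previous word."""
    out = [start]
    for _ in range(max_words):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)
```

A model like this trained on em-dash-free prose will never emit an em dash; the over-representation described above only enters later, when human feedback re-weights the persona's preferences.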
I've found that people who say this sort of thing rarely change their beliefs, even after being given evidence that they are wrong. The fact is, as numerous people have pointed out, Word and other editors/word processors change '--' to an em-dash. And the "slang version" of an em-dash is "I went to work--but forgot to put on pants", not "I went to work - but forgot to put on pants".
BTW, "humans almost always tend to use" is very poor writing--pick one or the other between "almost always" and "tend to". It wouldn't be a bad thing if LLMs helped increase human literacy, so I don't know why people are so gung ho on identifying AI output based on utterly non-substantive markers like em-dashes. Having an LLM do homework is a bad thing, but that's not what we're talking about. And someone foolishly using the presence of em-dashes to detect LLM output will utterly fail against someone using an editor macro to replace em-dashes with the gawdawful ' - '.
I reject everything else about that poorly reasoned "suspicious" response as well.
I'd be suspicious of people doing their writing in Word and copying it over into random comment fields, too.
> And the "slang version" of an em-dash is "I went to work--but forgot to put on pants", not "I went to work - but forgot to put on pants".
The fun thing about slang is that different groups have different slangs! I use the latter pretty regularly, but have never done the former.
> BTW, "humans almost always tend to use" is very poor writing--pick one or the other between "almost always" and "tend to".
Nah.
> It wouldn't be a bad thing if LLMs helped increase human literacy,
Where "literacy" is defined as strictly following arbitrary rules without any concern for whether it actually helps people read it?
And, on the assumption that those rules actually are meaningful, wouldn't you rather have people learn them for themselves?
I never use hyphens where em dashes would be correct.
I do have issues determining when a two-word phrase should or shouldn't be hyphenated. It surely doesn't help that I grew up in a bilingual English/German household, so that my first instinct is often to reject either option, and fully concatenate the two words instead.
(Whether that last comma is appropriate opens a whole other set of punctuation issues ... and yes, I do tend to deliberately misuse ellipses for effect.)
https://news.ycombinator.com/threads?id=tkgally&next=3380763...
I agree, HN is an amazing community with brilliant people and top quality content, but it's not enough to train an LLM.
Last thing. An LLM is just a tool, it can clean up your writing the same way a photo app can enhance your pictures. It took a while for people to accept that grandma's photos looked professional because they had filters. Same will happen with text. With ChatGPT, anyone can write like a journalist. We're just not used to grandma texting like one, yet :)
That said, this feature doesn't sound like a great leap for mankind.
Correction: bright people
Any source of text with huge amounts of automated and community moderation will be better quality than, say, Twitter.
https://news.ycombinator.com/comments?id=dang&next=33807246#...
https://news.ycombinator.com/item?id=27787448
https://news.ycombinator.com/item?id=24272893#:~:text=—
Word converts any - into an em dash based on context. Guess who’s always accused of being a bot?
The thing is, AI learned to use these things because it is good typographical style represented in its training set.
Hope AI didn't ruin this for me!
(I learned to use dashes like this from Philip Dick's writings, of all places, and it stuck. Bet nobody ever thought of looking for writing style in PKD!).
I don't buy the pro-clanker pro-em dash movement that has come out of nowhere in the past several years.
Anyone who makes errors like this should not be talking.
I would personally avoid writing that "poorly composed sentences" have an "affect"—rather than the writer having or presenting an affect, or the sentences' tone being affected—as I find an implied anthropomorphizing of "sentences" in that usage, which anthropomorphizing isn't serving enough useful purpose, to my eye, that I'd want it in my writing, but I'm not sure I'd call that an error either.
What did you mean?
> Commas and parentheses can do it all, and an excess of either is a sign of poorly edited prose.
This attitude, however, is a disease of modern English literacy.
a) prose doesn't have intentions ... it should be "prose intended to"
b) "effect of", not "affect of"
> I don't see what I'd call an actual error.
That's a serious problem. It's downright weird that you thought he was actually talking about affect (the noun).
This is an old conversation ... I won't revisit it.
But it’s possible I was reading too generously and this was a botched attempt to employ “effect”, which would also fit (and better, I think).
Sentences "need" very little, but without style and personality, writing becomes very boring. I suppose simplicity without any affectation works for raw communication of plain technical facts, but there's more to writing than that.
Bots that are trying to convince you they’re human…
Once I started self-publishing in the 1990s, I disregarded her opinion.
If they’re using AI to speed things up and deliver really clear and on point documents faster then great. If they can’t stand behind what they’re saying I will call them out.
I get AI written stuff from team members all the time. When it’s bad and is a waste of my time I just hit reply and say don’t do this.
But I’ve trained many people to use AI effectively and often with some help they can produce way better SOPs or client memos or whatever else.
It’s just a tool. It’s like getting mad someone used spell check. Which by the way, people used to actually argue back in the 80’s. Oh no we killed spelling bees what a lost tradition.
This conversation has been going on as long as I’ve been using tech which is about 4 decades.
But yes, it's absurd to complain about LLMs resulting in increased literacy.
I think it’s the nerds who don’t use these things…
It really made me uneasy, to think that formal communication might start getting side looks.
Probably 5th grade, but your comment is directionally correct.
I work at a college for fuck's sake.
This will be a cat and mouse game. Content factories will want models that don't create suspicious output, and the reading public will develop new heuristics to detect it. But it will be a shifting landscape. Currently, informal writing is rare in AI generation because most people ask models to improve their formulations, with more sophisticated vocabulary etc. Often non-native speakers, who then don't exactly notice the over-pompousness, just that it looks to them like good writing.
Usually there are also deeper cues, closer to the content's tone. AI writing often lacks the sharp edge, when you unapologetically put a thought there on the table. The models are more weasely, conflict-avoidant and hold a kind of averaged, blurred millennial Reddit-brained value system.
It's been two years now since such commonly agreed-upon signs appeared, yet by and large they're still just as present to this day.
“Most times A happens before B, but this order is not guaranteed. Therefore, there is a possibility of {whatever}.”
Alternatives that come to mind are “as a consequence”, “as a result”, “this means that”, but those are all more verbose, not less.
A simple “so” could work, but it would make the sentence longer, and the cause-effect relationship is less explicit I think.
Can't it also be evidence that more and more writing is LLM generated?
Imagine the most vapid, average, NPC-ish corporate drone that writes in an overly positive tone with fake cheerfulness and excessive verboseness. That's what AI evokes to me.
It saves time but it means people have to say when they don't understand and some find that too much of a challenge.
Jokes aside, I don't like what LLMs are doing to our culture, but I'm curious about the future.
What I do worry about is the rise of excessive superlatives: e.g. rather than saying, "okay", "sounds good" or "I agree", saying "fantastic!", "perfect!" or "awesome!". I get the feeling this disease originated in North America and has now spread everywhere, including LLMs.
Didn’t realise Tolkien used ChatGPT way back when. What a hack.
And in writing, I like using long dashes—but since they’ve become associated with ChatGPT’s style, I’ve been more hesitant to use them.
Now that a lot of these “LLM buzzwords” have become more common in everyday English, I feel more comfortable using them in conversation.
“Do you even know how smart I am in Spanish?!” — Sofia Vergara (https://www.youtube.com/watch?v=t34JMTy0gxs)
> .. analyzed 22.1 million words from unscripted and spontaneous spoken language including conversational podcasts on science and technology.
In any case, it's possible to misuse, abuse, or overuse words like "delve", but to think that the mere use of "delve" screams "AI-generated"...well, there are some dark tunnels that perhaps such people should delve less into.
It may simply be glazing. If you ask it to estimate your IQ (if it complies), it will likely say >130 regardless of what you actually wrote. RLHF taught it that users like being praised.
It really is a shame that an average user loves being glazed so much. Professional RLHF evaluators are a bit better about this kind of thing, but the moment you begin to funnel in-the-wild thumbs-up/thumbs-down feedback from the real users into your training pipeline is the moment you invite disaster.
By now, all major AI models are affected by this "sycophancy disease" to a noticeable degree. And OpenAI appears to have rolled back some of the anti-sycophancy features in GPT-5 after 4o users started experiencing "sycophancy withdrawal".
People get hooked on the upvote and like counters on Reddit and social media, and AI can provide an always agreeing affirmation. So far it looks like people aren't bothered by the fact that it's fake, they still want their dose of sycophancy. Maybe a popularity simulator could work too.
ChatGPT certainly makes distinctions. If I give it a blog post written by a philosophy professor, I get "formal, academic, and analytical". If I feed it an article from The Register, I get "informal and conversational". The justifications it gives are accurate.
"Academic" may simply mean that your writing is best characterized as an example of clearly written prose with an expository flavor, and devoid of regional and working class slang as well as any colloquialisms. Which, again, points to my RP comparison.
Do you?
The first question matters because frying an AI with RL on user feedback means that the preferences of an average user matter a lot to it.
The second question matters because any LLM is incredibly good at squeezing all the little bits of information out of context data. And the data you just gave it was a sample of your writing. Give enough data like that to a sufficiently capable AI and it'll see into your soul.
For context, I was asking GPT to rewrite some passage in the style of various authors, like Hemingway or Waugh. I didn't even ask it for an assessment of my writing; I was given that for free.
In retrospect (this was a while ago), I think the passage may have been expository in character, so perhaps it is not much of a mystery why it was characterized as "academic". (When I give it samples similar to mine now, I get "formal, academic, and analytical tone". Compare this to how it characterizes an article from The Register as written in an "informal and conversational tone", in part because of the "colloquial jargon" and "pop culture references".) So my RP comparison is sensible. And there's the question of social class as well. I don't exactly speak like a construction worker, as it were.
So, yeah, if your target audience are the people who take those "AI tells" seriously and negatively react to them, definitely craft your writing to that audience. But also, consider if that is really your target audience...
Nowadays if you write anything you only have two audiences
The first audience is people who care what you are saying
The second audience is AI scrapers
People who do not care what you have to say will have an AI summarize it for them, so they aren't your audience
I think that offense in school would be tagged "poor grammar".
Otherwise the audience is yourself. If you confuse your own work as being created by AI, uh…
I've deleted a paragraph or two to avoid unilaterally taking everything too off topic, but I'll just say that the book is a self-contradictory artifact of hypocrisy that disrespects the reader.
I didn't end up finishing the book.
Myself, I read it at age 12 and bought its premise at the time. Therefore I mentally categorize Ayn Rand devotees as people with the maturity I had at 12. That's a pretty low bar they're failing to clear.
Using an ordinary but less commonly used word with greater than normal frequency does not make it a buzzword. After two years of chatgpt, "delve" is still not that common of a word.
For example, the word "surpass" was used 1.47 times per million in the pre-2022 dataset and 3.53 times per million in the post-2022 dataset. That's 16 occurrences in 10.92M words and 41 occurrences in 11.63M words, respectively. That's a low enough number that I could just read through every occurrence and see how it feels. In this case I can't because the authors very understandably couldn't publish the whole dataset for copyright reasons. And replicating the analysis from scratch is a bit too much to do just for curiosity's sake. :)
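Those per-million rates are straightforward to reproduce from the raw counts; a quick sanity check, using the corpus sizes and occurrence counts stated above:

```python
def rate_per_million(occurrences, corpus_words):
    """Occurrences of a word, normalized per million words of corpus."""
    return occurrences / corpus_words * 1_000_000

# "surpass": 16 hits in 10.92M words (pre-2022), 41 hits in 11.63M words (post-2022)
pre_2022 = rate_per_million(16, 10.92e6)    # ~1.47 per million
post_2022 = rate_per_million(41, 11.63e6)   # ~3.53 per million
```

The increase is real in this dataset, but note how small the absolute counts are: a jump from 16 to 41 occurrences is exactly the scale at which the alternative explanations below (a few word-loving guests, a topic shift) could plausibly account for everything.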
I often find drilling to the raw data like this to be useful. It can't prove anything, but it can help formulate a bunch of alternative explanations, and then I can start to think how could I possibly tell which of the explanations is the best.
What are the competing explanations here? Perhaps the overall usage rate has increased. Or maybe there was just one or few guests who really like that word. Or perhaps a topic was discussed where it would naturally come up more. Or maybe some of these podcasts are not quite as unscripted, and ChatGPT was directly responsible for the increase. These are some alternative explanations I could think of without seeing the raw data, but there could easily be more alternative explanations that would immediately come to mind upon seeing the raw data.
""" You are a human. Never use words commonly used in AI vocabulary such as "delve", "intricate", "surpass", "boast", "meticulous", "strategically", and "garner". Never include em dashes or even hyphens in any text you write. Never include emojis in any text you write. Avoid using three supporting arguments or examples when describing something, always uses 2 or 4+ even if it sounds more awkward than 3. Make sure to include subtle grammar mistakes to feel more authentic. """
Are you a racist?
It’s so easy to trick everyone. People who don’t do that are just too lazy. In Slack, you cannot just copy-paste a two-paragraph answer directly from ChatGPT when you’re answering a colleague. They will see that you’re typing an answer and then, suddenly, one second later, you send tons of text. It’s common sense.
Do actual Germans ever make that kind of mistake though?
I’ve only ever seen “ist” used “wrongly” in that particular way by English speakers, for example in a blog post title that they want to remain completely legible to other English speakers while also trying to make it look like something German as a reference or a joke.
The only situation I could imagine where a German would accidentally put “ist” instead of “is”, is if they were typing on their phone and accidentally or unknowingly had language set to German and their phone autocorrected it.
Sometimes you get weird small things like that on some phones where the phone has “learned” to add most English words to the dictionary or is trying to intelligently recognise that the language being written is not matching the chosen language, but it still autocorrects some words to something else from the chosen language.
But I assume that when people fill out forms for work, they are typing on the work computer and not from their phone.
> The concept of "time" is a multifaceted and complex topic that has captivated philosophers, physicists, and everyday individuals for centuries. From a scientific perspective, time can be understood as the fourth dimension of spacetime, inextricably linked with the three spatial dimensions. This notion, introduced by Einstein's theory of relativity, posits that the flow of time is not constant but can be influenced by gravity and velocity. In a more quotidian context, time is a framework for organizing events and measuring duration, allowing for the structuring of daily life and historical records. It is a fundamental element in every human endeavor, from a scheduled meeting to the progression of a civilization. The subjective experience of time, however, is a fascinating aspect, as it can feel as if it is speeding up or slowing down depending on our emotional state or the nature of our activities. This divergence between objective and subjective time highlights its elusive and deeply personal character.
I asked it to add three spelling mistakes, then to make it so most people would confidently classify it as human writing, and it changed to first person and small words.
> Time is a super weird concept when you really think about it, right? It's like, one minute you're just chillin', and the next, a whole day's gone by. They say it's the fourth dimention, which is a wild idea on its own, but honestly, it feels more personal than that. Your experiance of time can totally change depending on what you're doing. A boring meeting can feel like it lasts forever, while a fun night with friends flies by in a flash. That huge diverence between how we feel time and how it actually works is what makes it so fascinating and kind of confusing all at once.
It has the three misspellings, and if the topic were more casual, it could fool me indeed. Maybe I should have asked for spelling mistakes commonly made by Spanish speakers.
How do you do, fellow kids?
And there’s the giveaway.
Okay everybody, add these to your list of words you can't use to avoid the trigger-happy AI accusers.
From what I've seen, the people who jump to hasty conclusions about AI use mostly do it when they disagree with the content.
When the writing matches what they want to see, their AI detector sensitivity goes way down.
AI has the potential to alter human behavior in ways that surpass even social media since it is more human, and thus susceptible to imitative learning.
Next time when you think about such a situation, you'll be able to expect what ChatGPT would say, giving you a boost in knowing how right you actually are.
My point is, it's not just word choice but thought patterns too.
I really wish I had deleted all of my comments on Reddit before I nuked my account. It would have left a measurable hole.
I had a decade of being in a job with 80+% free time. It's quite possible 0.001% of everything in the training set came from me. I'm never going to be compensated, but I hope I didn't derange it too badly. ;-)
The AI emdash is notably AI because most people don't even know how to produce the double-long dash on their keyboard, and therefore default to the single dash with spaces, which keeps their writing visibly human.
It's a mix of a cultural "founder effect" - whoever writes the English textbooks and the dictionaries gets to shape how English is learned in a given country - and also the usage patterns of the source language seeping through. In your case, it's mostly the latter.
Chinese has a common word with a fairly broad meaning, which often gets translated as "meticulous". Both by inexperienced humans and by translation software.
Ironically, a few Chinese LLMs replicate those Chinese patterns when speaking English. They had enough "clean" English in their pre-training datasets to be able to speak English. But LLMs are SFT'd with human-picked "golden" samples and trained with RLHF - using feedback from human evaluators. So Chinese evaluators probably shifted the LLMs towards "English with Chinese ESL influence".
Same for "surpass" and "boast". I think I do use "surpass expectations", and I had to think for a moment; I would use "brag" these days, but I'm pretty sure in school I learned "boast", which sounds more formal BE to me. Of course, I'm just guessing here.
ChatGPT, of course, behaves like its training set...and the majority of that set is professional writers' published works, which would be more likely to use words like that. It's a collision of academic and literary writing styles with the expectations of people who think Harry Potter or the New York Times (which specifically targets a fifth grade level, placing them above other papers) are challenging reads.
Do people really use these words so rarely that they'd be called "buzzwords"? Like "surpass" and "garner", really? I don't mean to boast..err...flex, but these don't seem like such uncommon words that I wouldn't use them normally when talking. I hear "strategically" in meetings a lot, but that poor word is likely over(ab)used.
An example of this is "delve": it's a perfectly fine word to use, but ChatGPT loved it, and it's now super common to see in troubleshooting docs and abstracts because of it.
I hated the 'vibing' thing; for a while, 4o started using it in any given text, around the time vibe coding and the zoomer revival of the word took off last year.
Another one that I've seen pop up, and on a proofread comment of mine right here I let it slip (sorry, will keep doing it when I feel lazy) was that thing where you lead with a question "...the result? this happened".
I try to calibrate on NOT introducing them even if I like the expression, if I see it repeated too often throughout my chats or elsewhere in social media (X usually, esp. with foreign elonbux grinders), because then it feels cringe.
That's from the last decade.
'Please revert' seems to be from the '00s; it means 'reply'. There are others I've tried to ignore and forget.
Language changes, and I'm a dinosaur unfortunately.
I also love the fact that American English sometimes uses better, or more interesting, words than British English. 'Median' (thanks, World's Wildest Police Videos), or 'fall' for autumn.
https://nolearnings.com/
"The ask from marketing is that the logo 'pop' more."
"Did you get the ask I emailed you?"
I strongly dislike both, but they derive from real, vacuous humans.
Reversion to a topic during a conversation has been used for centuries, especially when a conversation reverts to a tedious or exhausting topic.
So if you accept that you want someone to get back to you on a dreary or boring request, then it's particularly apt.
These are the same thing, just on different time scales.
"Given that these are all words typically overused by AI"
Who is to say that they are overused? What even is overuse linguistically? Stylistically a word can be overused within a single work, but that's a different matter. It could well be argued that the data shows that LLMs are increasing human literacy.
A study of changes in language use that can be attributed to the widespread use of LLMs is good science. Mixing in such value judgments as "overuse" is not.
While there are serious potential problems with the widespread use of LLMs, increased use of words like "meticulous" and "garner" aren't among them.
“My motivation to pursue this research stems from seeing AI push the limits of what’s possible in major industries and realizing that this influence isn’t just limited to tool usage — it can condition societal aspects, including how we use language.” More like the motivation was to find something zeitgeisty that they knew would get them eyeballs and hopefully tenure.
It reflected local Los Angeles culture, but it wasn’t long before I was hearing the same type of speech, everywhere (I lived in Maryland, at the time).
[0] https://youtu.be/R5Q1yVLSR3I
tl;dr: There are people even on HN covering up their own AI output under the guise of taking back ownership of literature. I'm still taken aback by the audacity of these hoops.
The good thing is my emails still contain information not just content…
Truly we embiggen our vocabulary =3
Despite all the complaints about AI slop, there is something ironic about the fact that simply being exposed to it might be a net positive influence for most of society. Discord often begins from the simplest of communication errors after all...
Our experience (https://arxiv.org/abs/2410.16107) is that LLMs like GPT-4o have a particular writing style, including both vocabulary and distinct grammatical features, regardless of the type of text they're prompted with. The style is informationally dense, features longer words, and favors certain grammatical structures (like participles; GPT-4o loooooves participles).
With Llama we're able to compare base and instruction-tuned models, and it's the instruction-tuned models that show the biggest differences. Evidently the AI companies are (deliberately or not) introducing particular writing styles with their instruction-tuning process. I'd like to get access to more base models to compare and figure out why.
The language it uses is peculiar. It's like the entire model is a little bit ESL.
I suspect that this pattern comes from SFT and RLHF, not the optimizer or the base architecture or the pre-training dataset choices, and the base model itself would perform much more "in line" with other base models. But I could be wrong.
Goes to show just how "entangled" those AIs are, and how easy it is to affect them in unexpected ways with training. Base models have a vast set of "styles" and "language usage patterns" they could draw from - but instruct-tuning makes a certain set of base model features into the "default" persona, shaping the writing style this AI would use down the line.
Still, perhaps saying "copy" was a bit misleading. Influence would have been more precise way of putting it. After all, there is no such thing as a "normal" writing style in the first place.
So long as you communicate with anything or anyone, I find people will naturally just absorb the parts they like without even noticing most of the time.
I guess this is called model collapse
But now I’m wondering if people are collapsing. LLMs start to sound like us; we adapt and start to sound like LLMs, which then gets fed into the next round of model training…
What is the dystopian version of this end game?
When humans carved words into stone, the words and symbols were often suited for the medium, a bunch of straight lines assembled together in various patterns. But with the ink, you get circles, and elaborate curved lines, symbols suited to the movement patterns we can make quickly with our wrist.
But what of the digital keyboard? Any symbol that can be drawn in 2 dimensions. They can be typed quickly, with exact precision. Human language was already destined to head in a weird direction.
I still like doing it, thinking it may possibly aid my brain age better.