In the OP screen share, they toggle various telemetry options on and off, but every time a setting changes, there is a pop-up that says "a setting has changed that requires a restart [of the editor] to take effect" -- and the user just hits "cancel" and doesn't restart the editor. Then, unsurprisingly, the observed behavior doesn't change. Maybe I'm dumb and/or restarting the editor doesn't actually make a difference, but at least superficially, I'm not sure you can draw useful conclusions from this kind of testing...
edit: to be clear I see that they X-out the topmost window of the editor and then re-launch from the bottom bar, but it's not obvious that this is actually restarting the stuff that matters
Thanks for watching and catching that. It seems like a major oversight for the core claim, that disabling telemetry doesn't work. If a restart is required and the tests ignored the restart warning, that would invalidate them.
Either way, it’s useful to see the telemetry payloads.
It was rough a few years ago, but nowadays it's pretty nice. TI rebuilt their Code Composer Studio using Theia so it does have some larger users. It has LSP support and the same Monaco editor backend - which is all I need.
It's VSCode with an Eclipse feel to it - which might or might not be your cup of tea, but it's an alternative.
Agreed, not the most well-thought-out landing page, but the explore page gives good insight into how it's being used and what it looks like: https://theia-ide.org/theia-platform/
(Scroll down to Selected Tools based on Eclipse Theia)
The feature that keeps me from moving off of vscode is their markdown support. In particular the ability to drag and drop to insert links to files and images *. Surprisingly, no other editor does this even though I use it all the time.
Eclipse (as in ecosystem) is fairly popular in Enterprise, but since it exposes all the knobs, and is a bona fide IDE which has some learning curve, people stay away from it.
Also, it used to be kinda heavy, but it became lighter thanks to Moore's law and good code management practices across the board.
I'm planning to deploy Theia in its web-based form if possible, but I still haven't had the time to tinker with that one.
Using Eclipse as "the Java LSP" in VSCode makes more sense now.
Nevertheless, as much as I respect Erich for what he did for Eclipse, I won't be able to follow him to VSCode, since I don't respect Microsoft as much.
So also not using GitHub, LinkedIn, TypeScript (or any FE framework that uses it), games from any Microsoft-owned studio, no Linux kernel contributions, GHC contributions, ...
It is kind of hard to avoid nowadays.
Here's a session with him on the history of VSCode:
"The Story of Visual Studio Code with Erich Gamma and Kai Maetzel"
This is why I used "(as in ecosystem)" in the first paragraph. It was a bit late when I wrote this comment, and it turned out to be very blurry, meaning-wise.
Java isn't quite what I think of as lightweight. I mean it probably can be, but most Java engineering I know of is all about adding more and more libraries, frameworks, checks, tests, etc.
What's wrong with that? If they re-implemented the whole thing, it would amount to the same code size. It's the JDT language SERVER, not some sort of "headless" software with a UI needlessly bundled.
Yeah, instead of forking VSCode, which is not modification-friendly, they should just use Theia, because it is maintained to be modular and is allowed to be used as a library to build IDEs of your choice.
I would be interested to see a similar analysis of ByteDance's video editor, CapCut (desktop version). The editor itself is amazing; IMO it has the best UI of any video editing software I've used. It's surely full of telemetry and/or spyware, though, and it would be good to know to what extent. I couldn't find any such analysis.
Great analysis, well done!
Since you've already done VSCode, Trae, and Cursor, can you analyse Kiro (the AWS fork)? I'm curious about their data collection practices.
Anecdata but Kiro is much, much, much, much easier to put through corporate procurement compared to its peers. I'm talking days vs months.
This is not because it is better, and I've seen no indication that it would somehow be more private or secure, but most enterprises already share their proprietary data with AWS and have an agreement with AWS that their TAMs will gladly usher Kiro usage under.
Interesting to distinguish that privacy/security as it relates to individuals is taken at face value, while when it relates to corporations it is taken at disclosure value.
This seems perfectly rational. If you're already entrusting all your technical infrastructure to AWS, then adding another AWS service doesn't add any additional supply-chain risk, whereas adding something from another vendor does do that.
I don't want any program on my computer including the OS to make any network calls whatsoever unless they're directly associated with executing GUI/CLI interactions I have currently undertaken as the user. Any exception should be opt-in. IMHO the entire Overton window of these remote communications is in the wrong place.
But the telemetry settings not working and the actions of the Trae moderators to quash any discussion of the telemetry is extremely concerning. People should be able to make informed decisions about the programs they are using, and the Trae developers don't seem to agree.
To further the analogy: sex may be an industry, but not everyone who participates does so commercially. Some who do so commercially may not want to be filmed.
Because software is a tool. It serves the user and only the user, and no one else. Ideally, a device I own should never act in anyone else's interests.
I was interested in learning Dart until the installer told me Google would be collecting telemetry. For a programming language. I’ve never looked at it again.
As a somewhat paranoid person I find this level of paranoia beyond me. Like do you own a car? Or a phone? A credit card? Walk around in public where there are cameras on every block? I don't agree with it at all, but the world we're living in makes it impossible to not be tracked with way more than (usually anonymized) telemetry data.
"file paths (obfuscated)" -- this is likely enough for them to work out who the user is, if they work on open source software. They get granular timing data and the files the user has edited, which they could match with open source PRs in their analytics pipeline.
I suspect they aren't actually doing that, but the GDPR cares not what you're doing with the data, but what is possible with it, hence why any identifier (even "obfuscated") which could lead back to a user is considered PII.
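To make the risk concrete: if the "obfuscation" is just a hash, a dictionary attack over public repos is trivial. A hypothetical Python sketch (I have no idea what scheme Trae actually uses):

  import hashlib

  def obfuscate(path: str) -> str:
      # Stand-in for the client-side "obfuscation"; assume a bare
      # SHA-256, a common but weak choice. Purely hypothetical.
      return hashlib.sha256(path.encode()).hexdigest()

  # Attacker side: pre-hash every file path seen in public repositories.
  public_paths = ["src/main.rs", "lib/telemetry.ts"]  # e.g. from crawled PRs
  rainbow = {obfuscate(p): p for p in public_paths}

  observed = obfuscate("src/main.rs")   # value seen in a telemetry payload
  print(rainbow.get(observed))          # -> "src/main.rs": user de-anonymized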
Your analysis is thorough, and I wonder if their reduction of processes from 33 to 20...(WOW) had anything to do with moving telemetry logic elsewhere (hence increased endpoint activity).
Naming is hard, but if there really were 2 different AI IDEs with nearly identical names, that's no accident.
But it seems like traeide.com is at best someone's extremely misleading web design demo, at worst a scam.
On the traeide website:
> Educational demo only. Not affiliated with ByteDance's Trae AI. Is Trae IDE really free? What's the catch? Yes, Trae IDE is completely free with no hidden costs. As a ByteDance product, it is committed to making advanced AI coding tools accessible to all developers.
By the way, TRAE isn't free anymore; they now provide a premium subscription.
If the latter really is just a web design demo, it has a bunch of red flags. Why the official-sounding domain? Download links for executables!!! If it is just a web design demo for a portfolio, why is there no contact information for the author whose work and skills it's supposed to advertise?
This is true of practically every online community. The vast majority of the users are passive participants, a small fraction contribute, and a small subset of contributors generate most of the content. Reddit is a prime example of this, the numbers are incredibly lopsided there.
This isn't true, this is the sort of toxic "if I have nothing to hide then why value privacy" ideology that got us into this privacy nightmare.
Every single person has "something to hide", and that's normal. It's normal to not want your messages snooped through. It doesn't mean you're a criminal, or even computer-savvy.
Mhhh it is not really about “nothing to hide”, it was more that if you use niche services targeted at privacy, it puts a big target on you.
Like the Casio watches, travelling to Syria, using Tor, Protonmail, etc…
When it is better in reality to have a regular watch, a Gmail with encrypted .zip files or whatever, etc.
It does not mean you are a criminal if you have that Casio watch, but if you have this, plus encrypted emails, plus travel to some countries as a tourist, you are almost certain to put yourself in trouble for nothing, while you tried to protect yourself.
And if you are a criminal, you will put yourself in trouble too, also for nothing, while you tried to protect yourself.
This was the basis of Xkeyscore, and all of that to say that Signal is one very good signal that the person may be interesting.
2. Using a secure, but niche, service is still more secure than using a service with no privacy.
Sure, you can argue using Signal puts a "target" on your back. But there's nothing to target, right? Because it's not being run by Google or Meta. What are they gonna take? There's no data to leak about you.
If I were a criminal, which I'm not, I'd rather rob a bank with an actual gun than with a squirt gun. Even though having an actual gun puts a bigger target on your back. Because the actual gun works - the squirt gun is just kinda... useless.
>If I were a criminal, which I'm not, I'd rather rob a bank with an actual gun than with a squirt gun. Even though having an actual gun puts a bigger target on your back. Because the actual gun works - the squirt gun is just kinda... useless
Actually, there was a case... I can't recall but it might have been in Argentina, where the robbers did explicitly use fake guns when robbing the banks because doing so still actually worked for the purposes of the robbery, and it also reduced their legal liability.
It's a dark pattern called "placebo controls" - giving users the illusion of choice maintains positive sentiment while maximizing data collection, and avoids the PR hit of admitting telemetry is mandatory.
Telemetry toggles add noise to the data at the very least. IMO it's part of the reason you're actually better off with no client-side telemetry at all. Obviously they see it the opposite way.
I very much like the fact that I've come back to a TUI (the Helix editor) recently.
I'm trying Zed too, which, as a commercial product, I believe comes with telemetry as well... but yeah, learning the advanced rules of a personal firewall is always helpful!
1. Try using Pi-hole to block those particular endpoints by making DNS resolution fail; see whether it still works when it can't reach the telemetry endpoints (a sketch of one way to set this up follows after this list).
2. Their ridiculous tracking, disregard of the user preference to not send telemetry, and behavior on the Discord when you mentioned tracking says everything you need to know about the company. You cannot change them. If you don’t want to be tracked, then stay away from Bytedance.
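For point 1, here's a minimal sketch of the Pi-hole side (its FTL resolver is dnsmasq-based; the file name below is hypothetical, and I'm assuming byteoversea.com is the relevant domain, which may be incomplete):

  # /etc/dnsmasq.d/99-block-trae.conf  (hypothetical file name)
  local=/byteoversea.com/    # NXDOMAIN for the domain and every subdomain

  # reload, then verify from a client machine:
  #   pihole restartdns
  #   dig +short anything.byteoversea.com   # should return nothing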
Hate to break it to you, but /etc/hosts only works for apps that use getaddrinfo or similar APIs. Anything that does its own DNS resolution, which coincidentally includes anything Chromium-based, is free to ignore your hosts file.
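Easy to check for yourself. Assuming a hosts entry like "0.0.0.0 byteoversea.com" (and note /etc/hosts can't wildcard subdomains, which is half the problem):

  import socket

  # getaddrinfo consults /etc/hosts first, so this returns 0.0.0.0:
  print(socket.getaddrinfo("byteoversea.com", 443, proto=socket.IPPROTO_TCP))

  # An app that ships its own resolver (Chromium's async DNS, or DoH)
  # sends queries straight to an upstream server and never reads that file.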
But Pi-hole seems equally susceptible to the same issue? If you're really serious about blocking, you'd need some sort of firewall that can intercept TLS connections and parse SNI headers, which typically requires specialized hardware and/or a beefy processor if you want reasonable throughput.
I configured my router to redirect all outbound port 53 UDP traffic to AdGuard Home running on a Raspberry Pi. From the log, it seems to be working reasonably well, especially for apps that do their own DNS resolution, like the Netflix app on my Chromecast. Hopefully they don't switch to DNS over HTTPS any time soon to circumvent it.
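The redirect itself is a single NAT rule on the router. A sketch with iptables, where br-lan and 192.168.1.53 are placeholders for the LAN interface and the Pi's address (TCP/53 deserves the same rule):

  iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 \
    -j DNAT --to-destination 192.168.1.53:53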
DNS over HTTPS depends on the ability to resolve the DoH hostname via DNS, which is blockable via Pi-hole, or on a set of static IPs, which can be blocked by your favorite firewall.
A sufficiently spiteful app could host a DoH resolver/proxy on the same server as its api server (eg. api.example.com/dns-query), which would make it impossible for you to override DNS settings for the app without breaking the app itself.
In the context of snooping on the SNI extension, you definitely can.
The SNI extension is sent unencrypted as part of the ClientHello (the first part of the TLS handshake). Any router along the way sees the hostname that the client provides in the SNI data, and can drop the packet if it so chooses.
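For the curious, extracting it is mundane byte-level parsing. A Python sketch, assuming the whole ClientHello arrives in a single unfragmented TLS record:

  import struct

  def sni_from_client_hello(record: bytes) -> str | None:
      # TLS record header: type(1) version(2) length(2); 0x16 = handshake
      if len(record) < 5 or record[0] != 0x16:
          return None
      pos = 5
      if record[pos] != 0x01:              # handshake type 1 = ClientHello
          return None
      pos += 4                             # handshake type(1) + length(3)
      pos += 2 + 32                        # client version + random
      pos += 1 + record[pos]               # session id
      (n,) = struct.unpack_from("!H", record, pos)
      pos += 2 + n                         # cipher suites
      pos += 1 + record[pos]               # compression methods
      (n,) = struct.unpack_from("!H", record, pos)
      pos += 2
      end = pos + n
      while pos + 4 <= end:                # walk the extensions
          ext_type, ext_len = struct.unpack_from("!HH", record, pos)
          pos += 4
          if ext_type == 0:                # 0 = server_name (SNI)
              # list len(2), entry type(1), name len(2), then the name
              (name_len,) = struct.unpack_from("!H", record, pos + 3)
              return record[pos + 5 : pos + 5 + name_len].decode("ascii")
          pos += ext_len
      return None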
When the nefarious actor is already inside the house, who knows to what lengths they will go to circumvent the protections? External network blocker is more straightforward (packets go in, packets go out), so easier to ensure that there is nothing funny happening.
On Apple devices, first-party applications get to circumvent LittleSnitch-like filtering. Presumably harder to hide this kind of activity on Linux, but then you need to have the expertise to be aware of the gaps. Docker still punches through your firewall configuration.
So that these domains are automatically blocked on all devices on a local network. Also, you can't really edit the hosts file on Android or iOS, but I guess mobile OSes are not part of the discussion here.
Although there are caveats -- if an app decides to use its own DNS server, sometimes secure DNS, you are still out of luck. I just recently discovered that Android webview may bypass whatever DNS your Wi-Fi points to.
Yeah, that was my point. I'm not sure what's so breathtaking about what ByteDance is doing. I'm not a fan. But with Meta, Google, Microsoft, and I'll throw in Amazon, a huge chunk of the general public's web activity is tracked. Everywhere. All the time. The people have spoken; they are okay with being tracked. I've yet to talk with a non-technical person who was shocked that their online activity was tracked. They know it is. They assume it is. ByteDance's range of telemetry does not matter to them. Just wanna keep on tiktok'ing. Why does telemetry sent to Bytedance matter? Is it a China thing? I'm not concerned about a data profile on me in China. I'm concerned about the ones here in the US. I'll stop. I'm not sure I have a coherent point.
I can also suggest OpenSnitch or Portmaster to anyone who's conscious of these network connections. I couldn't live without them; never trust opt-outs.
I wonder how many of these telemetry events can sneakily exfiltrate arbitrary data like source code. For example, they could encode arbitrary data into span IDs, timestamps (millisecond and nanosecond components), or other per-event UIDs. It may be slow...but surely it's possible.
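A sketch of the encoding idea in Python: the low bits of a nanosecond timestamp are effectively free bandwidth, one payload byte per event (hence slow):

  import time

  def stamp_with_byte(b: int) -> int:
      # Overwrite the bottom 8 bits of a ns timestamp with a payload byte;
      # the result still looks like a perfectly plausible timestamp.
      return (time.time_ns() & ~0xFF) | b

  stamps = [stamp_with_byte(b) for b in b"secret"]   # sent as "timing data"
  print(bytes(s & 0xFF for s in stamps))             # receiver side: b"secret"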
Well there's a middle ground - Sublime Text isn't free but it's fantastic and isn't sending back all my code/work to the Chinese Government. Sorry, "Telemetry"
And on the other side of the middle ground there's Grafana: AGPL, but requiring you to disable 4 analytics flags and 1 gravatar flag, and (I think) one of their default dashboards was also fetching news from a Grafana URL.
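From memory, the grafana.ini knobs in question look roughly like this (flag names may differ across Grafana versions, so treat this as a sketch to check against the docs):

  ; grafana.ini
  [analytics]
  reporting_enabled = false
  check_for_updates = false
  check_for_plugin_updates = false
  feedback_links_enabled = false

  [security]
  disable_gravatar = true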
As for why people outside these companies use their products, it usually comes down to two reasons: a) Their employer has purchased licenses and wants employees to use them, either for compliance or to get value from the investment; or b) They genuinely like the product—whether it’s because of its features, price, performance, support, or overall experience.
Spying and telemetry are not specific to Bytedance. Example: Google? Or Microsoft? Why is it a problem only when it is Bytedance or Huawei, for the exact same activity?
In fact, the Chinese entities are even less likely to share your secrets with your government than their best friends at Google.
No one in the chain of comments you are replying to has mentioned anything about Google, and on HackerNews you will find the majority sentiment is against spying in all forms - especially by Google, Meta, etc.
Even if we interact with your rhetoric[1] at face value, there is a big difference between data going to your own elected government versus that of a foreign adversary.
So you are implying at the end that it is better that your secrets ("telemetry") go to your local agencies and to possible relatives or family who work on Gmail, Uber, etc.?
I'm sorry but why? Your government can use this data to actually hurt you and put you on the no-fly list, or even put you in prison.
But a foreign government is limited in what it can do to you if you are not a very high-value target.
So I try as much as possible to use software and services from a non-friendly government because this is the highest guarantee that my data will not be used against me in the future.
And since we can all agree that any data that is collected will end up with the government one way or another, using foreign software is the only real guarantee.
Unless the software is open source and its server is self-hosted, it should be considered Spyware.
In my mind, the difference is that spying does or can contain PII, or PII can be inferred from it, whereas telemetry is incapable of being linked to an individual, to a reasonable extent.
Every single piece of telemetry sent over the internet includes PII - the IP address of the sender - by virtue of how our internet protocols are designed.
Apple provides telemetry services that strips the IP before providing it to the app owners. Routing like this requires trust (just as a VPN does), but it's feasible.
You said it's different from spying because there is no PII in the information. Now you're saying it's different because it's not given to app owners.
Why is it relevant whether they provide it to app owners directly? The issue people have is the information is logged now and abused later, in whatever form.
This is like saying every physical business is collecting PII because employees can technically take a photo of a customer. It's hard to do business without the possibility of collecting PII.
No, it's like saying a business that has a CCTV camera recording customers, and sending that data off site to a central location, where they proceed to use the data for some non-PII-related purpose (maybe they're tracking where in stores people walk, on average), is in fact sending PII to that off-site location.
Distinguishing factors from your example include
1. PII is actually encoded and handled by computer systems, not the mere capability for that to occur.
2. PII is actually sent off site, not merely able to be sent off site.
3. It doesn't assert that the PII is collected, which could imply storage; it merely asserts that it is sent, as my original post does. We don't know whether or not it is stored after being received and processed.
Anonymized or not, opt-out telemetry is plain spying. Go was about to find out, and they backed out at the last millisecond and converted it to opt-in, for example.
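For reference, the shape Go ended up shipping (from memory, as of Go 1.23 or so, so verify against the docs): collection is local-only by default, and uploading requires an explicit opt-in:

  go telemetry          # print the current mode (default: local, no uploads)
  go telemetry on       # opt in to uploading
  go telemetry off      # disable collection entirely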
Telemetry can be implemented well. The software you use gets bugs fixed much faster, since you get statistics showing that some bugs have higher impact than others. And the more users software has, the less skilled they are, on average, at accurately reporting any issues.
The PowerShell team at Microsoft added opt-out telemetry to track when it was launched so they could make the case internally that they should get more funding, and have more internal clout.
It’s easy to argue that if you are a PowerShell user or developer you benefit from no telemetry, but it’s hard to argue that you benefit from the tool you use being sidelined or defunded because corporate thinks nobody uses it. “Talk to your users” doesn’t solve this because there are millions of computers running scripts and no way to know who they are or contact them even if you could contact that many people, and they would not remember how often they launched it.
> it’s hard to argue that you benefit from the tool you use being sidelined or defunded because corporate thinks nobody uses it.
Let the corporation suffer then. With an open API, a third party will make a better one. Microsoft can buy that; corporations have a habit of doing that.
> “Talk to your users” doesn’t solve this because there are millions of computers running scripts
Why are you worried about the problems that scripts face? If the developer encounters issues in scripts, the developer can work to fix it. Sometimes that might mean filing a bug report... or a feature request for better documentation. Or the developer might get frustrated and use something better. Like bash.
> there are millions of computers running scripts and no way to know who they are or contact them
Why do they matter to you, or a corporation then?
> they would not remember how often they launched it.
If your users aren't interacting with you for feature requests and bug reports, then either you don't have users or you don't have good enough reachability from the users to you.
To take that logic to its extreme: I'm sure we could have amazing medical breakthroughs if we just gave up that pesky 'don't experiment on non-consenting humans' hang-up we have.
I think I was speaking aspirationally, in that the spirit of the guidelines precludes us from relating to the site and each other in such a way. We are ends, not means to an end, if that end involves subverting curious discussion. To my reading, shaming isn’t compatible with arguing in good faith as dang has helped me to understand through breaking the guidelines in strange new ways myself.
I’d be happy to email mods if you think they can tell you better. I’m no authority or guide, as my own comment history shows. I’m not better than you, or who I am replying to. I say this because I care to have a discussion that (only?) HN can enable. We can backbite anywhere (else) online, but the folks who created HN made it for something else, and arguably something greater.
props to OP for the screenshots and payloads - that's how you do it. If any IDE wants trust, they know the recipe - make telemetry opt-in by default and provide a real kill switch.
It's cheap; the AI features cost about half of what other editors are charging ($10/mo), and the free tier has a generous limit. I guess you pay the difference with something else :)
I’m one of those who use it—mainly because it’s cheap, as others have mentioned. I wish Cursor offered a more generous limit so I wouldn’t need another paid subscription. But it won’t. So Trae comes in — fulfilling that need and sealing the deal. This is what we call competition: it brings more freedom and helps everyone get what they want. Kudos to the competition!
I'm not defending Trae’s telemetry — just pointing out the hard truth about why pricing works and why many people care less about privacy concerns (all because there are no better alternatives for them, considering the price.)
By the way, for those who care more about pricing($7.5/M) — here you go: https://www.trae.ai/. It’s still not as good as Cursor overall (just my personal opinion), but it’s quite capable now and is evolving fast — they even changed their logo in a very short time. Maybe someday it could be as competitive as Cursor — or even more so.
Honestly great to see this. This is the power that FB/Microsoft/Google have if they ever decided to take the gloves off. Maybe this will be the motivating factor to get some privacy laws with fangs.
If you continue to send telemetry after I explicitly opt out, then I get to sue (or at least get a cut of the fines for whistleblowing).
I'm with you, but I don't see the problem with their argument. They should have mentioned GDB, Valgrind, and maybe things like pdb and ruff, but I think their point was clear enough without it. Hell, in vim I use ruff for linting and you can jump into a debugger. When you have it configured that way people do refer to it as an IDE. It isn't technically correct but it gets the point across to the people who wouldn't know that
What is there in an IDE today, that is missing from (n)vim? With the advent of DAP and LSP servers, I can't find anything that I would use a "proper" IDE for.
- popup context windows for docs (kind of there, but having to respect the default character grid makes them much less capable and usually they don't allow further interaction)
- contextual buttons on a line of code (sure, custom commands exist, but they're not discoverable)
Don't IDEs use DAP as well? That would mean neovim has 1:1 feature parity with IDEs when it comes to debugging. I understand the UI/UX might need some customization, but it's not like the defaults in whatever IDE fit everyone either.
Popup context windows for docs are super good in neovim; I'd bet they are actually better than what you find in IDEs, because they can use treesitter for automatic syntax highlighting of example code. Not sure what you mean by further interaction.
Contextual buttons are named code actions, and are available, and there are like 4 minimap plugins to choose from.
These are called "balloons"[1]. Plenty of people have setups for things like docs (press "K") or other things (by default "K" assumes a man page).
> contextual buttons on a line of code
I don't know what this means, can you explain?
> minimap
Do you mean something like this?[2] Personally, I use tagbar[3] as I like using ctags and being able to jump around in the project.
The "minimap" is the only one here that isn't native. You can also have the file tree on the left if you want. Most people tend to use NerdTree[4], but like with a lot of plugins, there's builtins that are just as good. Here's the help page for netrw[5], vim's native File Explorer
Btw, this all works in vim. No need for neovim for any of this stuff. Except for the debugger, this stuff has been here for quite some time. The debugger has been around as a plugin for awhile too. All this stuff has been here since I started using vim, which was over a decade ago (maybe balloons didn't have as good of an interface? Idk, it's been awhile)
And are not interactive as far as I know. I've not seen a way to get a balloon on the type in another balloon and then go to the browser docs from a link in that.
> Do you mean something like this?
Yes, but that's still restricted to terminal characters (you could probably do something fancy with sixel, but still) - for larger files with big indents it's not useful anymore.
> contextual buttons on a line of code
For example options to refactor based on the current location. I could construct this manually from 3 different pieces, but this exists in other IDEs already integrated and configured by default. Basically where's the "extract this as named constant", "rename this type across the project" and others that I don't have to implement from scratch.
I mean you use completion, right? That's interaction? In insert mode <C-p> or <C-n>, same to scroll through options.
> [tagbar is] still restricted to terminal characters (you could probably do something fancy with sixel,
Wait... you want it as an image? I mean... sure? You could, but I'm really curious why you would want that. I told you this was one option, but there are others. Are you referring to the one that was more visual and didn't show actual text? Idk, I'm not going to hunt down that plugin for you and I'm willing to bet you that it exists.
> For example options to refactor based on the current location.
First off, when quoting it helps to add more >'s to clarify the depth. So ">>>" in this case. I was confused at first as I didn't say those words (Also, try adding two leading spaces ;)
Second, sure, I refactor all the time. There are 3 methods I know. The best way is probably with bufdo and having all the files opened in buffers (tabs, windows, or panes are not required). But I'm not sure why this is surprising. Maybe you don't know what ctags are? If not, they are what makes all that possible, and I'd check them out because I think it will answer a lot of your questions.
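To make the bufdo approach concrete, a project-wide rename looks something like this (a sketch; OldName/NewName and the *.c glob are placeholders):

  " load every matching file as a buffer, then word-boundary replace + save
  :args **/*.c
  :bufdo %s/\<OldName\>/NewName/ge | update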
> Basically where's the "extract this as named constant", "rename this type across the project"
Correct me if I'm wrong, but you are asking about "search and replace" right? I really do recommend reading about ctags, and I think these two docs will give you answers to a lot more things than just this question[0,1]. Hell, there's even The Primeagen's refactoring plugin in case you want to do it another way that's not vim-native.
But honestly, I really can't tell if you're just curious or trying to defend your earlier position. I mean if you're curious and want to learn more we can totally continue and I'm sure others would love to add more. And in that case I would avoid language like "vim doesn't" and instead phrase it as "can vim ___?", "how would I do ____ in vim?", or "I find ___ useful in VS code, how do people do this in vim?" Any of those will have the same result but not be aggressive. But if you're just trying to defend your position, well... Sun Tzu said you should know your enemy and I don't think you know your enemy.
Very basic one. What I mean is once you get the completion, how do you interact with that view - let's say you want to dig into a type that's displayed. Then you want to get to the longer docs for that type. There's nothing out there that does it as far as I know.
> Wait... you want it as an image?
Yes, the asciiart minimaps are cool, but they really don't have enough resolution for more complex longer files in my experience.
> The best way is probably with bufdo and having all the files opened in a buffer
You see why this is not great, right? That's an extra thing to think about.
> Maybe you don't know what ctags are?
I know. It's step 1 out of many for implementing proper refactoring system.
> but you are asking about "search and replace" right?
Search and replace with language and context awareness. You can diy it in vim or start stacking plugins. Then you can do the same with the next feature (like inserting method stub). But... I can just use an actual IDE with vim mode instead.
> And in that case I would avoid language like "vim doesn't"
Vim doesn't do those things, though. There's a whole ecosystem of plugin-of-the-day additions that add one thing or another. But it turns out it's easier to embed nvim in an IDE than to play with vim plugins until you get something close to an IDE. Been there for years, done that, got tired. VS with vim mode has better IDE features than vim with all the customised plugins.
I guess because I don't use VSC I don't know what you're talking about (can you show me?) but getting docs is not an issue to me. If I want the doc on a function I press K in normal mode.
> That's an extra thing to think about.
Is it? I mean the difference is literally
%s/foo/bar/g
bufdo %s/foo/bar/g
I don't see how that's more than what you'd do in any other system. You want to be able to replace one instance, all instances in the file, and all instances everywhere, right? Those can't all be the exact same command.
And it's not very hard to remember things like bufdo, windo, tabdo, because I'm already familiar with buffers, windows, and tabs. It's not an extra item in memory for me, so no, I don't see it. It's just as easy and clear as if I clicked a button that said "do to all files".
> Search and replace with language and context awareness
You mean ins-completion? That's native. I can complete things from other files (buffers), ctags, and whatever. You can enable auto suggest if you really want but that's invasive for me and distracting. But to each their own. I mean the right setup is only the right setup for you, right?
> Vim doesn't do those things though.
Yet I'm really not sure what's missing. I'll give you the minimap but I personally don't really care about that one. Is it that big of a deal? (I already know what percentage of the file I'm in and personally I'd rather the real estate be used for other things. But that's me). But so far this conversation has been you telling me vim doesn't do something, me showing you it does, and you just saying no. To me it just sounds like you don't know vim. It's cool, most people don't read docs ¯\_(ツ)_/¯
I mean there's a lot of stuff that people who have been using vim for years don't know but is in vimtutor. I mean how many people don't know about basic things like ci, completion (including line or file path), or <C-[>? How many people use :wq lol
I just like vim man. You don't have to, that's okay. I like that I can do marks. I love the powerful substitution system. I mean I can just write the signatures of my init functions and automatically create the class variables. Or deal with weird situations like this time some Python code had its documentation above the function and I could just bufdo a string replace to turn those into proper docstrings. I love that I can write macros on the fly, trivially, and can apply them generously. I love registers and how powerful they are. I mean I can write the command I want on a line, push it into a register, and then just call it with @. It's trivial to add to my rc if I like it enough. I love that it's really easy to drop in a local config file that sets the standards for the project I'm working on when it differs from my defaults and I can even share that with everyone! I really like the fact that I can have my editor on just about every nix machine and I don't even need to install it. I can work effectively on a novel machine disconnected from the internet.
I mean my love for vim isn't really just the navigation. But even in the navigation I'm constantly using bindings that most vim plugins don't have. It's not easier for me to use another system and add vim keybindings because that's only a very small portion of vim. I'd rather have all of vim and a fuck ton more of my resources.
I don't think you understand what I mean with the language aware rename. It's not even close to %s. Let's say I've got a c# app and I rename a class in VS. This will rename the class in the file, all class usages (but not as text - if I rename A.B, then it will not touch X.B due to different namespaces), including other projects in the solution, optionally will rename the file it lives in and optionally will/won't replace the text in comments. All listed for review and approval and I don't have to have any of those files open ahead of time.
gcc/as/ld are batch processors from the GNU toolchain that offer few (if any) features beyond basic C/C++ (and a handful of other languages) support, and they're non-standard toolchains on 2 of the 3 major operating systems, requiring a bit of heavy lifting to use.
It's kind of nonsense to bring them up in this conversation.
I install vscode from scratch, install a few extensions I need, set 3 or 4 settings I use regularly, and bang in 5 minutes I have a customized, working environment catered for almost any language.
vi? Good luck with that.
And I say that as an experienced vim user who used to tinker a bit.
> I install vscode from scratch, install a few extensions I need, set 3 or 4 settings I use regularly, and bang in 5 minutes I have a customized, working environment catered for almost any language.
Weird, I'd say that's my experience with vim. I just carry around my dotfiles, which are not that extensive.
Hell, I will even feel comfortable in a vi terminal, though that's extremely rare to actually find. Usually vi is just remapped to vim
Edit:
The git folder with *all* my dotfiles (which includes all my notes) is just 3M, so I can take it anywhere. If I install all the vim plugins I currently have (some of which are old and unused), the total is ~100M. So...
You misread. I'm using 74K for *vim* configs. (Mostly because I have a few files for organization's sake)
I rounded up to 3M from 2.3M, and 1.4M of that is .git lol. 156K is all my rc files, another 124K for anything that goes into ~/.config, 212K for my notes, 128K for install scripts, 108K for templates, and 108K for scripts.
I'll repeat myself, with the *same emphasis* as above. Hopefully it's clearer this time.
>> The git folder with **all** my dotfiles (which includes all my notes) is just 3M
I was just saying it's pretty simple to carry *everything* around, implying that this is nothing in comparison to something like a plugin or even VScode itself. I mean I went to the VScode plugin page and a lot of these plugins are huge. *All* of the plugins I have *combined* (including unused) are 78M. The top two most installed VSC plugins are over 50M. Hell, the ssh plugin is 28M! I don't have a single plugin that big!
I'm eying Zed. Unfortunately I am dependent on a VS Code extension for a web framework I use. VS Code might have gotten to a critical level of network effect with their extensions, which might make it extremely sticky.
Sad to hear that. I really enjoyed VS Codium before I jumped full-time into Nova.
(Unsolicited plug: If you're looking for a Mac-native IDE, and your needs aren't too out-of-the-ordinary, Nova is worth a try. If nothing else, it's almost as fast as a TUI, and the price is fair.)
> Why isn't there a decently done code editor with VSCode level features but none of the spyware garbage?
Because no other company was willing to spend enough money to reach critical mass other than Microsoft. VSCode became the dominant share of practically every language that it supported within 12-18 months of introduction.
This then allowed things like the Language Server Protocol which only exists because Microsoft reached critical mass and could cram it down everybody's throat.
Because telemetry is how you effectively make a decently done editor. If you don't have telemetry, yours will likely be lower quality, and you will be copying from other editors that are able to effectively build what users want.
VSCode is extremely unsafe and you should only use it in a managed, corporate environment where breaches aren't your problem. This goes with any fork, as well.
If you signed a Nondisclosure agreement with your employer, and you use—without approval—a tool that sends telemetry, you may be liable for a breach of the NDA.
Opening IDEA after those three days was the same kind of feeling I imagine you’d get when you take off a too tight pair of shoes you’ve been trying to run a marathon in.
ymmv, of course, but for $dayjob I can’t even be arsed trying anything else at this point, it’s so ingrained I doubt it’ll be worth the effort switching.
Hi HN,
I was evaluating IDEs for a personal project and decided to test Trae, ByteDance's fork of VSCode. I immediately noticed some significant performance and privacy issues that I felt were worth sharing. I've written up a full analysis with screenshots, network logs, and data payloads in the linked post.
Here are the key findings:
1. Extreme Resource Consumption:
Out of the box, Trae used 6.3x more RAM (~5.7 GB) and spawned 3.7x more processes (33 total) than a standard VSCode setup with the same project open. The team has since made improvements, but it's still significantly heavier.
2. Telemetry Opt-Out Doesn't Work (It Makes It Worse):
I found Trae was constantly sending data to ByteDance servers (byteoversea.com). I went into the settings and disabled all telemetry. To my surprise, this didn't stop the traffic. In fact, it increased the frequency of batch data collection. The telemetry "off" switch appears to be purely cosmetic.
3. What's Being Sent:
Even with telemetry "disabled," Trae sends detailed payloads including:
Hardware specs (CPU, memory, etc.)
Persistent user, device, and machine IDs
OS version, app language, user name
Granular usage data like time-on-ide, window focus state, and active file types.
4. Community Censorship:
When I tried to discuss these findings on their official Discord, my posts were deleted and my account was muted for 7 days. It seems words like "track" trigger an automated gag rule, which prevents any real discussion about privacy.
I believe developers should be aware of this behavior. The combination of resource drain, non-functional privacy settings, and censorship of technical feedback is a major red flag. The full, detailed analysis with all the evidence (process lists, Fiddler captures, JSON payloads, and screenshots of the Discord moderation) is available at the link. Happy to answer any questions.
I'm sure you didn't mean to, but you've crossed into being aggressive with another user. Please don't do that on HN—not with anyone, and least of all new users who deserve to be welcomed and treated charitably, not harassed for not already knowing HN's arcane and rather primitive formatting rules*.
(First, thanks for this reply, which was nicer and more receptive than I was expecting!)
We don't want LLM-generated content any more than you do, and I'm confident that the vast majority of the community agrees. The problem is that there are lots of nuances yet to be worked out, so we shouldn't be heavy-handed.
Is it ok, for example, for a non-native speaker to use an LLM to fix up their English? I'd say it's clear that HN would be better off with what they originally wrote; non-native speakers are totally welcome, nearly always do just fine, and we want to hear people in their own voice, not passed through a mechanical filter. But someone who doesn't know HN very well and is nervous about their English couldn't know that.
People need to be treated gently. The cost of being hostile to newcomers, instead of focusing on what's interesting about their work, drowns out the benefit of enforcing conventions. It drives away people we ought to embrace. The community's zeal for protecting HN is admirable, but can easily turn unhelpful. The biggest threat to this site is not that its quality will collapse (it has been relatively stable for years now*), but that it will die due to lack of new users.
Like I said, I'm sure you didn't mean to have that effect. The problem is that most of us underestimate such effects in our own comments, so we end up propagating it without meaning to. The fact that several users replied indicating that they felt this way (edit: I mean that they felt your original post was too hostile) is a strong indicator.
* not that it's all that great, but these things are relative
Like a lot of the threads where you come in on my comments with "actually let's be a dick about this, I'll start" the rest of the thread seems to agree that some bad formatting in no way undermines an important bit of signal, that some language barrier adjacent friction is a tithe of noise to pay for it, and that erring on the side of tolerance around non-native speakers trying to navigate an English -dominated forum and industry is the better move.
I don't know how I got on your shitlist shortlist (I'll assume you don't do this with everyone), but give it a rest.
By coincidence, I was just doing the same thing on the original comment. Since it's more or less identical now to what you posted, I'm going to move this subthread underneath https://news.ycombinator.com/item?id=44703481, where it will probably make more sense.
(Incidentally, mostly all we both did was add newlines, and it's on my list to see if I can get the software to do this automatically without screwing up other posts.)
I don't think the post you're replying to deserves what feels like an aggressive dressing-down. The reality is that post formatting on HN is really janky and it's hard to get it to do what you want. I don't blame someone for failing to master HN formatting on one of their first tries.
I've been here since like 2008 and I still screw it up sometimes. I'd have a bot do it, but HN comments are one of my "zero AI" zones. To echo sibling, it's time to support markdown; it's a standard now, and even in Arc a reasonable subset is a pittance on anyone's time budget. It doesn't need to match GH bug for bug, that's a real project, but code formatting `backticks` is table stakes.
I wonder if it's healthy that where we end up at an aggressive dressing-down is when we review people's submitted work and point out that the second half is LLM-generated 4-item lists.
I see a lot of confused comments blaming Microsoft, so to clarify: This analysis is about TRAE, a ByteDance IDE that was forked from VSCode: https://www.trae.ai/
I can't prove it, but I think that's untrue. Anecdotally, I've only heard MS using it in the last 10 years or so, and it's been pretty common terminology for years before that.
Last 10 years is right - Windows 10 was when they went all-in, and that was released in 2015. Before that, "telemetry" usually referred to situations where the same entity owned both ends of the data collection, so "consent" wasn't even necessary.
Microsoft caught flak for backporting telemetry to Windows 7 in the Windows 8/8.1 era. They really started sucking down data in Windows 10, but their spying started years before that.
Yeah. One of the most frustrating things about modern gaming is companies collecting metrics about how their game is played, then publishing "X players did Y!" pages. They're always interesting, but.... why can't I see those stats for my own games?! Looking at you, Doom Eternal and BG3.
You can capture the telemetry data with an HTTPS MITM and read it yourself.
Or (if you're working lower level) you can see an obfuscated function is emitting telemetry, saying "User did X", then you can understand that the function is doing X.
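If anyone wants to try: mitmproxy is the usual tool for this. A sketch, assuming you can install its CA certificate on the device and the app doesn't pin certificates (telemetry.example.com stands in for the real endpoint):

  mitmdump --listen-port 8080 --set flow_detail=3 "~d telemetry.example.com"
  # then point the app/device at this host:8080 as its HTTP(S) proxy;
  # matching flows are printed with headers and bodies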
> You can capture the telemetry data with a HTTPS MITM and read it yourself.
That's not helping me, the user.
That's helping me, the developer.
> Or (if you're working lower level) you can see an obfuscated function is emitting telemetry, saying "User did X", then you can understand that the function is doing X.
I agree, while ByteDance has played their cards fairly safe in regards to how much we know about their links to CCP, it doesn’t mean they are a “good” or “trustworthy” company.
Calling out people who trust software from ByteDance, and not calling out people who trust software by Microsoft (i.e. VSCode), is a bit hypocritical. Both are faceless corps that produce unethical software.
The point of the post was that it was highlighting a contrast with how VSCode normally works. If they were the same, this would not be a post about Bytedance, but a post about Microsoft.
Don't forget that the remote editing feature in VSCode has you install non-free binaries on the remote machines. It's not like netdir where it just wraps openssh.
I'm always surprised when corporate IT departments allow that given what's not allowed these days.
No? The non-free binary is the client not the server. There are free implementations of the client for Code OSS.
The server installed on the remote machine is part of vscode itself called the REH (remote extension host). It’s the core of the editor running in headless mode on the server.
And responsible for a huge amount of disk usage on our experiments shared machines (due to a bunch of students using vscode, with each user having their own copies of magic binaries). I wonder if there's an easy way to block it...
Microsoft's telemetry policies for VSCode aren't great, but there's a big difference between "defaulting to opt-in" and "sending even more data when the user turns telemetry off". Your post is a stupid and incorrect bit of whataboutism.
Is it just me or does the formatting of this feel like ChatGPT (numbered lists, "Key Takeaways", and just the general phrasing of things)? It's not necessarily an issue if you checked over it properly, but if you did use it then it might be good to mention that for transparency, because people can tell anyway and it might feel slightly disingenuous otherwise.
Don't pay any attention to people giving you shit for using translation software. A lot of us sometimes forget that the whole world knows a little English, and most of us native speakers have a ridiculous luxury in getting away with being too lazy to learn a few other languages.
I think it's good form to mention it as a little disclaimer, just so people don't take it the wrong way. Just write: (this post was originally written by me but formatted and corrected with an LLM, since English is not my primary language).
From what I've seen, people generally do not like reading generated content, but every time I've seen the author come back and say "I used it because English isn't my main language" the community takes back the criticism. So I'd just be upfront about it and get ahead of it.
I wasn't annoyed about it, I just said it might be good to mention because people will notice anyway, and at this point there's enough AI slop around that it can make people automatically ignore it so it would be good to explain that. I'm surprised I got downvotes and pushback for this, I thought it was a common view that it's good to disclose this kind of thing and I thought I was polite about it
To be clear I think this has good information and I upvoted it, it’s just that as someone else said it’s good to get ahead of anyone who won’t like it by explaining why and also it can feel a little disingenuous otherwise (I don’t like getting other people to phrase things for me either for this reason but maybe that’s just me)
It's disingenuous to call LLMs "translation software", and it's bad advice to say "don't pay attention those people".
Even if you don't agree with it, publishing AI-generated content will exclude from one's audience the people who won't read AI-generated content. It is a tradeoff one has to decide whether or not to make.
I'm sympathetic to someone who has to decide whether to publish in 'broken english' or to run it through the latest in grammar software. For my time, I far prefer the former (and have been consuming "broken english" for a long while, it's one of the beautiful things about the internet!)
Part of the problem with using LLMs for translation is precisely that they alter the tone and structure of what you give them, rewriting in LLM clichés and style, so it's unsurprising people see that and just assume completely generated slop. It's unfortunate, and I would probably try to use LLMs if English weren't my first language, but I don't think it's as simple as "using translation software". I've not seen people called out that way for dodgy Google Translate translations, for example; it's a problem specific to LLMs and the fundamental issues with the output they produce.
God forbid people actually learn the language they're trying to communicate in. I'd much rather read someone's earnest but broken English than LLM slop anyway.
I'd rather you write in broken English than filter it through an LLM. At least that way I know I'm reading the thoughts of a real human rather than something that may have its meaning slightly perturbed.
> might be good to mention that for transparency, because people can tell anyway and it might feel slightly otherwise
Devil's advocate: why does it matter (apart from "it feels wrong")? As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?
It is relevant because it wastes time and adds nothing of substance. An AI can only output as much information as was put into it. Using it to write a text just makes the text unnecessarily more verbose.
The last few sections could have been cut entirely and nothing would have been lost.
Edit: In the process of writing this comment, the author removed 2 sections (and added an LLM acknowledgement), of which I referred to in my previous statement. To the author, thank you for reducing the verbosity with that.
AI-generated content is rarely published with the intention of being informative. * Something being apparently AI-generated is a strong heuristic that something isn't worth reading.
We've been reading highly-informative articles with "bad English" for decades. It's okay and good to write in English without perfect mastery of the language. I'd rather read the source, rather than the output of a txt2txt model.
* edit -- I want to clarify, I don't mean to imply that the author has ill will or intent to misinform. Rather, I intend to describe the pitfalls of using an LLM to adapt one's text, inadvertently adding a very strong flavor of spam to something that is not spam.
True, but there are many more people that speak no English, or so badly that an article would be hard to understand.
I face this problem now with the classes I teach. It's an electronics lab for physics majors. They have to write reports about the experiments they are doing. For a large fraction, this task is extraordinarily hard, not because of the physics, but because of writing in English. So for those, LLMs can be a gift from heaven. On the other hand, how do I make sure that the text is not fully LLM-generated? If anyone has ideas, I'm all ears.
I don't have any ideas to help you there. I was a TA in a university, but that was before ChatGPT, and it was an expectation to provide answers in English. For non-native English speakers, one of the big reasons to attend an English-speaking university was to get the experience in speaking and reading English.
But I also think it's a different thing entirely. It's different being the sole reader of text produced by your students (with responsibility to read the text) compared to being someone using the internet choosing what to read.
Because AI use is often a strong indicator of a lack of soundness. Especially if it's used to the point where its structural quirks (like a love for lists) shine through.
Because AI isn't so hot on the "I" yet, and if you ask it to generate this kind of document it might just make stuff up. And there is too much content on the internet to delve deep on whatever you come across to understand the soundness of it. Obviously you need to do it at some point with some things, but few people do it all the time with everything.
Pretty much everyone has heuristics for content that feels like low quality garbage, and currently seeing the hallmarks of AI seems like a mostly reasonable one. Other heuristics are content filled with marketing speak, tons of typos, whatever.
I can't decide to read something because the conclusions are sound. I have to read the entire thing to find out if the conclusions are sound. What's more, if it's an LLM, it's going to try its gradient-following best to make unsound reasoning seem sound. I have to be an expert to tell that it is a moron.
I can't put that kind of work into every piece of worthless slop on the internet. If an LLM says something interesting, I'm sure a human will tell me about it.
The reason people are smelling LLMs everywhere is because LLMs are low-signal, high-effort. The disappointment one feels when a model starts going off the rails is conditioning people to detect and be repulsed by even the slightest whiff of a robotic word choice.
edit: I feel like we discovered the direction in which AGI lies but we don't have the math to make it converge, so every AI we make goes completely insane after being asked three to five questions. So we've created architectures where models keep copious notes about what they're doing, and we carefully watch them to see if they've gone insane yet. When they inevitably do, we quickly kill them, create a new one from scratch, and feed it the notes the old one left. AI slop reads like a dozen cycles of that. A group effort, created by a series of new hires, silently killed after a single interaction with the work.
> As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?
TL;DR: Because of the bullshit asymmetry principle. Maybe the conclusions below are sound, have a read and try to wade through ;-)
Let us address the underlying assumptions and implications in the argument that the provenance of a report, specifically whether it was written with the assistance of AI, should not matter as long as the conclusions are sound.
This position, while intuitively appealing in its focus on the end result, overlooks several important dimensions of communication, trust, and epistemic responsibility. The process by which information is generated is not merely a trivial detail, it is a critical component of how that information is evaluated, contextualized, and ultimately trusted by its audience. The notion that it feels wrong is not simply a matter of subjective discomfort, but often reflects deeper concerns about transparency, accountability, and the potential for subtle biases or errors introduced by automated systems.
In academic, journalistic, and technical contexts, the methodology is often as important as the findings themselves. If a report is generated or heavily assisted by AI, it may inherit certain limitations, such as a lack of domain-specific nuance, the potential for hallucinated facts, or the unintentional propagation of biases present in the training data. Disclosing the use of AI is not about stigmatizing the tool, but about providing the audience with the necessary context to critically assess the reliability and limitations of the information presented. This is especially pertinent in environments where accuracy and trust are paramount, and where the audience may need to know whether to apply additional scrutiny or verification.
Transparency about the use of AI is a matter of intellectual honesty and respect for the audience. When readers are aware of the tools and processes behind a piece of writing, they are better equipped to interpret its strengths and weaknesses. Concealing or omitting this information, even unintentionally, can erode trust if it is later discovered, leading to skepticism not just about the specific report, but about the integrity of the author or institution as a whole.
This is not a hypothetical concern, there are numerous documented cases (eg in legal filings https://www.damiencharlotin.com/hallucinations/) where lack of disclosure about AI involvement has led to public backlash or diminished credibility. Thus, the call for transparency is not a pedantic demand, but a practical safeguard for maintaining trust in an era where the boundaries between human and machine-generated content are increasingly blurred.
https://news.ycombinator.com/item?id=44706580
https://theia-ide.org/
click
> Please login to use this demo
close tab
* https://code.visualstudio.com/Docs/languages/markdown#_inser...
I don’t mind a project being done and in maintenance mode. But I am not investing my time into starting to use it.
The Getting Started page has broken screenshots on AWS.
https://www.youtube.com/watch?v=TTYx7MCIK7Y
I don't do Web Development; I live in the trenches. Since I don't own a desktop system anymore, I honestly don't game.
I'm exposed to them via systemd and the Linux kernel, yes, but at least both are licensed under the GPL.
At least I'm trying to minimize my exposure.
For more context, please see https://news.ycombinator.com/item?id=44634786
Thanks for the video, btw. I'll take a look the moment I have time.
My bad.
https://marketplace.visualstudio.com/items?itemName=redhat.j...
This is not because it is better, and I've seen no indication that it would somehow be more private or secure, but most enterprises already share their proprietary data with AWS and have an agreement with AWS that their TAMs will gladly usher Kiro usage under.
Interesting distinction: privacy/security as it relates to individuals is taken at face value, while as it relates to corporations it is taken at disclosure value.
not saying this is good, but everyone does this
Handwaving away this abuse of privacy by saying "everyone does it because it makes money" is a gross justification.
no one forces you to use these tools; you can use other tools that suit your needs
if you read the terms of service and privacy policy and still use the product, then you have agreed to them, and the company has the right to do this
But the telemetry settings not working and the actions of the Trae moderators to quash any discussion of the telemetry is extremely concerning. People should be able to make informed decisions about the programs they are using, and the Trae developers don't seem to agree.
Unique Identifiers: Machine ID, user ID, device fingerprints
Workspace Details: Project information, file paths (obfuscated)
Plus OS details.
I'd rather none.
I was interested in learning Dart until the installer told me Google would be collecting telemetry. For a programming language. I’ve never looked at it again.
I keep it disabled for both Dart and Flutter.
We all pick our own battles.
Anonymization is usually a lie:
https://news.ycombinator.com/item?id=20513521
https://news.ycombinator.com/item?id=21428449
Also please stop with security/privacy nihilism, https://news.ycombinator.com/item?id=27897975
I suspect they aren't actually doing that, but the GDPR cares not what you're doing with the data, but what is possible with it, hence why any identifier (even "obfuscated") which could lead back to a user is considered PII.
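For illustration, a minimal Python sketch (with a hypothetical identifier) of why "obfuscation" by hashing doesn't change that analysis: the hash of a stable machine ID is itself a stable pseudonym, so every payload from the same machine still links together.

  import hashlib

  # Hypothetical raw identifier -- the value doesn't matter, only its stability.
  machine_id = "3f2a9c-hypothetical-device-id"

  # "Obfuscate" it by hashing: the raw value is hidden...
  pseudonym = hashlib.sha256(machine_id.encode()).hexdigest()

  # ...but the same machine produces the same pseudonym on every send, so
  # records remain linkable, and anyone holding the raw ID can recompute it --
  # which is exactly why the GDPR still treats it as personal data.
  assert pseudonym == hashlib.sha256(machine_id.encode()).hexdigest()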
Your analysis is thorough, and I wonder if their reduction of processes from 33 to 20...(WOW) had anything to do with moving telemetry logic elsewhere (hence increased endpoint activity).
What does Bytedance say regarding all this?
Would be sad if the wrong one is murdered.
But it seems like traeide.com is, in the best case, someone's extremely misleading web design demo, and in the worst case a scam.
On the traeide website:
> Educational demo only. Not affiliated with ByteDance's Trae AI. Is Trae IDE really free? What's the catch? Yes, Trae IDE is completely free with no hidden costs. As a ByteDance product, it is committed to making advanced AI coding tools accessible to all developers.
By the way TRAE isn't free anymore, they now provide a premium subscription.
If the latter really is just a web design demo, it has a bunch of red flags. Why the official-sounding domain? Download links for executables!!! If it is just a web design demo for a portfolio, why is there no contact information for the author whose work and skills it's supposed to advertise?
Dang said a similarly small minority of users here do all the commenting.
https://old.reddit.com/r/slatestarcodex/comments/9rvroo/most...
HN discussion of that link for anyone curious
Every single person has "something to hide", and that's normal. It's normal to not want your messages snooped through. It doesn't mean you're a criminal, or even computer-savvy.
Like the Casio watches, travelling to Syria, using Tor, Protonmail, etc…
When it is better in reality to have a regular watch, a Gmail with encrypted .zip files or whatever, etc.
It does not mean you are a criminal if you have that Casio watch, but if you have this, plus encrypted emails, plus travel to some countries as a tourist, you are almost certain to put yourself in trouble for nothing, while you tried to protect yourself.
And if you are a criminal, you will put yourself in trouble too, also for nothing, while you tried to protect yourself.
This was the basis of XKeyscore, and all of that is to say that Signal is one very good signal that a person may be interesting.
2. Using a secure, but niche, service is still more secure than using a service with no privacy.
Sure, you can argue using Signal puts a "target" on your back. But there's nothing to target, right? Because it's not being run by Google or Meta. What are they gonna take? There's no data to leak about you.
If I were a criminal, which I'm not, I'd rather rob a bank with an actual gun than with a squirt gun. Even though having an actual gun puts a bigger target on your back. Because the actual gun works - the squirt gun is just kinda... useless.
Actually, there was a case... I can't recall but it might have been in Argentina, where the robbers did explicitly use fake guns when robbing the banks because doing so still actually worked for the purposes of the robbery, and it also reduced their legal liability.
I'm trying Zed too, which I believe, as a commercial product, comes with telemetry too... but yeah, learning the advanced rules of a personal firewall is always helpful!
1. Try using Pi-hole to block those particular endpoints by making DNS resolution fail; see if it still works when it can't access the telemetry endpoints (a quick verification sketch follows below).
2. Their ridiculous tracking, disregard of the user preference to not send telemetry, and behavior on the Discord when you mentioned tracking says everything you need to know about the company. You cannot change them. If you don’t want to be tracked, then stay away from Bytedance.
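To make point 1 concrete, here's a rough Python check of whether the sinkhole took effect; byteoversea.com is the only hostname named in the write-up, so any other endpoints you spot would need adding:

  import socket

  # If the Pi-hole block works, resolution either fails outright or
  # returns a sinkhole address like 0.0.0.0.
  for host in ["byteoversea.com"]:
      try:
          print(host, "->", socket.gethostbyname(host))
      except socket.gaierror:
          print(host, "-> blocked (no DNS answer)")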
The SNI extension is sent unencrypted as part of the ClientHello (the first part of the TLS handshake). Any router along the way sees the hostname that the client provides in the SNI data, and can/could drop the packet if they so choose.
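A minimal sketch of that point with Python's standard ssl module: the name passed as server_hostname is copied into the SNI field of the ClientHello, which is transmitted before any keys are negotiated, so every hop can read which host you're contacting even though everything after the handshake is encrypted.

  import socket
  import ssl

  ctx = ssl.create_default_context()
  with socket.create_connection(("example.com", 443)) as raw:
      # server_hostname becomes the plaintext SNI in the ClientHello.
      with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
          print(tls.version())  # handshake done; the SNI already went out in the clear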
On Apple devices, first-party applications get to circumvent LittleSnitch-like filtering. Presumably harder to hide this kind of activity on Linux, but then you need to have the expertise to be aware of the gaps. Docker still punches through your firewall configuration.
In fact, most web browsers are using DoH, so Pi-hole is useless in that regard.
Although there are caveats -- if an app decides to use its own DNS server, sometimes secure DNS, you are still out of luck. I just recently discovered that Android webview may bypass whatever DNS your Wi-Fi points to.
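To illustrate how little it takes for an app to sidestep the local resolver, here's a minimal DoH lookup against Cloudflare's public JSON API (assumes the requests library is installed):

  import requests

  # The query travels inside ordinary HTTPS to a hardcoded resolver,
  # so the DNS server your network hands out never sees it.
  r = requests.get(
      "https://cloudflare-dns.com/dns-query",
      params={"name": "example.com", "type": "A"},
      headers={"accept": "application/dns-json"},
      timeout=5,
  )
  print(r.json().get("Answer"))  # resolved records, local sinkhole or not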
For what it's worth, I do use Google products personally. But I won't go near Facebook, WhatsApp, or Instagram.
https://github.com/grafana/tempo/discussions/5001#discussion...
(Yes, that's for Grafana tempo, but the issue in `grafana/grafana` was just marked as duplicate of this.)
I work at Apple, so I’m not concerned about being monitored—it’s all company-owned equipment and data anyway.
It was the same when I worked at Microsoft. I used Microsoft products exclusively, regardless of any potential privacy concerns.
Employees at Google and Amazon do the same. It’s known as “dogfooding”—using your own products to test and improve them (https://en.wikipedia.org/wiki/Eating_your_own_dog_food).
As for why people outside these companies use their products, it usually comes down to two reasons: a) Their employer has purchased licenses and wants employees to use them, either for compliance or to get value from the investment; or b) They genuinely like the product—whether it’s because of its features, price, performance, support, or overall experience.
In this case, the software being analyzed is the alternative that sucks.
Unless you're somehow saying telemetry doesn't report anything about what a user is doing to its home server.
In fact, the Chinese entities are even less likely to share your secrets with your government than their best friends at Google.
Even if we interact with your rhetoric[1] at face value, there is a big difference between data going to your own elected government versus that of a foreign adversary.
[1] https://en.wikipedia.org/wiki/Whataboutism
But a foreign government is limited in what it can do to you if you are not a very high-value target.
So I try as much as possible to use software and services from a non-friendly government because this is the highest guarantee that my data will not be used against me in the future.
And since we can all agree that any data that is collected will end up with the government one way or another, using foreign software is the only real guarantee.
Unless the software is open source and its server is self-hosted, it should be considered Spyware.
"What about Google" is not a logical continuation of this discussion
It should be a crime for Google as well.
"Whataboutism" is a logical fallacy.
https://en.wikipedia.org/wiki/Whataboutism
Apple provides telemetry services that strips the IP before providing it to the app owners. Routing like this requires trust (just as a VPN does), but it's feasible.
Why is it relevant whether they provide it to app owners directly? The issue people have is the information is logged now and abused later, in whatever form.
So many US universities running such nodes, without ever getting into legal trouble. Such lucky boys.
Distinguishing factors from your example include
1. PII is actually encoded and handled by computer systems, not the mere capability for that to occur.
2. PII is actually sent off site, not merely able to be sent off site.
3. It doesn't assert that the PII is collected, which could imply storage, it merely asserts that it is sent as my original post does. We don't know whether or not it is stored after being received and processed.
Try talking to your users instead.
> The more users software has, less skills they have in average to accurately report any issues.
No amount of telemetry will solve that.
It’s easy to argue that if you are a PowerShell user or developer you benefit from no telemetry, but it’s hard to argue that you benefit from the tool you use being sidelined or defunded because corporate thinks nobody uses it. “Talk to your users” doesn’t solve this because there are millions of computers running scripts and no way to know who they are or contact them even if you could contact that many people, and they would not remember how often they launched it.
https://learn.microsoft.com/en-us/powershell/module/microsof...
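For what it's worth, the opt-out there is at least documented: PowerShell 7+ honors the POWERSHELL_TELEMETRY_OPTOUT environment variable. A small sketch (assuming pwsh is on your PATH) of launching it with telemetry disabled:

  import os
  import subprocess

  # Copy the environment and set the documented opt-out flag before starting
  # pwsh, so the session it runs sends no usage telemetry.
  env = dict(os.environ, POWERSHELL_TELEMETRY_OPTOUT="1")
  subprocess.run(["pwsh", "-NoLogo", "-Command", "$PSVersionTable.PSVersion"], env=env)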
Let the corporation suffer then. With an open API, a third party will make a better one. Microsoft can buy that; corporations have a habit of doing that.
> “Talk to your users” doesn’t solve this because there are millions of computers running scripts
Why are you worried about the problems that scripts face? If the developer encounters issues in scripts, the developer can work to fix it. Sometimes that might mean filing a bug report... or a feature request for better documentation. Or the developer might get frustrated and use something better. Like bash.
> there are millions of computers running scripts and no way to know who they are or contact them
Why do they matter to you, or a corporation then?
> they would not remember how often they launched it.
If your users aren't interacting with you for feature requests and bug reports, then either you don't have users or you don't have good enough reachability from the users to you.
Corporations provide value to others. It's not just the corporation that is missing out.
To be clear, I consent to send telemetry from some of the tools I use and deploy.
Their common pattern? They wait a bit, and ask nicely about whether I want to participate. Also, the dialog box asking the question defaults to off.
I read the fine print, look at the data they push, ponder, and decide whether I'm cool with it or not.
Give me choice, be upfront and transparent. Then we can have a conversation.
HN is not your army, and it isn’t a theater of ideological battle.
Please don’t do that here.
I’d be happy to email mods if you think they can tell you better. I’m no authority or guide, as my own comment history shows. I’m not better than you, or who I am replying to. I say this because I care to have a discussion that (only?) HN can enable. We can backbite anywhere (else) online, but the folks who created HN made it for something else, and arguably something greater.
I'm not defending Trae’s telemetry — just pointing out the hard truth about why pricing works and why many people care less about privacy concerns (all because there are no better alternatives for them, considering the price.)
By the way, for those who care more about pricing ($7.5/M) — here you go: https://www.trae.ai/. It’s still not as good as Cursor overall (just my personal opinion), but it’s quite capable now and is evolving fast — they even changed their logo in a very short time. Maybe someday it could be as competitive as Cursor — or even more so.
Because the person using it works at Bytedance.
I guess your question is better phrased as: “Why would any non-Bytedance employee use Bytedance VSCode fork?”, to which I have no answer.
If you continue to send telemetry after I explicitly opt out, then I get to sue (or at least get a cut of the fines for whistleblowing).
- popup context windows for docs (kind of there, but having to respect the default character grid makes them much less capable and usually they don't allow further interaction)
- contextual buttons on a line of code (sure, custom commands exist, but they're not discoverable)
- "minimap"
Popup context windows for docs are super good in neovim, I would make a bet that they are actually better than what you find in IDEs, because they can use treesitter for automatic syntax highlighting of example code. Not sure what you mean with further interaction.
Contextual buttons are called code actions and are available, and there are like 4 minimap plugins to choose from.
How do I get a memory graph with custom event markers overlayed on it then? That's the default for VS for example.
The "minimap" is the only one here that isn't native. You can also have the file tree on the left if you want. Most people tend to use NerdTree[4], but like with a lot of plugins, there's builtins that are just as good. Here's the help page for netrw[5], vim's native File Explorer
Btw, this all works in vim. No need for neovim for any of this stuff. Except for the debugger, this stuff has been here for quite some time. The debugger has been around as a plugin for a while too. All of this has been here since I started using vim, which was over a decade ago (maybe balloons didn't have as good of an interface? Idk, it's been a while).
[0] https://vimdoc.sourceforge.net/htmldoc/debugger.html
[1] https://vimdoc.sourceforge.net/htmldoc/options.html#'balloon...
[2] https://github.com/wfxr/minimap.vim
[3] https://github.com/preservim/tagbar
[4] https://github.com/preservim/nerdtree
[5] https://vimhelp.org/pi_netrw.txt.html#netrw
And are not interactive as far as I know. I've not seen a way to get a balloon on the type in another balloon and then go to the browser docs from a link in that.
> Do you mean something like this?
Yes, but that's still restricted to terminal characters (you could probably do something fancy with sixel, but still) - for larger files with big indents it's not useful anymore.
> contextual buttons on a line of code
For example options to refactor based on the current location. I could construct this manually from 3 different pieces, but this exists in other IDEs already integrated and configured by default. Basically where's the "extract this as named constant", "rename this type across the project" and others that I don't have to implement from scratch.
Second, sure, I refactor all the time. There are 3 methods I know. The best way is probably with bufdo and having all the files opened in buffers (tabs, windows, or panes are not required). But I'm not sure why this is surprising. Maybe you don't know what ctags are? If not, they are what makes all that possible and I'd check them out because I think it will answer a lot of your questions.
Correct me if I'm wrong, but you are asking about "search and replace" right? I really do recommend reading about ctags, and I think these two docs will give you answers to a lot more things than just this question[0,1]. Hell, there's even The Primeagen's refactoring plugin[2] in case you wanted to do it another way that's not vim-native.

But honestly, I really can't tell if you're just curious or trying to defend your earlier position. I mean if you're curious and want to learn more we can totally continue and I'm sure others would love to add more. And in that case I would avoid language like "vim doesn't" and instead phrase it as "can vim ___?", "how would I do ____ in vim?", or "I find ___ useful in VS code, how do people do this in vim?" Any of those will have the same result but not be aggressive. But if you're just trying to defend your position, well... Sun Tzu said you should know your enemy and I don't think you know your enemy.
[0] https://vim.fandom.com/wiki/Browsing_programs_with_tags
[1] https://vim.fandom.com/wiki/Search_and_replace_in_multiple_b...
[2] https://github.com/ThePrimeagen/refactoring.nvim
Very basic one. What I mean is once you get the completion, how do you interact with that view - let's say you want to dig into a type that's displayed. Then you want to get to the longer docs for that type. There's nothing out there that does it as far as I know.
> Wait... you want it as an image?
Yes, the asciiart minimaps are cool, but they really don't have enough resolution for more complex longer files in my experience.
> The best way is probably with bufdo and having all the files opened in a buffer
You see why this is not great, right? That's an extra thing to think about.
> Maybe you don't know what ctags are?
I know. It's step 1 of many for implementing a proper refactoring system.
> but you are asking about "search and replace" right?
Search and replace with language and context awareness. You can diy it in vim or start stacking plugins. Then you can do the same with the next feature (like inserting method stub). But... I can just use an actual IDE with vim mode instead.
> And in that case I would avoid language like "vim doesn't"
Vim doesn't do those things though. There's a whole ecosystem of plugins of the day that add one thing or another. But it turns out it's easier to embed nvim in an IDE than play with vim plugins until you get something close to an IDE. Been there for years, done that, got tired. VS with vim mode has better IDE features than vim with all the customised plugins.
And it's not very hard to remember things like bufdo, windo, tabdo because I'm already familiar with a buffer, tab, and window. It's not an extra item in memory for me, so no, I don't see. It's just as easy and clear as if I clicked a button that said "do this to all files".
You mean ins-completion? That's native. I can complete things from other files (buffers), ctags, and whatever. You can enable auto suggest if you really want, but that's invasive for me and distracting. But to each their own. I mean the right setup is only the right setup for you, right? Yet I'm really not sure what's missing. I'll give you the minimap, but I personally don't really care about that one. Is it that big of a deal? (I already know what percentage of the file I'm in, and personally I'd rather the real estate be used for other things. But that's me.) But so far this conversation has been you telling me vim doesn't do something, me showing you it does, and you just saying no. To me it just sounds like you don't know vim. It's cool, most people don't read docs ¯\_(ツ)_/¯

I mean there's a lot of stuff that people who have been using vim for years don't know but is in vimtutor. I mean how many people don't know about basic things like ci, completion (including line or file path completion), or <C-[>? How many people use :wq lol
I just like vim man. You don't have to, that's okay. I like that I can do marks. I love the powerful substitution system. I mean I can just write the signatures of my init functions and automatically create the class variables. Or deal with weird situations like this time some Python code had its documentation above the function and I could just bufdo a string replace to turn those into proper docstrings. I love that I can write macros on the fly, trivially, and can apply them generously. I love registers and how powerful they are. I mean I can write the command I want on a line, push it into a register, and then just call it with @. It's trivial to add to my rc if I like it enough. I love that it's really easy to drop in a local config file that sets the standards for the project I'm working on when it differs from my defaults and I can even share that with everyone! I really like the fact that I can have my editor on just about every nix machine and I don't even need to install it. I can work effectively on a novel machine disconnected from the internet.
I mean my love for vim isn't really just the navigation. But even in the navigation I'm constantly using bindings that most vim plugins don't have. It's not easier for me to use another system and add vim keybindings because that's only a very small portion of vim. I'd rather have all of vim and a fuck ton more of my resources.
It's kind of nonsense to bring them up in this conversation.
vi? Good luck with that.
And I say that as an experienced vim user who used to tinker a bit.
Hell, I will even feel comfortable in a vi terminal, though that's extremely rare to actually find. Usually vi is just remapped to vim
Edit:
The git folder with *all* my dotfiles (which includes all my notes) is just 3M, so I can take it anywhere. If I install all the vim plugins I currently have (some of which are old and unused), the total is ~100M. So...
You misread. I'm using 74K for *vim* configs. (Mostly because I have a few files for organization's sake)
I rounded up to 3M from 2.3M, and 1.4M of that is .git lol. 156K is all my rc files, another 124K for anything that goes into ~/.config, 212K for my notes, 128K for install scripts, 108K for templates, and 108K for scripts.
I'll repeat myself, with the *same emphasis* as above. Hopefully it's clearer this time.
I was just saying it's pretty simple to carry *everything* around, implying that this is nothing in comparison to something like a plugin or even VScode itself. I mean I went to the VScode plugin page and a lot of these plugins are huge. *All* of the plugins I have *combined* (including unused) are 78M. The top two most installed VSC plugins are over 50M. Hell, the ssh plugin is 28M! I don't have a single plugin that big!

Any recommendations?
This seems like an easy win for a software project
JetBrains products. Can work fully offline and they don't send "telemetry" if you're a paying user: https://www.jetbrains.com/help/clion/settings-usage-statisti...
Isn't that what VS Codium is for?
Either way it uses electron. Which I hate so much.
Sad to hear that. I really enjoyed VS Codium before I jumped full-time into Nova.
(Unsolicited plug: If you're looking for a Mac-native IDE, and your needs aren't too out-of-the-ordinary, Nova is worth a try. If nothing else, it's almost as fast as a TUI, and the price is fair.)
what other software packages have 200 year old jokes about them?
Microsoft is content with funding it, the price is your telemetry (for now).
For high quality development tools I use true FOSS; or I pay for my tools to avoid not knowing where the value is being extracted.
The price of VSCode is the halo effect for Azure products.
Specifically: the remote code extension, the C/C++ extension and the Python extension.
Because no other company was willing to spend enough money to reach critical mass other than Microsoft. VSCode became the dominant share of practically every language that it supported within 12-18 months of introduction.
This then allowed things like the Language Server Protocol which only exists because Microsoft reached critical mass and could cram it down everybody's throat.
Opening IDEA after those three days was the same kind of feeling I imagine you’d get when you take off a too tight pair of shoes you’ve been trying to run a marathon in.
ymmv, of course, but for $dayjob I can’t even be arsed trying anything else at this point, it’s so ingrained I doubt it’ll be worth the effort switching.
https://news.ycombinator.com/newsguidelines.html
* (which, btw, are at https://news.ycombinator.com/formatdoc)
Something I'm chewing on: we're in a bit of a sticky wicket if new accounts can post LLM-generated content and pointing that out is considered aggressive.
We don't want LLM-generated content any more than you do, and I'm confident that the vast majority of the community agrees. The problem is that there are lots of nuances yet to be worked out, so we shouldn't be heavy-handed.
Is it ok, for example, for a non-native speaker to use an LLM to fix up their English? I'd say it's clear that HN would be better off with what they originally wrote; non-native speakers are totally welcome, nearly always do just fine, and we want to hear people in their own voice, not passed through a mechanical filter. But someone who doesn't know HN very well and is nervous about their English couldn't know that.
People need to be treated gently. The cost of being hostile to newcomers, instead of focusing on what's interesting about their work, drowns out the benefit of enforcing conventions. It drives away people we ought to embrace. The community's zeal for protecting HN is admirable, but can easily turn unhelpful. The biggest threat to this site is not that its quality will collapse (it has been relatively stable for years now*), but that it will die due to lack of new users.
Like I said, I'm sure you didn't mean to have that effect. The problem is that most of us underestimate such effects in our own comments, so we end up propagating it without meaning to. The fact that several users replied indicating that they felt this way (edit: I mean that they felt your original post was too hostile) is a strong indicator.
* not that it's all that great, but these things are relative
I bet I sound a lot dumber than this when I'm renting servers in Hong Kong in Cantonese.
I don't know how I got on your shitlist shortlist (I'll assume you don't do this with everyone), but give it a rest.
Hi HN, I was evaluating IDEs for a personal project and decided to test Trae, ByteDance's fork of VSCode. I immediately noticed some significant performance and privacy issues that I felt were worth sharing. I've written up a full analysis with screenshots, network logs, and data payloads in the linked post. Here are the key findings:
1. Extreme Resource Consumption: Out of the box, Trae used 6.3x more RAM (~5.7 GB) and spawned 3.7x more processes (33 total) than a standard VSCode setup with the same project open. The team has since made improvements, but it's still significantly heavier.
2. Telemetry Opt-Out Doesn't Work (It Makes It Worse): I found Trae was constantly sending data to ByteDance servers (byteoversea.com). I went into the settings and disabled all telemetry. To my surprise, this didn't stop the traffic. In fact, it increased the frequency of batch data collection. The telemetry "off" switch appears to be purely cosmetic.
3. What's Being Sent: Even with telemetry "disabled," Trae sends detailed payloads including:
- Hardware specs (CPU, memory, etc.)
- Persistent user, device, and machine IDs
- OS version, app language, user name
- Granular usage data like time-on-ide, window focus state, and active file types
4. Community Censorship: When I tried to discuss these findings on their official Discord, my posts were deleted and my account was muted for 7 days. It seems words like "track" trigger an automated gag rule, which prevents any real discussion about privacy.
I believe developers should be aware of this behavior. The combination of resource drain, non-functional privacy settings, and censorship of technical feedback is a major red flag. The full, detailed analysis with all the evidence (process lists, Fiddler captures, JSON payloads, and screenshots of the Discord moderation) is available at the link. Happy to answer any questions.
By coincidence, I was just doing the same thing on the original comment. Since it's more or less identical now to what you posted, I'm going to move this subthread underneath https://news.ycombinator.com/item?id=44703481, where it will probably make more sense.
(Incidentally, mostly all we both did was add newlines, and it's on my list to see if I can get the software to do this automatically without screwing up other posts.)
Also, I hope getting to #1 on the front page with your first post led to more positive vibes than getting harangued led to negative ones :)
extra
line breaks
so that there's enough space between your sentences.
Can you please expand on that? I have trouble understanding how telemetry helps me, as a user of the product, understand how the product works.
Or (if you're working lower level) you can see an obfuscated function is emitting telemetry, saying "User did X", then you can understand that the function is doing X.
That's not helping me, the user.
That's helping me, the developer.
> Or (if you're working lower level) you can see an obfuscated function is emitting telemetry, saying "User did X", then you can understand that the function is doing X.
Again, it helps me, the developer.
Neither of these help me, the user.
I'm always surprised when corporate IT departments allow that given what's not allowed these days.
The server installed on the remote machine is part of vscode itself called the REH (remote extension host). It’s the core of the editor running in headless mode on the server.
Here is an example open source implementation of the non-free binary client (it’s 4 files that call various editor APIs): https://github.com/xaberus/vscode-remote-oss
This is the REH entry (the binary installed on the remote machine): https://github.com/microsoft/vscode/tree/main/src/vs/server/...
(or maybe you just have a similar writing style)
From what I've seen, people generally do not like reading generated content, but every time I've seen the author come back and say "I used it because it isn't my main language," the community takes back the criticism. So I'd just be upfront about it and get ahead of it.
Even if you don't agree with it, publishing AI-generated content will exclude from one's audience the people who won't read AI-generated content. It is a tradeoff one has to decide whether or not to make.
I'm sympathetic to someone who has to decide whether to publish in 'broken english' or to run it through the latest in grammar software. For my time, I far prefer the former (and have been consuming "broken english" for a long while, it's one of the beautiful things about the internet!)
It's clear that this isn't what OP was doing. The LLM was writing, not merely translating. dang put it well:
> we want people to speak in their own voice
https://news.ycombinator.com/item?id=44704054
Devil's advocate: why does it matter (apart from "it feels wrong")? As long as the conclusions are sound, why is it relevant whether AI helped with the writing of the report?
simple as