I'll put the two blog posts that people have been linking to in the top text there, so people can read them if they want.
* https://news.ycombinator.com/item?id=45040126
* https://news.ycombinator.com/item?id=45040507
* we'll re-up the post so that it goes to roughly the same place on the frontpage that this submission was at before merging. that relativizes the timestamp (here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...), but I believe longcat's submission was first.
Sorry neuroo - I know it sucks to have a post doing well on HN's frontpage and then plummet. But users are disagreeing about which URL is best, so it seems safest to pick the original/official source and to give the 'credit' to the first submitter.
I found the first submission on the story (https://news.ycombinator.com/item?id=45034496), which used a github url, and merged the thread into it - more explanation at https://news.ycombinator.com/item?id=45042727.
> Run semgrep --config [...]

> Alternatively, you can run nx --version [...]

Have we not learned, yet? The number of points this submission has already earned says we have not.
People, do not trust security advisors who tell you to do such things, especially ones who also remove the original instructions entirely and replace them with instructions to run their tools instead.
The original security advisory is at https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7... and at no point does it tell you to run the compromised programs in order to determine whether they are compromised versions. Or to run semgrep for that matter.
Good callout. Evidence so far points to `nx --version` itself being safe, because the malicious code was in a post-install script, but we changed the recommendation in our post.
We took the versions in the GitHub security advisory and compiled them into a Semgrep rule, which is MIT-licensed: https://semgrep.dev/c/r/oqUk5lJ/semgrep.ssc-mal-resp-2025-08.... Semgrep rules can be overkill for these use cases, but it can be convenient to have a single command to check for all affected versions across multiple packages, especially for our users who already have Semgrep installed. That's basically what I did on all our internal repos.
We updated the blog post to note that the Semgrep rule is MIT-licensed. You can also run it locally with Semgrep (which is LGPL: https://github.com/returntocorp/semgrep) if you curl the rule and run `semgrep --config=rule.yaml`.
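For readers who want to verify without executing anything from the package, the version test itself is trivial. A minimal sketch, assuming a placeholder affected-version list (the real list lives in the GitHub advisory, not here; to get the installed version safely, read it from `node_modules/nx/package.json` rather than running nx):

```shell
# Minimal sketch: decide whether an nx version is on an affected list without
# executing the package. AFFECTED is a PLACEHOLDER -- substitute the real
# list from the GitHub security advisory before relying on this.
AFFECTED="1.0.0 1.1.0"   # placeholder versions, NOT the advisory's list

check_version() {
  for bad in $AFFECTED; do
    if [ "$1" = "$bad" ]; then
      echo "AFFECTED: nx $1"
      return 1
    fi
  done
  echo "ok: nx $1 not in affected list"
}

check_version "1.0.0"   # on the placeholder list
check_version "2.0.0"   # not on it
```

The point is simply that nothing from the package ever runs: the check is pure string comparison on metadata.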
Create a blog post about a security issue. Post it on HN and get upvotes. Find people who believe they might be affected. Let them run the affected program. Boom.
I'm not sure which is worse.
Nope, because the script was committed upstream and you can review what ended up in the package.
It seems a lot of the general "wisdom" here is thrown around by people who have not looked into this particular incident or are unfamiliar with JS/Node development in general.
Correct, but all it takes is one eval, so be diligent about checking. However, like you said, luckily it's JavaScript, and there's a history online that you can see.
Be wary of binary wasm blobs, though, which are harder to analyze. In the end, because it was published and npm lets you see the history, we can all see what shipped.
Still, from a security standpoint, anything compromised within a package compromises the package. Don't install it. Wait for the fix.
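Since npm publishes the exact tarball, the review workflow described above can be done without installing anything. A sketch (the package and its postinstall script are fabricated here so the example is self-contained; with a real package you would fetch the tarball via `npm pack <pkg>@<version>`, which downloads but runs no scripts):

```shell
# Fabricate a tiny package tarball so this sketch is self-contained.
mkdir -p demo-pkg/package
printf '{"name":"demo","version":"1.0.0","scripts":{"postinstall":"node evil.js"}}' \
  > demo-pkg/package/package.json
tar -czf demo-1.0.0.tgz -C demo-pkg package

# 1. List every file that actually shipped in the tarball.
tar -tzf demo-1.0.0.tgz

# 2. Read package.json straight from the archive and look for install hooks
#    (a post-install script is where this payload lived) -- nothing executes.
tar -xzOf demo-1.0.0.tgz package/package.json \
  | grep -o '"postinstall":"[^"]*"'
```

The same two commands work on the tarball of any published version, so you can diff a suspect release against a known-good one.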
@dang Even though the blog post has some helpful flavor, this GH issue seems much more direct and gives much more straightforward guidance for resolving the issue. Is it possible to change the link?
I do not like this kind of coverage. It always focuses on the VS Code extension, which has basically nothing to do with the bug.
The extension only ran the affected programs, of course, but it makes little sense to even talk about VS Code in that case.
If you used an affected nx version, you are affected regardless of whether you used VS Code, WebStorm, or whatever IDE you like. If you used an unaffected nx version, nothing happened no matter which VS Code version you used.
I thought it was useful to include because:
* it can inform triage: if you use the extension, you're more likely to be impacted
* because it was VS Code, Workspace Trust actually partially mitigated this in at least 38 cases
if software development is turning into their demo (https://semgrep.dev/solutions/secure-vibe-coding/) then I'm switching careers to subsistence farming and waiting for the collapse
With all due respect to subsistence farming, I would say that digital tech is already sufficiently "bootstrapped", such that even if the world's industrial base entirely collapses, and we don't have any more chips fabricated, the next century will still be about who can best utilize computers, scrounging up discarded phones and repurposing them to (re)automate farming, manufacturing and drone warfare. Even LLM-based AIs are already entrenched, and I'd expect people to be running ollama and aider/void on solar powered laptops in their tribe's half-destroyed buildings.
The amount of offline documentation required to do so is gargantuan. Try any kind of "repurposing" of any phone -- go for something trivial, not as hard as controlling an automated greenhouse circuit -- and try to do so without the internet -- or let's take the difficulty down a notch, without AI or search engines, wikipedia allowed. The operating system on that phone is likely way too complicated for you to succeed. It's also likely to be locked.
It seems that few to no people understand just how unusual it is to buy an Intel or AMD64 based system and just boot it up. It's the exception in the industry, not the norm. Even the Raspberry Pi relies on the device tree, which is effectively a series of magic numbers for booting the board.
I worked at an enormous company that made embedded products. In the entire company, there were maybe ~12 engineers that knew how to boot up the various products. None of them were capable of booting all the devices. There was another team dedicated to preserving the knowledge they had because when one would retire they didn't even bother handing over all the knowledge. Only active product lines were transitioned to another employee. If a product line was brought back for a new contract and the bootloader was not already available, there were a huge number of man hours budgeted for that activity alone.
> The amount of offline documentation required to do so is gargantuan.
I have Ollama running on my local PC with 128 GB of RAM. If civilization collapses, will my tribe be better off compared to a tribe that doesn't have a similar system running on solar power? I would think so. And if we have a local copy of Wikipedia (25 GB compressed, 150 GB uncompressed with basic images), then we'd be infinitely better off.
My PC isn't anything special and is made of commodity parts.
The tribe members do not have to run ollama on their phones. My PC could be the server that they connect to over tribe wifi.
Capabilities of commodity PCs continue to grow every year. This appears to make a complete civilization collapse near impossible. As long as some of us survive the initial catastrophic event, and the planet can sustain human life, humanity will not be starting from scratch and will bounce back.
This Semgrep post describes a very different prompt from what Nx reported themselves, which suggests the attacker was "live-editing" their payload over multiple releases and intended to go further.
Still, why does the payload only upload the paths to files without their actual contents?
Why would they not have the full attack ready before publishing it? Was it really just meant as a data gathering operation, a proof of concept, or are they just a bit stupid?
This feels more like someone wanted to kick the hornet's nest, and specifically used AI both to get traction for the discussion to latch onto and to get the topic focused on it.
Especially given the .bashrc edit that causes a shutdown. This thing is obviously trying to be as loud as possible without being overly destructive.
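One consequence of being loud is that the persistence is easy to spot. A minimal sketch of scanning a shell rc file for that kind of appended shutdown line (the exact string matched here is an assumption for illustration; use the indicators of compromise from the advisory, and note a fixture file is used so nothing real is touched):

```shell
# Write fixture rc files that mimic the reported tampering, so the sketch is
# harmless and self-contained. The appended line is an assumption here --
# check the advisory's indicators of compromise for the real one.
printf '# normal rc content\nsudo shutdown -h 0\n' > fixture_bashrc
printf '# normal rc content\n' > fixture_clean

scan_rc() {
  if grep -q 'sudo shutdown' "$1"; then
    echo "suspicious: $1 contains a shutdown line"
  else
    echo "clean: $1"
  fi
}

scan_rc fixture_bashrc   # flags the tampered fixture
scan_rc fixture_clean    # passes
# Against a real machine you would scan ~/.bashrc and ~/.zshrc instead.
```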