I've seen this at so many startups (and worked to patch the gaps and put in best practices), including those backed by top-tier VCs. The problem is that it's rare for startups to have security-minded people.
It's usually designers, people who can raise money, and generalists who can stitch together APIs. It's not generally platform, DB, or security-minded people. The proliferation of things like Vercel and Supabase has exacerbated this.
So you get people deploying API keys client-side and DBs without RLS. Or deploying service keys client-side when they should be anon. I mean really basic stuff.
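For concreteness, here's a minimal sketch of that failure mode, assuming a Supabase-style setup (the URL and key values are illustrative placeholders, not from any real incident):

    import { createClient } from "@supabase/supabase-js";

    // All values here are illustrative placeholders, not real credentials.
    const SUPABASE_URL = "https://your-project.supabase.co";
    const ANON_KEY = "public-anon-key";          // safe to expose only if RLS is on
    const SERVICE_ROLE_KEY = "service-role-key"; // must never reach a client

    // WRONG: the service-role key bypasses Row Level Security entirely.
    // Bundled into client-side code, it hands every visitor unrestricted
    // read/write access to the whole database.
    const admin = createClient(SUPABASE_URL, SERVICE_ROLE_KEY);

    // RIGHT: ship only the anon key, and enable RLS with policies that
    // scope rows to the authenticated user. Without RLS, even the anon
    // key exposes everything it can reach.
    const app = createClient(SUPABASE_URL, ANON_KEY);

The anon key buys nothing on its own; it's the combination of the anon key plus enforced RLS policies that makes a client-side deployment safe.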
Yep, this has been my experience over 15 years in startups as well. There are barely any punishments, so there is no incentive for startups to change how they operate.
I keep getting emails with content like: "I found a critical bypass vulnerability in your app. What is the appropriate channel to disclose it, and do you have a bounty program?"
I tried engaging and replying to them, and it inevitably turns into: "Yeah, we don't actually have the vulnerability, but you are totally vulnerable, just let us do a security audit for you".
I have a pre-written reply for these kinds of messages now.
I run the bug bounty program for a fairly large OSS project, and the volume of spam, bad-actor reports, beg bounties, etc. we get is huge.
Like 95% of the emails to security@ are straight garbage.
Yeah, the signal-to-noise ratio on vulnerability reports is very poor, especially when the initial report withholds any detail.
I get tons of these messages too, and the ones that do include details are the kind of junk you get from free "website vulnerability scanners": findings that mean nothing, like "missing headers" for things I didn't set on purpose, or "information disclosure vulnerabilities" for things that are intentionally there. You can put google.com into these things and get dozens of results.
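For a sense of how shallow those reports are, here's a minimal sketch of roughly what such a scanner does; the header list and output wording are my own illustration:

    // Fetch a page and report whichever security headers are absent,
    // with no notion of whether they actually matter for the site.
    const SECURITY_HEADERS = [
      "content-security-policy",
      "strict-transport-security",
      "x-frame-options",
      "x-content-type-options",
    ];

    async function scan(url: string): Promise<void> {
      const res = await fetch(url, { method: "HEAD" });
      for (const h of SECURITY_HEADERS) {
        if (!res.headers.has(h)) {
          // This is the entire "finding": a missing header, context-free.
          console.log(`"Vulnerability": ${url} is missing ${h}`);
        }
      }
    }

    scan("https://google.com");

Point it at almost any large site and you'll get a pile of "findings", none of which say anything about actual exploitability.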
Sure, that is perhaps a reasonable way to ask about the appropriate channel for disclosing a security vulnerability, but email is not a secure medium for sending the details of one.
That would be even worse than our already bad system.
The system is already pretty bad because vendors underinvest in security, and then to fix it, researchers have to volunteer their time to investigate with no guarantee of payment. If the vendor could force researchers to hand over findings for free, nobody would want to do security research except hobbyists having fun. They're basically signing up for hours of tedious forced labor to explain vulnerabilities to the vendor.
I wish there were legislation that allowed the government to fine vendors for security vulnerabilities like this, with the amount scaling with how much user data was leaked. And it could function like other whistleblower systems, where a researcher who spots a leak can report it to the government and collect 50% of the fine. That way, if the vendor says, "We're not paying you," the researcher can turn around and collect the money from the fines.
Finally the AI security startup hustlers will keep the other tech startup hustlers in line. Maybe the era of devastating leaks and total disregard for user privacy will come to an end (doubtful).
1. I didn't see mention of a bug bounty program giving limited authorization. How do independent researchers do this with legal safety? Especially when DoD is involved?
2. If a researcher discovered a vulnerability at a DoD contractor, and the contractor didn't seem to be resolving the problem, is there a DoD contact point that would be effective and safe for the researcher to report it?
To answer the first question, a number of veteran independent researchers probably wouldn’t have touched such a system. Plenty of companies will send their lawyers after you if you tell them that you’ve discovered a vulnerability of some sort and wish to responsibly disclose. Even if you do things in good faith, the company has zero reason to assume the best from you and can hold a sword over your head by citing poorly-written laws that lean in their favor regarding computer fraud and abuse.
DoD does appear to offer a “Defense Industrial Base - Vulnerability Disclosure Program” for all public-facing DoD/DoW systems.[1] However, this might not include contractor-controlled assets or services. I cannot see more details on the HackerOne page it redirects to, since login is required.

[1]: https://www.dc3.mil/Missions/Vulnerability-Disclosure/DIB-Vu...
> How do independent researchers do this with legal safety?
In my experience it’s usually foreign nationals from third-world countries doing drive-by beg-bounty testing. Presumably they don’t much consider legality.
"There was no meaningful organization scoping, no tenant isolation, and no permission check preventing a low-privilege user from accessing other organizations' records."
Let me guess, though: they are SOC 2 and ISO compliant, right?
One hopes not, as this stuff would have come up in even a cursory audit of the product. But right now it's kind of like the ratings agencies (Moody's et al.) in 2008: it goes on until a big breach occurs post-certification and they lose their credibility.
The number of FISMA-HIGH, ATO’d/RMF’d government systems I’ve seen with equivalent security issues is…substantially nonzero.
I have come to believe that most security audits, even ones conducted through widely-reputed groups or under strict standards, are much worse than useless.
Audits can in theory be done well and add value, but they rarely are, for the same reason that most private-sector security teams I've worked with are effective only at generating internal badwill, and ineffective at raising security above a very low baseline.
Initial take: as vulnerability stories go, this is a pretty boring one; what they have here is a target that was secured largely by the fact that few people knew about it. Most of the work in this blog post goes into establishing that a training platform deployed by DoD might be much more sensitive than the same kinds of applications that are ubiquitous throughout corporate America and are generally boring targets.
The vulnerability itself appears to be something anyone with mitmproxy would have spotted within minutes of looking at the platform; apparently, rotating through object IDs worked everywhere in the app, and there was no meaningful authz (a sketch of that kind of probe is below).
It's interesting if AI systems can "spot" these, in the sense of autonomously exercising the application and "understanding" obvious failed authz check patterns. But it's a "hm, ok, sure" kind of interesting.
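To make the ID-rotation failure concrete, here's a minimal sketch of the kind of probe being described; the endpoint, token, and field names are hypothetical, not taken from the writeup:

    // Authenticate as a low-privilege user, walk a range of object IDs,
    // and flag any record returned from a different organization. A
    // working authz layer would answer 403/404 for foreign records.
    const BASE_URL = "https://app.example.com/api/records"; // assumed endpoint
    const LOW_PRIV_TOKEN = "token-for-a-basic-account";     // placeholder
    const MY_ORG_ID = "org-123";                            // placeholder

    async function probe(id: number): Promise<void> {
      const res = await fetch(`${BASE_URL}/${id}`, {
        headers: { Authorization: `Bearer ${LOW_PRIV_TOKEN}` },
      });
      if (!res.ok) return; // 403/404 is the healthy outcome
      const record = await res.json();
      if (record.orgId !== MY_ORG_ID) {
        console.log(`Missing authz: record ${id} belongs to ${record.orgId}`);
      }
    }

    async function main(): Promise<void> {
      // Sequential and small on purpose; IDs are assumed numeric.
      for (let id = 1; id <= 50; id++) await probe(id);
    }

    main();

The point is how little sophistication this takes: no exploit chain, just changing a number in a URL while logged in as the wrong user.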
I wonder if this is how the Handala group recently stole the list of service members.
How do people find these vulnerabilities within the immense scope of the whole internet? Are they going around with some kind of generic API scanner that discovers APIs?
Would be fascinated to know if this went through competitive procurement or if it was one of those Hegseth “let’s be lethal and ship broken shit to the warfighter” procurements.
Andreessen Horowitz, who most people (and they themselves) refer to as a16z; they have the eponymous domain name (a16z.com). They're one of the top VC firms on the planet -- exceedingly relevant to HN audiences and commonly discussed here.
> you'd rather say Andreessen-Horowitz, which is just as arbitrary as a16z
Yes. I know Andreessen Horowitz and I don't know a16z. Reading the title I thought it would be about a cryptography serialisation specification; it turns out I was mixing it up with ASN.1.
> Their website is literally a16z.com
So I hear, now. Before this, if pressed, I would have guessed that they probably do have a website. If you had twisted my arm, my guess would have been andersenhorovitz.com (yup, with the typos; I learned the correct spelling today from your comment).
I'll be honest: I was thinking authorization (a11n?), so I didn't read it closely enough. But despite that, and despite being on HN from almost the beginning (with a different account I lost the password to), I still didn't know what a16z was, though I do recognize Andreessen Horowitz.
I didn't either. This is an ancient debate that can never be completely resolved, though, because the articles that HN submissions point to don't follow a style guide and there are always assumptions about audience priors. Best to just clear it up and move on.
> So you get people deploying API keys client-side and DBs without RLS. Or deploying service keys client-side when they should be anon.

Well, that's pretty damning.
> Presumably they don't much consider legality.

Or maybe the activity isn't even illegal where they operate from?
> exceedingly relevant for the HN audience
We contain multitudes.