I wish people would stop sharing this website. Their research is largely LLM-written: it looks good at a glance, but it goes in every direction at once, lacks logical connections, and the claims don't really match their sources.
Their initial publication was backed by a Git repository with hundreds of pages of documents written in just three days (https://web.archive.org/web/20260314224623/https://tboteproj...). It also contained nonsense like an "anomaly report" with recommendations from the LLM agent to itself, covering an analysis of contributors to Linux's BPF, Android's Gerrit, and parser errors when using legislative databases. https://web.archive.org/web/20260314103202/https://tboteproj... . The repository has since been rewritten, though.
This post follows their usual pattern. The second source they link to has been a dead link for 11 months (https://web.archive.org/web/20250501000000*/https://www.pala...). There's a lot about Persona's design, MCPs, vulnerabilities, and data leaks, but nothing proving they use it for mass surveillance. The entire case for it being mass surveillance rests on two points: that they interact with AI companies, and that they offer MCP endpoints (section titled "Persona's Surveillance Architecture").
Thank you. Investigative journalism is so important and I would happily believe some of the claims made here, but when I encounter even just a few sentences that sound LLM-written, suddenly I don't trust any of the statements in the source anymore. This site goes way beyond that, with a vibe-coded UI and generated articles. There might be value in what's reported here, but currently it requires a lot of work from the reader.
The earlier you realize how little IQ and "knowing a lot" indicate whether a person actually knows what they're talking about, the easier life becomes. "Smart" people are wrong all the time; some would say that's how they became smart in the first place.
> In the meantime a FOSS maintainer who is just trying to put the pieces in place to comply with the law (as written) got doxxed and harassed.
In my experience, when a country like Britain passes a censorship law, people in other countries like America don't enjoy being given the tools to comply with it, even if the tools are entirely optional.
I support a rule to ban AI-generated/edited posts.
Initially I thought they'd be fine, because AI-generated isn't intrinsically an issue and the comments can be good. But in practice, the AI posts tend to be slop, and usually there's a better human-written source for the same topic (for example, one of the many other recent "age verification is mass surveillance" posts here).
Part of this was written by AI, but with a human in "charge" who explained which parts were AI-generated. Would that also be a bannable example for you? I am not so convinced that this is bannable per se. Perhaps it would be different if the AI slop was not announced, but what about when it is announced and explained?
> one of the many other recent "age verification is mass surveillance" posts here

Well, it actually is. It taps very much into other similar laws, e.g. "chat control", aka chat sniffing.
I wonder whether private age verification could be solved with the right cryptographic protocol.
You would have to register using a digital ID with a government agency to get an age certificate. Most European countries already have digital IDs, used for all sorts of things, such as taxes and online banking.
Then that certificate could be used in some sort of challenge-response protocol with web sites to verify your age, creating a new user ID in each session but without divulging anything that identifies that particular certificate.
I'm afraid that the alternative would be that social media would instead require login with the digital ID directly.
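The challenge-response idea above can be sketched with a textbook RSA blind signature, a classic building block for unlinkable credentials: the government signs a blinded token after checking your age, and a site can later verify the token without anyone being able to link it back to the issuance. This is a toy with tiny parameters, not production crypto, and the token format is invented for illustration:

```python
# Toy RSA blind signature for an unlinkable "over 18" token.
# Textbook RSA with tiny primes; illustration only, not secure.
import hashlib
import math
import random

# Government key pair (1000003 and 1000033 are the first primes above 10^6).
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse, Python 3.8+

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. User blinds the token before sending it to the government.
token = b"over-18:session-nonce-42"
m = h(token)
r = random.randrange(2, n - 1)
while math.gcd(r, n) != 1:  # r must be invertible mod n
    r = random.randrange(2, n - 1)
blinded = (m * pow(r, e, n)) % n

# 2. Government signs the blinded value after checking the user's age;
#    it never sees the unblinded token.
blind_sig = pow(blinded, d, n)

# 3. User unblinds: (m^d * r) * r^-1 = m^d mod n.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Website verifies the signature using only the public key (e, n).
assert pow(sig, e, n) == m
```

Because the government only ever saw the blinded value, it cannot match the signature the website receives to any issuance record, which is exactly the unlinkability property the comment above asks for.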
In your proposed scheme, it is in the best interest of web sites to store users' certificates indefinitely, since they are the only evidence that proves their users are not minors.
Since authorities have the power to access that data and identify the user who created the certificate, this scheme is not anonymous.
Authorities can access that data via court orders today, or via a global automatic mandatory data sharing law in the future.
In the case of the USA, even if for some reason people still trust the current government (although ICE has already accessed private medical records to track and arrest people), I don't see why they should trust all future governments, which will have retroactive access to all that data.
So let's make it illegal to keep the tokens for more than, e.g., 6 months.
We should not underestimate the power of the legal system to enforce freedom and anonymity. And on the flip side, it's hard to create a technical system which can actually withstand the force of the government if it chooses to come after you.
I believe the correct battlefield for freedom is the political one, in the end it decides everything. And neither guns nor technical tricks can secure freedom against a tyrannical state.
With that said, it does tickle the curiosity to think about! A technical-political solution could be to introduce a new actor, the broker. It sits between the webpage and the age-verifier, receiving the age verification, but then giving its own proofs to the webpage (so acting as a trusted middleman). Now, to match up visitors with identities, you need to get the data from the webpage, the broker, and the age-verifier.
You could imagine that the broker is in a different jurisdiction, maybe even one without close cooperation with the government. Maybe people could even choose their own brokers (among certified ones).
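The broker idea above can be sketched as a three-hop protocol. In this toy, HMACs stand in for real signatures (so each hop only needs a shared secret with its neighbor), and all key names and message formats are invented for illustration:

```python
# Toy sketch of the broker scheme: verifier -> broker -> website,
# with the broker re-issuing its own proof under a fresh session id.
import hashlib
import hmac
import secrets

VERIFIER_BROKER_KEY = secrets.token_bytes(32)  # shared: verifier <-> broker
BROKER_SITE_KEY = secrets.token_bytes(32)      # shared: broker <-> website

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# 1. Age-verifier attests "over-18" for this session only.
session1 = secrets.token_hex(16)
attestation = mac(VERIFIER_BROKER_KEY, f"over-18:{session1}".encode())

# 2. Broker checks the attestation, then issues its own proof under a
#    *fresh* session id, breaking the link between the two hops.
assert hmac.compare_digest(
    attestation, mac(VERIFIER_BROKER_KEY, f"over-18:{session1}".encode())
)
session2 = secrets.token_hex(16)
broker_proof = mac(BROKER_SITE_KEY, f"over-18:{session2}".encode())

# 3. Website verifies the broker's proof; it never sees session1 or
#    anything issued directly by the age-verifier.
assert hmac.compare_digest(
    broker_proof, mac(BROKER_SITE_KEY, f"over-18:{session2}".encode())
)
```

Linking a visitor to an identity now requires records from all three parties: the website (session2), the broker (the session1-to-session2 mapping), and the verifier (the identity behind session1).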
So let's trust all future Governments to never remove the 6-month law?
Once the whole technical system is implemented, it will be trivial to remove that bureaucratic limitation, and somehow it will be sold as better protection for the children.
While this would solve the technical problem at hand, it lacks any safeguard against a very simple workaround: sharing your certificate, or even posting it for everyone to use.
All you need is one authority that defines who can verify an age threshold (the government). Those who verify the age threshold need to know your age and identity (the bank). Those who must restrict access based on age only need to know which country you live in (the website). Nothing else is needed: your bank, identity, and age are not known to the website, and the website is not known to your bank or government.
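The knowledge split described above can be modeled directly: each party's "view" is just the set of fields it learns. This is a data-flow sketch, not a protocol; the field names are illustrative:

```python
# Toy model of the three-party split of knowledge.
government_view = {
    "role": "accredits age-threshold verifiers",
    "knows": {"bank is an accredited verifier"},
}
bank_view = {"knows": {"name", "birth_date", "country"}}

def bank_attest(over_18: bool, country: str) -> dict:
    # The bank forwards only the threshold result and the country,
    # never the underlying identity or birth date.
    return {"over_threshold": over_18, "country": country}

website_view = bank_attest(over_18=True, country="DE")

# The website never learns identity, exact age, or which bank attested.
assert set(website_view) == {"over_threshold", "country"}
assert "name" not in website_view and "birth_date" not in website_view
```

The point of the sketch is that the website's entire view is two fields, so even a full breach of the website leaks nothing that identifies the user.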
I hate this approach to the problem, because it is not a technical problem.
Because it focuses on technical aspects and accepts the premise that 'age verification must be solved'. It doesn't need to be, and discretion over what content, and at what age, children and teenagers can consume should be up to the parents.
That's what I did with my Austrian government ID during the COVID times. I had to go to the embassy to identify myself. Back then the Deutschlandticket was still cheap, so no problem.
Agreed. But it would mean having to educate people on security, privacy, and computing in general… Pretty sure most governments like keeping people uneducated on such things.
You misunderstand. The child protection angle is just a cover story. The actual reason for this legislation is to ban anonymous publishing; to ensure that every post on the internet can be linked back to an identity for retaliation.
Verified anonymous age credentials don’t allow for this, so they don’t matter.
The negative privacy implications are a primary feature of these laws, not a bug. It is intentional.
And what exactly would be the purpose of age verification? Because defining someone "mature" based on their age is pretty hit-and-miss: we have plenty of adults, even of a certain age, who it's hard to imagine have ever finished adolescence, for instance. On paper, they are absolutely of age. We also had a certain Alexander the Great, emperor of a large part of the planet at 20. We had 13-year-old Pharaohs active in government.
We also have gazillions of examples of apparently innocent rules being used to boil Chomsky's frog, one small temperature rise at a time. For the first time in a long while, I'm starting to sense a certain fanaticism on this topic here on HN, which sounds very much like the molecular agitation when water starts to boil.
Always with the increasing government control. Heaven forbid people go online without training wheels. We need safety nets everywhere - a grazed knee means the state failed.
It's easy-ish to verify someone is human and of age without needing any intrusive agent. One big problem is that the folks pushing for surveillance-via-verification hate that model and have the capital to crush the idea. Another is adoption of a system that works, where the perfect blocks the good and the result is no progress.
It's gross and I feel bad that these thoughts even exist, and nobody should ever act on them... but it would be amazing to see the panopticon the ad-tech industry has created blow up in their faces, Looney Tunes style, and see them subjected to a sustained doxing campaign. Not for swatting, and not for hurting, but to see people picket their personal residences and call until their phones no longer work.
At scale this is dangerous and unethical, and it's why society has laws; but when the laws meant to protect me (Equifax) are window dressing, I can't say I would feel awful if Equifax's CEO couldn't communicate via text or email without everyone on planet Earth knowing about it.
>Stores the user's birth date for age verification, as required by recent laws in California (AB-1043), Colorado (SB26-051), Brazil (Lei 15.211/2025), etc.
What do governments get out of this? I get it from an ad/commercial perspective, but I don't see how this can be so unpopular and still keep being implemented by governments.
Seems like even among young voters more people support it than oppose it: 30% of people aged 18-23 are strongly in favor, and 57% of that age group supports it overall.
I wonder why? Maybe these types of surveys don’t consider the implementation / what you need to give up in order to have age verification?
Because the internet, for all its good, has caused society and individuals some pretty serious problems. I don't like the idea of mandatory age verification, but having unrestricted internet access as a kid was objectively bad for me and many of the people I know.
Perhaps the voting population should first be made acutely aware of the extent of surveillance they are under, and how much age verification would expand that surveillance, and then be asked again.
They'll claim they already "know", but watch their opinion change after they get paper mail with a list of recently visited websites, or their words written on public or unencrypted chats, or their movement history thanks to phone spyware.
That's likely, but only if it's possible to materially articulate some specific negative ways in which age verification data is actually being used.
You and I can strongly suspect that there's a significant downside to these providers having so much sensitive personal data but, until that is proven, the voting population will only see the upside.
The death of online anonymity isn't negative and specific enough?
People understand this intuitively - hire someone to obviously follow them everywhere, record everything they do (or only as much as current surveillance records), and they'll want to put a quick stop to it. Do the same thing, but out of sight, out of mind, and their correctly evolved instincts fail to carry over.
Age verification and "banning kids from social media" are two different things; the former is an overzealous method of achieving the latter.
Parental responsibility and better parental controls would be a MUCH better way of going about this.
Of course, the polling public is blissfully unaware of the wide ranging consequences of such an Age Verification implementation. People will continue to pave the road to fascist hell with good intentions.
What the public perceives it to be is the only thing that matters though. The OP question was asking how governments are getting this through, and the answer is the majority approve of what they see to be happening.
The average person is not thinking about the ability for journalists and whistleblowers to create anonymous Facebook accounts, they are thinking about Mark Zuckerberg trying to sell sex chatbots to their kids and discord pedo servers.
You have to understand children are only cute little extensions of their parents until they're 18, but on that day they'd better be ready for the real world™. /s
Disclaimer: talking about functioning democratic governments (obviously authoritarian governments are different).
We do regulate a lot of things to protect the people, especially the children. It's common to make it illegal for children to drink alcohol, smoke stuff, and drive vehicles, and it seems completely natural to many of us. We usually don't say "it should be legal for schools to sell cigarettes and whisky to kids, because it's the responsibility of the parents to educate their kids".
The same applies to the Internet: just like we don't want children to be able to buy porn in a store, we don't want them to be able to access porn on the Internet. Or, more recently, social media. So the obvious idea to prevent that is to do what we do in store: age verification.
The problem on the Internet is mass surveillance, and done incorrectly, age verification adds to that. Technically, we can do age verification in a privacy-preserving way, but:
- Politicians are generally not competent enough to understand "the right technical way", and the tech giants do benefit from surveillance. Even if politicians mean well, incompetence makes it hard for them to make the right decision.
- In some big countries that tend to set the technical norms (e.g. the US), many people completely distrust the government. But private companies have no interest in implementing the privacy-preserving solution, so the only viable way is with the help of government regulations (I would argue that the government should be the ones owning the service).
- The vast majority of people, including the vast majority of politicians, do not understand and do not give a damn about surveillance capitalism. It just does not exist for them. And in those conditions, there is of course no reason to even consider a privacy-preserving solution, because it is technically more complex.
I strongly believe that in many countries they mean well. They are just not competent enough to understand the problem, so they turn to tech giants, who do understand it but have an interest in making sure the politicians implement it wrongly.
In the case of government representatives' role, I think you've reached for Hanlon's razor incorrectly. Malice better explains what is happening here than ignorance. The actual representatives are cardboard with makeup - they each have a whole team of folks doing the detailed diligence on this stuff. That team knows there's a privacy-preserving way to do this. There's a reason those solutions are not the ones on offer. Corporate regulatory capture is behind all of this.
LLM feedback loops are scary because they self-reinforce by training over their own data drift and vulnerable people interface with the noise and follow the downward spiral.
There have been pushes to implement similar instances of this for a while now. If this turns out not to be successful, expect further efforts in a similar guise.
Should society help the child, by making it more difficult for them to access harmful material, in the same way we age verify alcohol?
What if the parent is responsible, but finds themselves in a situation where they don't have the time/ability to either educate or set up robust controls? Should we make their responsibilities easier?
This makes a lot more sense than merely assuming that Meta pushes for it. There are several actors here and none of them have the good of the people in mind. This is why Age Sniffing, labeled "Age Verification", must be abolished. It's an entry door for evil actors. It has nothing to do with age "verification", let alone "protecting the children"; that's just a lie. I am noticing this more and more: if you claim to want to protect children, why do you have underage people on YouTube creating content? How does that make sense if you want to restrict them on the one hand (or everyone else, in addition to that) but then leave the de-facto censorship "loose"? In fact, why are any children viewable on YouTube to begin with? That contradicts those age-sniffing entities.
The internet is not the same as it was 20 years ago. The average person is now online, but they weren't before. They don't understand where they are and need protection. There is still space on the internet, or whatever the next place will be, for the enthusiasts and other minorities. If we lose the internet, something new will pop up. Also, 20 years ago I didn't care so much about privacy on the internet; I just needed a cultural filter for the community I was engaging with. Privacy has always been a game of cat and mouse. Zero chance things stay the same for long.
It's good that for non-SFW stuff you don't need the internet anymore, just 72GB of VRAM for all modalities. Public internet only for news/payments. Everything else can be offline; no more npm or React garbage needed for the frontend either.
In the meantime a FOSS maintainer who is just trying to put the pieces in place to comply with the law (as written) got doxxed and harassed.
I hate it here
For instance, a recent example from yesterday:
https://bugs.ruby-lang.org/issues/21982
- age verification
- chat control
- RTO vs. remote work
- AI bubble
- ditching American tech
Set parental controls at setup, and pass a single flag to websites and apps, similar to the Global Privacy Control.
No privacy is lost. Control is handed to the device owner, and implementation is technically trivial.
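A minimal sketch of that single-flag approach, modeled on how Global Privacy Control's Sec-GPC header works; the header name `Sec-Age-Flag` is hypothetical, invented here for illustration:

```python
# Hypothetical device-level age flag, GPC-style: the device owner sets
# it once, and every site or app just reads one request header.
def is_minor_device(headers: dict) -> bool:
    # "1" means the device owner has flagged this device as a minor's.
    return headers.get("Sec-Age-Flag", "0") == "1"

# A site gates content on the flag alone; no identity is ever sent.
assert is_minor_device({"Sec-Age-Flag": "1"}) is True
assert is_minor_device({}) is False
```

The design trade-off is clear from the sketch: enforcement depends entirely on the device owner setting the flag and on sites honoring it, but no party learns anything about who the user is.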
Fully anonymous + untraceable attestation --> unlimited certificate sharing
https://walt.id/verifiable-credentials
Not government, nor corporations.
You keep your own private key and the government has your public key.
People don't have to know security or cryptography to do their banking online.
Either way it would be infinitely better than the current social security number situation we have.
Lol, I think I just invented wikileaks (derp)
[MERGED]
https://www.theregister.com/2026/03/24/foss_age_verification...
As a parent: the hard-won lesson is that most of this threat surface shrinks when you're genuinely present (listen/talk/educate).
The harms of big tech, social media, and addiction mechanics are a lot more tangible to the average person than the anonymity aspect.
Can we do all three?
Also, what about the irresponsible parents, or parents who don't have time/opportunity to be responsible over this issue?
WHO IS PROVIDING INTERNET TO A CHILD
they are liable
there's no such thing as free open access internet without someone paying the bill
unless it can be demonstrated the child stole internet somehow, hacking, etc.
then the person providing the internet is liable for the child's activity
Same as if you aren't going to supervise your child and they come home after school and watch porn on the TV for hours
They don't age verify to get cable TV
If you have a credit card, you are an adult
Someone is paying the bill, they are the adult, they are responsible
Social media sites also can't just do it themselves with a box asking "are you over 16, yes/no"; they will be required to verify identity against the government.
Essentially this makes it so that every user's actual ID is being tracked. Fully intended to control speech online.
Seems like a pretty big fuck-up, if so. I wonder why they did not use asymmetric encryption.