One thing that struck me was, when you look through the board and the committees, it's full of scientists, finance people, doctors, academics. There's maybe a couple of technologists - ML, IT delivery.
If they've got anyone with a background in cyber security I can't see it.
> We have never seen any evidence of any UK Biobank participant being re-identified by others.
This data contains sex and, at the very least, month and year of birth. I can't see any sensible security-minded technical person coming out with a line like that.
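To make the re-identification worry concrete, here is a back-of-envelope sketch (all numbers are illustrative assumptions, not actual UK Biobank statistics) of how quickly a few quasi-identifiers shrink the pool of people a leaked record could belong to:

```python
# Assumed, illustrative numbers only -- not actual UK Biobank statistics.
cohort = 500_000  # rough participant count
attributes = [
    ("sex", 2),
    ("month of birth", 12),
    ("year of birth", 35),       # assume ~35 plausible birth-year cohorts
    ("socioeconomic band", 10),  # assume coarse banding
]

pool = cohort
for name, levels in attributes:
    pool /= levels  # assume values are roughly uniformly distributed
    print(f"after {name}: ~{pool:,.0f} candidates")
```

With these assumed numbers the pool drops from 500,000 to around 60 people before lifestyle habits, medical history, or physical measures are even considered; each further attribute divides it again.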
> In fairness, is this any worse than what Palantir will do with the whole country's NHS records?
I don’t get this trend of seeing a bad thing happen and then commenting that some other bad thing exists and therefore “in fairness” we should downplay it.
Bad things are bad. Comparing them to other things we don’t like doesn’t make them less bad. I don’t like Palantir either but they’re not intentionally leaking health details so this comparison doesn’t even make any sense.
No, they should not, since we already know that the contract won't stop them from using that data for other purposes and other governments. A government should act in the interest of its own citizens, first and foremost, and not pretend to believe a pinky swear by a notoriously bad actor.
"certainly" is doing a lot of work here. I'm not "certain".
In fact, the people I have spoken to who have worked on the Palantir platform were deeply suspicious that their users would treat data with respect, and so built security and immutable auditability in as foundational tech.
… As part of an explicit, openly stated mission to reshape the global political order.
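For what it's worth, "immutable auditability" usually means something like a hash-chained log, where editing any past record is detectable. A toy Python sketch of the generic technique (Palantir's actual implementation is not public; this is only an illustration):

```python
import hashlib
import json

# Toy sketch of a tamper-evident audit log via hash chaining.
# Generic technique only -- not Palantir's actual implementation.

def append(log, event):
    """Append an event, chaining it to the previous record's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log):
    """Recompute the chain; any edited record breaks a later link."""
    prev = "0" * 64
    for rec in log:
        payload = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
        if rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
append(log, "alice queried patients table")
append(log, "bob exported 200 rows")
print(verify(log))   # True: chain intact
log[1]["event"] = "bob exported 2 rows"
print(verify(log))   # False: tampering is detectable
```

The point of chaining is that an auditor only needs the latest hash to detect any retroactive edit, which is why append-only logs like this are a common foundation for data-access accountability.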
Palantir is indeed in many ways just a software vendor but we shouldn’t downplay that they have a much more explicit agenda than most other companies do in seeking government contracts.
Eh. I mean, the government will do what the government will do with the software it buys. We've just seen that with Anthropic. The US government wouldn't give contracts to Palantir if it seemed like its ideology didn't line up with US aims, and they wouldn't give contracts to other vendors if it seemed like their less ideological marketing meant they weren't aligned with US aims.
“Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when it’s necessary, to scare enemies and on occasion kill them,” Karp said, with a smile on his face. The CEO added that he was very proud of the work his firm is doing and that he felt it was good for America. “I’m very happy to have you along for the journey,” he said. “We are crushing it. We are dedicating our company to the service of the West, and the United States of America, and we’re super-proud of the role we play, especially in places we can’t talk about.” [1]
No, Palantir is not a "database vendor", it's an intelligence company closely working with IOF in their ongoing genocidal efforts and with DHS with mass deportations.
I'd rather see Oracle than a ghoul openly supporting targeting civilians.
“In fairness, this pot of water was already uncomfortably hot before [latest development] raised the temperature another few degrees closer to boiling.”
…says a happy frog who will be as cooked as everyone else.
There isn't much difference between giving this data to 20,000 researchers all over the world and simply publishing the data on the web.
I personally would like data like this to simply be published, together with a law that says using the data to make personalized decisions affecting those individuals is punishable with life in prison.
Basically, this data is 'open source', but not for use in deciding insurance premiums, job offers, or the contents of news articles.
> There isn't much difference between giving this data to 20,000 researchers all over the world and simply publishing the data on the web.
As a researcher who regularly deals with such data, there is a MASSIVE difference. Yes, I have access to the data but I am restricted on how it can be stored (no cloud), what I can and can't do with it, and for some of it I'm even mandated to destroy it once the research project is over. I have the informed consent of every participant, some of whom withdrew halfway through the collection without any penalty to them. I also don't need a new law because I'm already bound by existing ones, by the contract I signed when I joined, and by the confidentiality agreement I signed when the project started. While I don't know that the leaker(s) will be identified, the existence of the data itself already calls for legal action while also giving a starting point for investigation.
Your suggestion, on the other hand, seems to be "let's put this data out there without people's consent and make companies pinky promise that they won't use it in their black boxes in a way that's virtually impossible to detect or prosecute". Those two things are definitely not equivalent.
I am not arguing either way, but I think you missed the point.
When you give it to O(20000) people, you have a 1 - 0.9999^20000 (i.e. very high) probability that it will leak anyway, whether from 1 in 20,000 people not following the rules or simply from the accident/attack surface area.
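A quick sketch of that estimate, assuming (purely for illustration) an independent 1-in-10,000 chance that any given grant of access eventually leads to a leak:

```python
n = 20_000   # researchers with access
p = 1e-4     # assumed per-researcher leak probability (illustrative)

# Probability that at least one of the n grants of access leaks:
at_least_one = 1 - (1 - p) ** n
print(f"{at_least_one:.1%}")  # roughly 86%
```

The exact per-person rate is of course unknowable; the point is only that even a tiny per-person probability compounds to near-certainty across tens of thousands of access grants.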
> together with a law that says using the data to make personalized decisions affecting those individuals is punishable with life in prison.
This works well in theory but is basically unenforceable. It's barely possible, if possible at all, to audit how FB or Google make ad-targeting decisions - but once stuff gets into the fragmented ecosystem of data brokers and market-intelligence consultancies, all hope is lost.
To say nothing of state actors, like countries who might deny you a visa based on adverse medical info or otherwise use your information against you.
Or the onus could be placed on proving that the data wasn't used: if a decision doesn't come with proof that it wasn't, the party making the decision can be sued for it.
"Access this article for 1 day for: £50 / $60/ €56 (excludes VAT)"
Man, the scientific publishing cartel is something else. Note that the author will generally get exactly £0 / $0 / €0 for their text.
I guess you can't imagine a free, open democratic state with rule of law either. Because when broad, independent, quality journalism with a wide audience is gone, all you'll have to worry about is that poor cat in a tree in Ottawa.
Unfortunately that is almost never enough. If your competition is populist media financed by state-level/billionaire agendas, it is impossible to compete in the long term. We would need a complete and general ban on political financing across all media to sustain such a market.
> if we don't support truly independent, objective, investigative journalism, who will?
Like Eric Schmidt, Bill Gates, Warren Buffett, George Soros and countless other billionaires through their "charities"? https://theguardian.org/
Just because they are liberal and non-profit doesn't mean they are independent; it only appears that way if you think within the narrow confines of the Overton window between "conservative" and "liberal" in mainstream discourse.
The general public tried over and over and over to reject the collection of such data in the first place. At every opportunity they rejected it. But the people who wanted the data just took it anyway, and when the predictable and predicted bad thing happens, nobody will be punished for it.
I honestly think health data should be public by default to any health researcher. We should do whatever we can to solve disease and live forever. Privacy be damned, I want life.
> Data for sale included people’s gender, age, month and year of birth, socioeconomic status, lifestyle habits, mental health, self-reported medical history, cognitive function, and physical measures.
If this is not traceable back to individuals, it would probably be good to make it public. But I assume the UK Biobank only gives access to trusted partners since, as we know in our 'data analytics' day and age, with enough general data quantity you can trace anything back to anyone if you have the resources. And the capitalist-surveillance economy certainly provides the profit motive.
I want to get my DNA digitized so I can do all sorts of health stuff for myself, but finding a place that won't leak my data is troublesome. 23andme is right out.
If we are censoring our daily activities and major life decisions like healthcare due to the data economy, then it is making us less free. But who knows how many generations will pass before a solution shows up. We would need representatives who act collectively towards motives beyond profits.
But once your data has been digitized, even if it is under your control, the likelihood that it gets leaked is still high. Especially now, with AI agents running everywhere, or people just asking AI services for medical advice.
Today the choice is between low-quality local AI advice and higher-quality advice at the cost of losing control of your data; the rational choice is probably to give up data control, even if it will almost certainly come back to bite you.
UK Biobank health data keeps ending up on GitHub
https://news.ycombinator.com/item?id=47875843
UK Biobank health data listed for sale in China, government confirms
https://news.ycombinator.com/item?id=47874732
https://www.ukbiobank.ac.uk/about-us/people-and-governance/
> they’re not intentionally leaking health details

To many, they are. They're leaking information that has been entrusted to the NHS into their own databases.
The fact that it's being done under government contract and (arguably) within the law shouldn't immediately make it any less bad.
Of course it should; to say otherwise is absurd.
what, the NHS shouldn't have _any_ subcontracting? All data must only be held by sacred NHS monks in a vault somewhere?
As long as Palantir are holding the data on UK servers, to modern data security standards, and they have a contract to do so, they should be able to.
yes
[1] https://gizmodo.com/palantirs-billionaire-ceo-just-cant-stop...
Like a clean room implementation requirement.
Licensing it to researchers allows you to create, monitor, and enforce policies like the one you describe.

Stealing it does not.
Certainly not billionaires buying newspapers (e.g. Washington Post/Bezos, ...).
Given the whack-a-mole takedowns, it's pretty clear everyone involved knew what was going on.
https://nanoporetech.com/products/sequence/minion
...until they're inevitably sold.
Whether or not they should in general, THIS one database shouldn't.
If I leak the medical information you confidentially shared with your doctor, does that mean you're okay with it because you opted in? Or do the scope and details not matter for other people's data, but only for yours?
*I have a much better word but I guess I shouldn't say it.