I really wish we would take defining what it means for an artifact to be signed more seriously.
Which key(s) is it signed with? What is the hash of the corresponding unsigned artifact?
Signature verification tools should have some option which prints these things in a machine-readable format.
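To make that concrete, here is a minimal sketch (not any existing tool's interface) of what such machine-readable output could look like for the simple case of a detached GnuPG signature; it just pairs gpg's documented --status-fd output with the SHA-256 of the unsigned file:

```python
#!/usr/bin/env python3
"""Sketch only: machine-readable report for a detached GnuPG signature.
Assumes `gpg` is on PATH and the signer's key is already in the keyring;
the [GNUPG:] status lines are documented in GnuPG's doc/DETAILS."""
import hashlib
import json
import subprocess
import sys


def report(signature_path: str, artifact_path: str) -> dict:
    # --status-fd 1 makes gpg emit machine-readable "[GNUPG:] ..." lines on stdout.
    proc = subprocess.run(
        ["gpg", "--status-fd", "1", "--verify", signature_path, artifact_path],
        capture_output=True, text=True,
    )
    # VALIDSIG lines carry the fingerprint of the key that produced a good signature.
    fingerprints = [
        line.split()[2]
        for line in proc.stdout.splitlines()
        if line.startswith("[GNUPG:] VALIDSIG ")
    ]
    with open(artifact_path, "rb") as f:
        unsigned_sha256 = hashlib.sha256(f.read()).hexdigest()
    return {
        "artifact": artifact_path,
        "unsigned_sha256": unsigned_sha256,
        "signature_valid": proc.returncode == 0 and bool(fingerprints),
        "signer_fingerprints": fingerprints,
    }


if __name__ == "__main__":
    print(json.dumps(report(sys.argv[1], sys.argv[2]), indent=2))
```

For formats that embed the signature inside the artifact (APKs, RPMs, and so on), the "hash of the unsigned artifact" part is exactly what tools rarely expose, which is the gap being complained about here.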
I did some work on reproducibility of Android apps and system images with Nix, and while defining a build step that automatically establishes these relationships sounds a bit goofy, it makes the issues with underspecified edge cases visible by defining verification more strictly. I did not set out to look for those edge cases, though.
I am still working on that type of stuff now, but on more fundamental issues of trust we could start addressing with systems like Nix.
i still believe "pgp is too complex" was the most successful cia counter action after they lost the crypto wars to the people.
solving via nix only works within the flawed assumptions that end users either fully trust google or fdroid and are incapable of anything else.
“Users are dumb” is not and was never the attitude. On average, people are average. You’ve just got completely unrealistic expectations of people. You’re asking for the world to be built around your wants, needs, preferences, and areas of expertise. Something this complex in the hands of 99.99% of the population would be entirely useless.
A few years ago, everyone who had ever used a computer knew what a file and a folder were and could move a document to a USB drive.
Thanks to Google's efforts to "simplify" smartphones, the average young person now couldn't find and double-click a downloaded file if their life depended on it.
In the US, a manual car is considered an anti-theft device. In Europe, basically everyone who isn't obscenely rich has driven a manual car at some point.
People learn what they're expected to learn.
Back then the user base of computers was a lot smaller.
However, WhatsApp/Signal show how E2E encryption can be done in a user-compatible way. By default they simply exchange keys and show a warning when a key changes, and those who need or want to can verify identity.
Missing there of course is openness.
So the rest are actually OK with Whatsapp/Signal having the opportunity to see their messages? I would submit that most are not even aware of the issue...
The identity thing is basically the usability issue for E2EE messaging. If you don't solve that then you have not actually increased usability in a meaningful way. The PGP community understood this and did things like organize key signing parties. When is the last time anyone did anything like that for any popular E2EE capable instant messenger?
Like you said, people learn what they're expected to learn.
Imagine we went with "it's unrealistic to expect people to learn reading" - in the end it's just one skill, and it takes months to grasp at a very basic level.
Tools should surface information on the right level of abstraction for their users, and tools should have good UX no matter how much or little their users know.
Signature verification tools on the command line do not surface enough information to make it easy for their users to keep track of what the unsigned input was.
I don't think their users are "end users" though. I am concerned about having better UX and making it more accessible to check these things, but for very advanced users, developers and security professionals. I think surfacing this to end users might come a few steps further down that road, but I am not thinking about that yet. I guess that's why you're talking about trust in google or f-droid, because you're thinking about end users already.
For now, at least, professionals should have an easy time keeping track of which unsigned artifact corresponds to a given signed artifact, and we are far away from that right now. You have to write code for that, or inspect the binary formats of those signed and unsigned artifacts. That's not good enough. If that code is part of the tool in the first place, that automatically means that the semantics of the signature are much better defined.
PGP is too complex. I knew my way around the command line before I learned how to hand-write, and I have to look up the commands to fetch the keys and/or verify the blob every single time. Keyservers regularly fail to respond. There's no desktop integration to speak of. The entire UX stinks of XKCD 196.
Don't blame the CIA for obvious deficiencies in usability.
I was with you right up until the end. I think the only thing that would stop me from sabotaging a small project like PGP (as it was in the early days) is moral aversion. FOSS and academic circles where these things originate are generally friendly and open, and there is plenty of money and length of rubber hose for anyone who doesn't welcome the mole into their project.
I'm not saying I have evidence that this happened to PGP specifically, just that it doesn't seem at all implausible. If the CIA told me my code was never to get too easy to use, but otherwise I could live a long and happy life and maybe get a couple of government contracts, it would be hard to argue.
That a mass-market interface never took off (GPG and other descendants notwithstanding) may indicate that the whole cryptographic idea is inherently not amenable to user-friendliness, but I don't find that hypothesis as compelling.
(It could also be an unlikely coincidence that there's a good solution not found for lack of looking, but that's even less plausible to me.)
Then why are no such efforts being pursued for PGP (GPG) nowadays?
signify[1] is approachable at least for the power users - I could print out that man page on a T-shirt. HTTPS is ubiquitous and easy, thanks to ACME & Let's Encrypt. E2EE with optional identity verification is offered in mainstream chat apps.
And of course there are usability improvements to GPG, being made by third parties: Debian introduced package verification a couple decades ago, Github does commit verification, etc. What's to stop e.g. Nautilus or Dolphin from introducing similar features?
[1]: https://man.openbsd.org/signify
I wonder why there aren't more, but there are some, for example Proton's efforts towards encrypted email.
https://proton.me/support/how-to-use-pgp
(I won't mention the relative shortcomings of HTTPS and E2E chat apps here.)
I think you are right that UI sucks in many cases, but I think its not intrinsic to PGP - its fixable.
you'd think if the CIA don't want it to happen, then somebody somewhere else would make it though. it's not like the CIA and FSB would collude - they serve different oligarchs.
Encrypted email is near useless. The metadata (subject, participants, etc) is unencrypted, and often as important as the content itself. There are no ephemeral keys, because the protocol doesn't support it (it's crudely bolted on top of SMTP and optionally MIME). Key exchange is manual and a nuisance few will bother with, and only the most dedicated will rotate their keys regularly. It leaves key custody/management to the user: if there was anything good about the cryptocurrency bubble, it's that it proved that this is NOT something you can trust an average person with.
Signed email is also hard to use securely: unless the sender bothered to re-include all relevant metadata in the message body, someone else can just copy-paste the message content and use it out of context (as long as they can fake the sender header). It's also trivial to mount an invisible salamanders attack (the server needs to cooperate).
The gold standard of E2EE UX is Signal, iMessage, and WhatsApp; all the details of signing and encryption are invisible. Anything less is insecure - because if security is optional or difficult, people will gravitate towards the easy path.
The only use-case I have for PGP is verifying the integrity of downloads, but with ubiquitous HTTPS it's just easier to run sha256sum and trust the hash that was published on the website. The chain of trust is more complicated and centralised (involves CAs and browser vendors), but the UX is simpler, and therefore it does a better job.
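For what it's worth, that workflow is small enough to script; a minimal sketch (the expected digest is whatever the website published):

```python
#!/usr/bin/env python3
"""Sketch of the sha256sum-over-HTTPS workflow: compare a downloaded file
against the digest published on the (HTTPS-served) download page."""
import hashlib
import sys


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # usage: check_download.py <file> <expected-sha256>
    path, expected = sys.argv[1], sys.argv[2].strip().lower()
    actual = sha256_of(path)
    if actual != expected:
        sys.exit(f"MISMATCH: got {actual}, expected {expected}")
    print(f"OK: {path} matches the published digest")
```

Of course this only moves the trust to the page serving the digest, which is exactly the CA-and-browser-vendor chain mentioned above.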
I know more people who use terminal user interfaces for email than I know people who use Thunderbird, and I say that as a techie.
The UI still sucks, though, because people ask me what the .ASC attachments sent with all of my emails are and if I've been hacked. When I explain that's for encryption, they may ask how to set that up on their phones if they care, but most of them just look at me funny.
I do use email encryption at my job, through S/MIME, and that works fine. Encryption doesn't need terrible UI, but PGP needs support from major apps (including webmail) for it to gain any traction beyond reporting bug bounties.
Yes, but making sure you can still read your encrypted emails after something went wrong with your setup and you had to reinstall is already harder. How PGP integrates with a system is not trivial to understand.
so it's their fault that every other tool maker refuses to provide the facilities at the same level of simplicity? they gave an example to show it was possible, it doesn't mean that their example was the only way - other developers decided that the public was too dumb to use those kinds of tools.
> I have to look up the commands to fetch the keys and/or verify the blob every single time.
I have no doubt that this is true, but I very much question whether any alternate UX would solve this problem for you, because the arguments for these two tasks are given very obvious names: `gpg --receive-keys <keyIDs>` and `gpg --verify <sigfile>`. There's no real way to make it easier than that, you just have to use it more.
The tool also accepts abbreviations of commands to make things easier, i.e. you could also just blindly type `gpg --receive <keyID>` and it would just work.
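For people who still never remember them, the two invocations are also easy to wrap in a throwaway memo script; a sketch (nothing official, it just shells out to the commands quoted above):

```python
#!/usr/bin/env python3
"""Sketch: wrap the two gpg invocations mentioned above so they don't have
to be looked up every time. Assumes gpg >= 2.1 and a reachable keyserver."""
import subprocess
import sys


def fetch_key(key_id: str) -> None:
    # `gpg --receive-keys <keyID>` fetches the key from the configured keyserver.
    subprocess.run(["gpg", "--receive-keys", key_id], check=True)


def verify(sig_file: str, data_file: str = "") -> None:
    # `gpg --verify <sigfile> [<datafile>]`; the data file is needed for
    # detached signatures.
    cmd = ["gpg", "--verify", sig_file]
    if data_file:
        cmd.append(data_file)
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # usage: gpg_memo.py <keyID> <sigfile> [<datafile>]
    fetch_key(sys.argv[1])
    verify(sys.argv[2], sys.argv[3] if len(sys.argv) > 3 else "")
```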
If we accept that the world has moved to webmail, and use a GUI client, then the way to make it easier is to bake it into the client and make it seamless so there's no manual futzing with anything. Make it like TLS certs, so there's a padlock icon for encrypted mail, yellow for insecure, and mail that fails validation gets a big red warning.
Unfortunately, purists in the community could not accept that, so it's never happened, and so gpg failed to get critical mass before alternatives popped up.
pgp is only complex because there was a jail sentence for anyone willing to discuss or improve it at the crucial start time. go learn history and rethink your argument.
with that stigma no company invested in that entire space for decades! we are still gluing scraps from Canadian PhDs when it comes to pgp UX.
now that crypto is cool you will get passkeys, which are the obvious evolution of the "url padlock". either the login button is enabled or not. don't question what's happening behind the curtain.
... the fact this entire comment thread is mixing my loose points about the url padlock (consequence) with the CIA actions on pgp (cause)... sigh. I won't bother anymore. enjoy the bliss.
Do you talk to non-technical people? Some people can hardly turn their computer on. Do you really think PGP is in their grasp?
Thirty-four years ago, when PGP was released, it was far from the most complex thing that most people using computers and the web of the time had to deal with.
My father, a farmer type born in 1935, managed to use it easily enough when shown how.
It was typical enough of the tools of the time.
34 years ago the average person did not own a computer. What was computer ownership at in 1990, 10%? The people who owned computers tended to be wealthy, smart or hobbyists which isn't exactly indicative of the average person.
So, your father, who has somebody who can walk him through it, can figure it out. Well, guess what: the average person doesn't have a technologically knowledgeable child to show it to them.
Perhaps you have a literate child that might explain context.
I don't like people like this. They do the work of finding a bug, but rather than try to fix it, they grandstand and shout about how the thing they obviously enjoy is no good at all.
If I find a vulnerability in code I enjoy, I work to fix it and then only after my ironclad fix is applied, do I mention that it existed and that I fixed it so it can never be exploited again.
"Security researchers" IMO are the most cringe and worst examples of community members possible. They do not care about making things better, they only care about their own brand. Selling themselves, and climbing the ladder of embarrassed hard working people who do things for the love of doing.
Per the write-up, they only went public with details of this exploit after F-Droid merged the "fix" that didn't actually fix the problem, despite having been warned that it would not, and despite being told what they actually needed to do to fix it properly.
Exactly. Ideally, we'd all follow the Benzite approach, which is to withhold any and all information from one's peers until a complete analysis has finished, and the best possible remedy to the problem has already been applied. Because how can a miscreant use a vulnerability if it hasn't even been published yet?
As contributors, we enjoy a lot of trust, as we should. That's why it's not a problem if we make seemingly random changes that don't necessarily make a lot of sense, but seem relevant to security, when they actually fix an issue in the code. After all, it's necessary to prevent bad guys from gaining sensitive information, and to keep your colleagues from being unduly bothered with challenges they could possibly help with.
(I am just trying to push the visibility of your comment ;) )
Miscreants have a history of independently discovering vulnerabilities. Software vendors have a history of not fixing security issues. The current practice of coordinated disclosure with a deadline forces vendors to fix their flaws while also allowing users to work around unfixed flaws.
While none of that applies to F-Droid's primary use case (the primary F-Droid repo builds all apps from source itself), it nonetheless looks like they failed to correctly handle the issue.
The only reason this didn't turn into a disaster was pure luck.
>The only reason this didn't turn into a disaster was pure luck.
Is it? Or is it a case of "It rather involved being on the other side of this airtight hatchway"[1]? The apk signature done by fdroidserver seems totally superfluous. Android is already going to verify the certificate if you try to update an app, and presumably whatever upload mechanism is already authenticated some other way (eg. api token or username/password), so it's unclear what the signature validation adds, aside from maybe preventing installation failures.
[1] https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...
> The apk signature done by fdroidserver seems totally superfluous. Android is already going to verify the certificate if you try to update an app, and presumably whatever upload mechanism is already authenticated some other way (eg. api token or username/password), so it's unclear what the signature validation adds, aside from maybe preventing installation failures.
If you try to update the app. Anyone installing the app from scratch will still be vulnerable. Effectively, both cases are Trust On First Use, but AllowedAPKSigningKeys moves the First Use boundary from "the first time you install the app" to "the first time F-Droid saw the app". Izzy wrote a blog post about it a while ago.[0]
> and presumably whatever upload mechanism is already authenticated some other way (eg. api token or username/password)
IzzyOnDroid (and, I believe, F-Droid) don't have their own upload UI or authentication, they poll the upstream repo periodically.
[0]: https://f-droid.org/2023/09/03/reproducible-builds-signing-k...
>Effectively, both cases are Trust On First Use, but AllowedAPKSigningKeys moves the First Use boundary from "the first time you install the app" to "the first time F-Droid saw the app".
1. What you're describing would have to happen on the f-droid app, but the vulnerability seems to be on fdroidserver?
2. Even if this actually affected the f-droid app, what you described seems like a very modest increase in security. The attack this prevents (ie. a compromised server serving a backdoored apk with a different signature) would also raise all kinds of alarms from people who already have the app installed, so practically such an attack would be discovered relatively quickly.
>IzzyOnDroid (and, I believe, F-Droid) don't have their own upload UI or authentication, they poll the upstream repo periodically.
Doesn't f-droid perform the build themselves and sign the apk using their own keys? They might be pulling from the upstream repo, but that's in source form, and before apks are signed, so it's irrelevant.
> 1. What you're describing would have to happen on the f-droid app, but the vulnerability seems to be on fdroidserver?
As far as I understand (I'm not an expert on F-Droid), this validation happens on the server side. The (repo) server verifies that the signature matches that of the first version it saw, the phone (when installing the APK) verifies that the signature matches that of the first version it saw.
Android keeps the fdroidserver honest for upgrades, fdroidserver provides an additional bootstrap point for Android's trust.
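A rough sketch of that trust-on-first-use pinning, assuming `apksigner` (from the Android build tools) is on PATH and that its `--print-certs` output contains "certificate SHA-256 digest:" lines; the pin-store file and function names are made up for illustration:

```python
#!/usr/bin/env python3
"""Sketch: TOFU-pin the signer certificate digest(s) of an APK. Not F-Droid
code; just an illustration of the bootstrap-then-compare idea above."""
import json
import re
import subprocess
from pathlib import Path

PIN_FILE = Path("signer_pins.json")  # hypothetical local pin store


def signer_digests(apk_path: str) -> list:
    # apksigner prints one "... certificate SHA-256 digest: <hex>" line per
    # signer, and exits non-zero if the APK's signature does not verify at all.
    out = subprocess.run(
        ["apksigner", "verify", "--print-certs", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(re.findall(r"certificate SHA-256 digest: ([0-9a-fA-F]{64})", out))


def tofu_check(package: str, apk_path: str) -> bool:
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    digests = signer_digests(apk_path)
    if package not in pins:
        # First use: pin whatever signer we see now.
        pins[package] = digests
        PIN_FILE.write_text(json.dumps(pins, indent=2))
        return True
    # Later: the signer(s) must match what was pinned on first sight.
    return pins[package] == digests
```

Android's own check at install/update time is the same idea, just with "first use" anchored to the device instead of the repo.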
> 2. Even if this actually affected the f-droid app, what you described seems like a very modest increase in security. The attack this prevents (ie. a compromised server serving a backdoored apk with a different signature) would also raise all kinds of alarms from people who already have the app installed, so practically such an attack would be discovered relatively quickly.
Sure, it's the difference between "automated tooling sees the problem immediately and addresses it proactively" vs "hopefully someone will ring the alarm bell eventually".
> Doesn't f-droid perform the build themselves and sign the apk using their own keys? They might be pulling from the upstream repo, but that's in source form, and before apks are signed, so it's irrelevant.
According to the linked blog post, not anymore. Apparently, these days they serve the author's original APK, but after verifying that they can rebuild it (modulo the signature itself).
Will it if it's a non Google distro of Android?
The behavior is in AOSP, so it should be in "non Google distro of Android" as well, unless the manufacturer decided to specifically remove this feature.
The primary F-Droid repo also hosts the app developer's builds in the case of reproducible builds, where F-Droid will first build from source and then compare it with the dev's build. If it's identical, it uses the dev build in the repo, and if it's not, the build fails.
The use of AllowedAPKSigningKeys afaik is to compare that key with the key used for signing the dev build. If it's not the same, the dev build is rejected.
From what I've understood from this POC, it's possible to bypass this signature check. The only exploit I can think of with this bypass is that someone who gets access to the developer's release channel can host their own signed apk, which will either get rejected by Android in case of update (signature mismatch) or get installed in case of first install. But in either case, it's still the same reproducible build, only the signature is different.
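As an illustration only (this is not fdroidserver's actual code), the acceptance decision described above boils down to roughly: the dev APK and F-Droid's own build must be identical once signature data is ignored, and the dev's signer must be in the allow-list:

```python
#!/usr/bin/env python3
"""Very rough sketch of the reproducible-build acceptance check described
above. Real fdroidserver does considerably more; this only illustrates the
shape of the decision."""
import hashlib
import re
import zipfile

# v1 (JAR) signature data lives in META-INF; v2/v3 signatures live in the APK
# Signing Block, which zipfile never reads, so it is ignored as a side effect.
V1_SIG_FILE = re.compile(r"\AMETA-INF/(MANIFEST\.MF|.+\.(SF|RSA|DSA|EC))\Z")


def content_digest(apk_path: str) -> str:
    """Hash every ZIP entry except v1 signature files, in a fixed order."""
    digest = hashlib.sha256()
    with zipfile.ZipFile(apk_path) as apk:
        for name in sorted(apk.namelist()):
            if V1_SIG_FILE.match(name):
                continue
            digest.update(name.encode())
            digest.update(apk.read(name))
    return digest.hexdigest()


def accept_dev_apk(own_unsigned_build: str, dev_signed_apk: str,
                   dev_signer_sha256: str, allowed_signing_keys: set) -> bool:
    # dev_signer_sha256 would come from something like `apksigner verify
    # --print-certs`; allowed_signing_keys mirrors AllowedAPKSigningKeys
    # (assumed here to hold lowercase hex digests).
    return (content_digest(own_unsigned_build) == content_digest(dev_signed_apk)
            and dev_signer_sha256.lower() in allowed_signing_keys)
```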
> The only exploit I can think of with this bypass is that someone who gets access to the developer's release channel can host their own signed apk, which [...] gets installed in case of first install.
That still enables a supply chain attack, which should not be dismissed - virtually all modern targeted attacks involve some complex chain of exploits; a sufficiently motivated attacker will use this.
>But in either case, it's still the same reproducible build, only the signature is different.
That means the attacker still has to compromise the source repo. If they don't and try to upload a backdoored APK, that would cause a mismatch with the reproducible build and be rejected. If you can compromise the source repo, you're already screwed regardless. APK signature checks can't protect you against that.
In a previous post you said that - in case of matching builds - the dev's version is used. Why is the "dev's" version relevant? And, assuming I'm correct that it isn't, what is the added benefit vs. just building from source (from a known good state, e.g. by a blessed git hash)?
Android will block any update to an existing app that wasn't signed with the same signature. The benefit of using the developer's signature (even if the app is built by F-Droid) is that the F-Droid release of the app is not treated as a "different app" by the Android OS, and thus it can be updated by other app stores or through direct APK releases from the developer. If the user chooses to stop using F-Droid in the future, they can still receive updates through other means without uninstalling and reinstalling the app.
It also allows the user to place a little less trust in F-Droid, because the developer, as well as F-Droid, must confirm any release before it can be distributed. (Now that I think of it, that probably creates an issue where, if malware somehow slips in, F-Droid has no power to remove it via an automatic update. Perhaps they should have a malware response or notification system?)
More: https://f-droid.org/2023/09/03/reproducible-builds-signing-k...
Which post are you talking about? https://news.ycombinator.com/item?id=42592150 was made by FuturisticGoo, not me.
Also, the wording on f-droid suggests the version that f-droid hosts is built by them, rather than a version that's uploaded by the dev. If you go on any app and check the download section, it says
> It is built by F-Droid and guaranteed to correspond to this source tarball.
From what I can understand the attack scenario is as follows:
1. User downloads an app from F-Droid that supports reproducible builds.
2. The developer's account is compromised and used to submit an app with a different-than-expected signing key.
3. A new user installs the app (existing users aren't affected due to Android's enforcement of using the same signing key for updates).
4. This user is (external to the app) contacted by the attacker and directed to install an update to the app from them. The update contains malicious code.
F-Droid's response is concerning but this attack scenario seems pretty unlikely to work in practice.
Yeah that's the big benefit of F-Droid, reproducible builds. It builds directly from github. I like that aspect of it a lot, it adds a lot of security that other app stores don't have.
But yeah other repos don't :(
I often wonder how secure these open source projects actually are. I'm curious about using Waydroid in SteamOS, but it looks like it only runs LineageOS (apparently a derivative of CyanogenMod).
I know that people claim that open source is more secure because anyone can audit it, but I wonder how closely its security is actually interrogated. Seems like it could be a massive instance of the bystander effect.
All of it gives me a bias towards using official sources from companies like Apple and Google, who presumably hire the talent and institute the processes to do things right. And in any case, having years/decades of popularity is its own form of security. You know anyone who cares has already taken shots at Android and iOS, and they're still standing.
While this is true of many projects, F-Droid has a track record of sourcing funding for security audits. To date there have been at least three audits, in 2015, 2018, and 2022.
https://www.opentech.fund/security-safety-audits/f-droid/
https://f-droid.org/2018/09/04/second-security-audit-results...
https://f-droid.org/2022/12/22/third-audit-results.html
I was involved in addressing issues identified in the first one in 2015. It was a great experience, much more thorough than the usual "numerous static analysers and a 100-page PDF full of false positives" that you often receive.
I'm surprised that several audits didn't uncover this signing issue. GrapheneOS devs do not recommend f-droid. Instead, Play Store is the safest option for now, after Aurora Store
Google isn't gonna build a ROM for waydroid so someone's going to have to make a build of Android, whom you'll have to trust. Google doesn't build ROMs for anything but their own phones.
LineageOS is popular in this field because in essence it's a derivative of AOSP (the Android project as shipped by Google) with modest modifications to support a crapload of devices, instead of the handful that AOSP supports. This makes it easier to build and easier to support new platforms.
The bulk of the security in AOSP (and thus, LineageOS) comes from all the mitigations that are already built into the system by Google, and the bulk of the core system that goes unmodified. The biggest issue is usually the kernel, which may go unpatched when the manufacturer abandons it (just like the rest of the manufacturer's ROM), and porting all the kernel modifications to newer versions is often incredibly tricky.
Are you suggesting that ROMs provided through Android Studio's emulator are somehow not built by Google?
> I know that people claim that open source is more secure because anyone can audit it, but I wonder how closely its security is actually interrogated. Seems like it could be a massive instance of the bystander effect.
It depends on the software. Something widely used and critical to people who are willing to put resources in is a lot more likely to be audited. Something that can be audited has got to be better than something that cannot be.
> All of it gives me a bias towards using official sources from companies like Apple and Google, who presumably hire the talent and institute the processes to do things right.
I am not entirely convinced about that, given the number of instances we have of well funded companies not doing it right.
> You know anyone who cares has already taken shots at Android and iOS, and they're still standing.
There has been quite a lot of mobile malware and security issues, and malicious apps in app stores. Being more locked down eliminates some things (e.g. phishing to install malware) but they are far from perfect.
I think most Open Source projects are inadequate from a security PoV, but they are not at a place where they can do harm.
Android is extremely complex, so I think many of the custom ROMs possibly have some security rookie mistakes and quite a few security bugs due to the mishmash of drivers. Android is still better than most Linux distros due to its architecture, though. The default setup of many distros doesn't have much isolation, if any at all.
> so I think many of the custom ROMs possibly have some security rookie mistakes and quite a few security bugs due to the mishmash of drivers
I would easily believe that many Android systems have vulnerabilities owing to the horrific mess that is their kernel situation. That said, I personally doubt that aftermarket ROMs are worse than stock, as official ROMs are also running hacked up kernels.
> ...owing to the horrific mess that is their kernel situation.
Do you mean OEM drivers or the Android Kernel, specifically?
Google invests quite a bit on hardening the (Android Commons) Kernel including compile-time/link-time & runtime mitigations (both in hardware & software).
Ex: https://android-developers.googleblog.com/2018/10/control-fl...
The drivers; last I heard, literally every Android device on the market was using a forked kernel in order to support its hardware. And Google keeps trying things to improve that situation, but... https://lwn.net/Articles/680109/ was ~9 years ago and since then not even Google themselves have managed to ship a device running a mainline kernel. Supposedly it should get better with their latest attempt to just put drivers in user space, but 1. I haven't heard of any devices actually shipping with an unmodified kernel, probably because 2. AIUI that doesn't cover all drivers anyways.
Has been dead for 8+ years. LineageOS is its own thing by now.
> anyone who cares has already taken shots at Android and iOS
LineageOS is based on AOSP, plus some modifications that do not affect security negatively.
>I know that people claim that open source is more secure because anyone can audit it, but I wonder how closely its security is actually interrogated.
The answer is that, no, nobody akshuarry audits anything. This has been proven time and time again, especially in the last few years.
>All of it gives me a bias towards using official sources from companies like Apple and Google, who presumably hire the talent and institute the processes to do things right.
What you get from commercial vendors is liability, you get to demand they take responsibility because you paid them cold hard cash. Free products have no such guarantees, you are your own liability.
And we've seen time and time again how that liability "harms" them when they whoopsie daisy leak a bunch of data they shouldn't have gathered in the first place...
Sooo how about the audits linked in https://news.ycombinator.com/item?id=42592444 ?
Is it as bad as they're making it out to be? The fdroidserver get_first_signer_certificate can give a different result from apksigner, but then fdroidserver calls apksigner anyway for verification, and F-Droid mitigates the issue in various other ways.
I think F-Droid were acting in the right up to that point; and then the latest update (regex newlines) is a 0day? Has there been a response from F-Droid about the updates?
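Without reproducing the write-up's details, here is one way the "regex newlines" class of bug shows up in Python, purely as an illustration (these patterns are not claimed to be fdroidserver's):

```python
#!/usr/bin/env python3
r"""Illustration: in Python regexes, `$` also matches just before a trailing
newline, so a crafted ZIP entry name can satisfy a pattern that was meant
to mean "ends in .RSA". `\Z` does not have that loophole."""
import re

DOLLAR_ANCHORED = re.compile(r"^META-INF/[A-Z0-9_.-]+\.RSA$")
STRICT_ANCHORED = re.compile(r"\AMETA-INF/[A-Z0-9_.-]+\.RSA\Z")

# Nothing stops whoever assembles a ZIP from putting an entry with this name in it.
crafted_name = "META-INF/CERT.RSA\n"

print(bool(DOLLAR_ANCHORED.match(crafted_name)))  # True  -> treated as a signature file
print(bool(STRICT_ANCHORED.match(crafted_name)))  # False -> rejected
```

Whether such a hand-rolled check agrees with apksigner's verdict in every case is the whole ballgame; any disagreement between the two is exactly the kind of gap being described here.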
> Instead of adopting the fixes we proposed, F-Droid wrote and merged their own patch [10], ignoring repeated warnings it had significant flaws (including an incorrect implementation of v1 signature verification and making it impossible to have APKs with rotated keys in a repository).
This concerns me more than the vulnerabilities themselves. It's a pretty serious failure in leadership and shows that F-Droid is still driven by egos, not sound software engineering practices and a genuine interest in doing right for the community.
F-Droid has numerous issues:
* glacially slow to release updates even when security patches are released
* not enforcing 2FA for developer accounts
* no automatic vulnerability or malware scanning
...and more problems: https://privsec.dev/posts/android/f-droid-security-issues/
Not the parent, and I will continue to use F-Droid, but Obtainium is a popular alternative. It allows you to install APKs directly from various supported sources (forges, F-Droid repos, etc.), so you typically use the APK that the app maintainer has produced in their CI pipeline rather than F-Droid's reproducible builds.
F-Droid would likely get APKs from the same place (if reproducible builds are on for the app in question). If this attack is implemented successfully, then that place was compromised as well, and Obtainium can’t do much here to detect that I’m afraid.
Edit: on second thought, they could pin certificate hashes like F-Droid does on the build server, but verify them client-side instead. If implemented correctly this could indeed work. However, I think F-Droid with reproducible builds is still a safer bet, as attacker would have to get write access to source repo as well and hide their malicious code so that F-Droid can build and verify it.
Okay, but sideloading is worse? AFAICT the problem we're discussing was in F-Droid doing extra verification (somewhat incorrectly, apparently) of an APK before handing it to Android to install. Regardless of F-Droid, Android will check signatures on updates against the installed version. So your response to F-Droid imperfectly checking signatures as an extra verification on first install... is to skip that entirely and do zero verification on first install? That's strictly worse for your security.
Sideloading sounds like a massively worse option than using F-Droid even with this flaw. Humans are way more likely to make mistakes, and you lose a lot of safeguards between you and the APK when you sideload. Also, you don't get updates as fast, which is a whole problem in itself.
So, IMO we should not fall into that trap of immediately removing apps that had a security flaw and falling back to a way worse alternative (which sideloading is) instead.