I was dealing with a good old anti-tampering userspace library last week. They did everything right.
The process detects that it's traced (by asking the kernel nicely) and shuts down. So I patched the kernel, and now I can connect with gdb and poke around.
I can't set a software breakpoint because the process computes a checksum of its memory and jumps through a table index computed from a hash, so I had to set a hardware read watchpoint on the modified memory location, record who reads it, and patch the jump index to the right one.
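A minimal sketch of that watchpoint trick using GDB's Python API, assuming you know where you patched (the address below is a hypothetical placeholder):

```python
# Log every read of the patched byte without halting the process.
import gdb

PATCHED_ADDR = 0x555555554ABC  # hypothetical: address of our patch

class ReadWatch(gdb.Breakpoint):
    def __init__(self, addr: int):
        # Hardware read watchpoint on the patched location.
        super().__init__(f"*(char*){addr:#x}", type=gdb.BP_WATCHPOINT,
                         wp_class=gdb.WP_READ)

    def stop(self) -> bool:
        # Record which code (the checksummer) reads it, then keep running.
        pc = int(gdb.parse_and_eval("$pc"))
        gdb.write(f"read of {PATCHED_ADDR:#x} from pc={pc:#x}\n")
        return False  # don't stop the inferior

ReadWatch(PATCHED_ADDR)
```

Load it with `source watch.py` from the gdb prompt; every hit prints the reader's program counter, which is exactly the "who reads it" record you need.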
Of course, there is another function that checksums the memory and drives the process into SIGSEGV. It has tons of obfuscated, confusing stuff in it, so I had to patch it with 'lol return 0'.
And then I could finally use Frida to disable SSL pinning and mitmproxy it. It took a week to bypass all the levels of obfuscation, find the actual thing I was looking for, and extract it. Can't imagine how much time the people at $securitycompanyname spent adding all those layers of obfuscation and anti-debug. More than a week for sure. What was it doing? A custom HOTP.
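For reference, the standard RFC 4226 HOTP that all this armor was guarding a variation of fits in a few lines (the custom scheme itself obviously differs):

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```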
It wasn't any better with actual secure boot 20 years ago, where the bootloader checksummed the whole firmware before transferring control, because the bootloader itself was in ROM. Of course it had subtle logic bugs, and you only need to find one; then the bootloader sits there in ROM, bugged forever.
How many more amateur attempts did these layers thwart? Did their creators collect enough revenue before the crack was produced?
I suppose uncrackable software, in the sense of e.g. license protection, cannot exist. Software is completely beholden to hardware, known hardware can be arbitrarily emulated, and there's nowhere to hide any tamper-resistant secret bits. Only in combination with locked-down, uncrackable hardware can properly designed software without critical bugs remain uncrackable; see things like YubiKeys. Similarly, communication can remain uncrackable as long as the secret bits (like a private key) remain secret.
I'm not actually cracking anything; the software is free to use. I just wanted to mitmproxy it to see the requests and figure out some custom crypto inside it.
> All encryption is end-to-end, if you’re not picky about the ends.
This reminds me of how Apple iMessage is E2E encrypted, but Apple runs on-device content detection that pings their servers, which you can't possibly even think of disabling. [1][2]

[1] https://sneak.berlin/20230115/macos-scans-your-local-files-n...

[2] Investigation in the Beeper/PyPush Discord on iMessage spam blocking
They're actually two separate claims, one of which the blog post does support. The other is seemingly supposed to be supported by some conversations on a Discord server.
The concern is obvious though, not sure what's unclear about it: it's a bit pointless to have E2EE if the adversary has full access to one of the ends anyway.
> the network traffic sent and received by mediaanalysisd was found to be empty and appears to be a bug.
I say "supposedly debunked" because empty traffic doesn't mean there's nothing going on. It could just be a file deemed safe. But then the author said:
> The network call that raised concerns is a bug. Apple has since released macOS 13.2, which has fixed this issue, and the process no longer makes calls to Apple servers
> You need an integrated root-of-trust in your CPU in order to solve these.
Yes, quite. The BIOS/UEFI absolutely needs to store a public key of a primary key on the TPM, probably the EKpub itself for simplicity. Without that you will be vulnerable to an MITM attack, at least early in boot; and since the MITM could then fool you about the root of trust later on, as long as the MITM can commit to always being there, you cannot detect the attack.
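A sketch of what that pinning buys you, assuming the firmware carries a digest of the expected EKpub (names here are illustrative, not any real TPM API):

```python
import hashlib

def tpm_is_genuine(ekpub_der: bytes, pinned_sha256: bytes) -> bool:
    # Compare the EK public key read from the TPM against the digest
    # provisioned into firmware; an interposer presenting its own EK
    # fails this check before any secret is ever sealed to it.
    return hashlib.sha256(ekpub_der).digest() == pinned_sha256
```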
I expected something about cryptographic keys hidden in a decoration somewhere (kinda like the LotR Gate of Moria); the article was not quite what I expected, although it is in a sense.
> Unexplainable security features are just marketing materials.
I feel this way about a lot of hardware-based security solutions like TPMs and TEEs. These are actually useful solutions that can help solve problems we have (as evidenced by this article), but unfortunately they tend to be poorly documented publicly. As a result, we rely on academics to do the work for us in order to learn how to better contextualize these solutions.
POWER9 had quite a few neat things going on. I think it's unfortunate that it never became mainstream. The switch to closed-source firmware in Power10 is also a downer.
> Active physical interposer adversaries are a very real part of legitimate threat models. You need an integrated root-of-trust in your CPU in order to solve these.
It's been almost 10 years since Microsoft, based on their Xbox experience, started saying "stop using discrete TPMs over the bus; they are impossible to secure; we need the TPM embedded in the CPU itself".
The TPM itself can actually be discrete, as long as you have a root of trust inside the CPU with a unique secret. Derive a secret from that unique secret and the hash of the initial boot code the CPU runs, like HMAC(UDS, hash(program)), and derive a public/private key pair from it. Now you can just do normal Diffie-Hellman to negotiate encryption keys with the TPM, and you're safe from any future interposers.
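A minimal sketch of that derivation, DICE-style; the X25519 choice is just for illustration:

```python
import hashlib, hmac
from cryptography.hazmat.primitives.asymmetric import x25519

def derive_identity(uds: bytes, bootcode: bytes) -> x25519.X25519PrivateKey:
    # CDI = HMAC(UDS, hash(initial boot code)): different boot code, different key.
    cdi = hmac.new(uds, hashlib.sha256(bootcode).digest(), hashlib.sha256).digest()
    # A key pair derived from the CDI; its public half can be enrolled with
    # the TPM so the later Diffie-Hellman exchange is authenticated end to end.
    return x25519.X25519PrivateKey.from_private_bytes(cdi)
```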
This matters because for some functionality you really want tamper-resistant persistent storage, for example "delete the disk encryption keys if I enter the wrong password 10 times". That's fairly easy to do on a TPM that can be made on a process node that supports flash, versus a general-purpose CPU where that just isn't an option.
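The logic itself is trivial; the hard part is that the retry counter must live in storage the host can't roll back, which is what the TPM's flash provides. A sketch with hypothetical names:

```python
class PinLockout:
    # Sketch only: in a real TPM, the counter lives in tamper-resistant NV
    # flash, so an attacker can't reset it by cutting power mid-attempt.
    def __init__(self, key: bytes, max_tries: int = 10):
        self._key, self._tries, self._max = key, 0, max_tries

    def unseal(self, pin_ok: bool) -> bytes | None:
        if self._key is None:
            return None          # already wiped
        if pin_ok:
            self._tries = 0
            return self._key
        self._tries += 1
        if self._tries >= self._max:
            self._key = None     # irreversibly delete the disk encryption key
        return None
```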
What kind of keys are they? In the same vein, Apple holds the keys to sign software for the secure enclaves on iDevices and Macs; does that make them backdoored, since they can control execution of the firmware that protects everyone's authentication data and secrets?
Yes, Apple products are backdoored - not just through esoteric keys, but also because they're uploading your pictures to the mothership "to check they're not child porn."
It's just a link to a blog post, like all the other links on HN. None are treated any special way. Why would content that was on lobste.rs first be treated differently?
The mismatch seems to stem from the fact that generated comments are against HN guidelines while automated posts are tolerated by mods. That seems somewhat reasonable on its face, but it has failure modes when OPs' own posts get scooped by bots, even when the OP posted first.
The issue is that it also becomes easy to game. I've abused the fact that bots repost lobste.rs submissions by timing mine; I could have had this front-page post myself, since I posted this to lobste.rs originally.
This makes the HN front page less of an organic thing, and it's an implicit vote-ring behavior that few people have access to. It also makes people interested in lobste.rs for this behavior, which our community is not interested in.
Protecting secrets via hardware is always "decorative" in some sense; the question is just how much time and work it takes to extract them, and the probability of destroying the secrets/device in the process (outside of things like QKD).
But for software systems under a software threat model, bug-free implementations are possible, in theory at least.
Perfect security isn't a thing. Hardware and software engineers are in the business of making compromise harder, and they go in with eyes wide open about "perfection".
Confidential Computing is evolving, and it's steadily gotten much more difficult to bypass the security properties.
I don't follow this - the software must necessarily run on some hardware, so while the software may be provably secure, that doesn't help if an attacker can just pull key material off the bus?
Okay, nothing is secure against every threat model. The only way to secure against rubber hose cryptanalysis is by hiring a team of bodyguards, and even that won't protect you from LEOs or nation-state actors. Your threat model should be broad enough to provide some safety, but it also needs to be narrow enough that you can do something about it. At a software level, there's only so much you can do to deal with hardware integrity problems. The rest, you delegate to the security team at your data centre.
> "This system is unhackable, if the user doesn't do the thing that hacks it" is not very useful.
It's the best you're gonna get, bud. Nothing's "unhackable"—you just gotta make "the thing that hacks it" hard to do.
In any case, I'm curious to hear your argument for how "PGP has some implementation problems" (unsurprising to most people that have dared to look at its internals even briefly) implies "all non-information-theoretic cryptography is futile".
What do you mean exactly? Both AMD and Intel have signed firmware, and both support hardware attestation, where they sign what they see with an AMD/Intel key and you can later check that signature. This is the basis of confidential VMs, where not even the machine's physical owner can tamper with the VM in an undetectable way.
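The shape of that check, heavily simplified (real SEV-SNP/TDX reports carry many more fields plus a certificate chain; the names below are illustrative):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def verify_report(vendor_pub: ed25519.Ed25519PublicKey,
                  measurement: bytes, signature: bytes,
                  expected_launch_digest: bytes) -> bool:
    # 1. The vendor key vouches that the hardware really measured this state.
    try:
        vendor_pub.verify(signature, measurement)
    except InvalidSignature:
        return False
    # 2. The tenant compares against the digest of the image they launched.
    return measurement == expected_launch_digest
```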
> While the data itself is encrypted, notice how the values written by the first and third operation are the same.
The fact that Intel and AMD both went with ECB leaves me in mild disbelief. I realize wrangling IVs in that scenario is difficult, but that's hardly an excuse to release a product they knew full well was fundamentally broken. The insecurity of ECB for this sort of task has been common knowledge for at least two decades.
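The classic demonstration, runnable with the `cryptography` package: identical plaintext blocks produce identical ciphertext blocks, which is exactly the write-pattern leak quoted above:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct = enc.update(b"A" * 16 + b"B" * 16 + b"A" * 16) + enc.finalize()
# First and third ciphertext blocks match: an observer on the memory bus
# learns when the same value is written again, without any key material.
assert ct[0:16] == ct[32:48]
```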
Google "intel sgx memory encryption engine". Intel's designers were fully aware of replay attacks, and early versions of SGX supported a hardware-based memory encryption engine with Merkle tree support.
Remember that everything in security (and computation) is a tradeoff. The MEE turned out to be a performance bottleneck, and support got dropped.
There are legitimate choices to be made here between threat models, and the resulting implications on the designs.
There's not much new under the sun when it comes to security/cryptography/whatever (tm), and I recommend approaching the choices designers make with an open mind.
I agree with the sentiment but I'm struggling to see how this qualifies as a legitimate tradeoff to make. I thought the entire point of this feature was to provide assurances to customers that cloud providers weren't snooping on their VMs. In which case physically interdicting RAM in this manner is probably the first approach a realistic adversary would attempt.
I can see where it prevents inadvertent data leaks but the feature was billed as protecting against motivated adversaries. (Or at least so I thought.)
Fair point, my ECB remark was misguided. But I think the broader point stands? I did acknowledge the difficulty of dealing with IVs here.
XTS faces the same issue, but it operates under the fairly modest assumption that an adversary won't have long-term, continuous block-level access to the running system. Here, by contrast, interdicting the bus is one of the primary attack vectors, so failing to defend against it seems inexcusable.
This is a great quote.
Citation needed.
Also, if virtually every piece of software is updateable by its vendor, then by your argument everything is a backdoor. Not a very useful term, then.
At least we got acknowledgement and not gaslighting this time: https://news.ycombinator.com/item?id=45156003
This was funny though: https://news.ycombinator.com/item?id=43249197
I like the idea of keeping the key as part of the CPU (comment below); does anyone know why Intel/ARM/AMD have not picked up this IBM feature?
Even the one-time pad, the textbook information-theoretically secure scheme, gets botched in practice. People:
* use CSPRNGs instead of HWRNGs to generate the pads,
* try to make it usable, share short entropy, and reinvent stream ciphers,
* share that short entropy over Diffie-Hellman or RSA,
* fail to use unconditionally secure message authentication,
* re-use pads,
* forget to overwrite pads,
* fail to distribute pads out-of-band via sneakernet, dead drops, or QKD.
OTP is also usually the first time someone dabbles in writing cryptographic code, so the implementations are full of footguns; see the sketch below.
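For contrast, a sketch of the discipline a real OTP demands; the pad bytes are assumed to come from a hardware RNG and to have been exchanged out-of-band, and the snippet deliberately refuses to stretch or reuse them:

```python
def otp_encrypt(plaintext: bytes, pad: bytearray) -> bytes:
    # Pad must be true hardware randomness, shared out-of-band, never reused.
    assert len(pad) >= len(plaintext), "refusing to stretch or reuse the pad"
    ct = bytes(p ^ k for p, k in zip(plaintext, pad))
    for i in range(len(plaintext)):
        pad[i] = 0               # overwrite consumed key material...
    del pad[:len(plaintext)]     # ...and discard it so it can't be reused
    return ct
```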
https://tee.fail/
You need a sequence number (a per-location write counter) to solve this, but they have nowhere to store it.
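What that counter would buy you, sketched with AES-GCM standing in for the hardware cipher (the 6-byte address/counter nonce split is illustrative): each write gets a fresh nonce, so the same data written twice encrypts differently, and a replayed stale ciphertext fails authentication once the counter has advanced.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

aead = AESGCM(os.urandom(32))

def encrypt_write(addr: int, write_counter: int, data: bytes) -> bytes:
    # Nonce binds the address and a monotonic per-write counter.
    nonce = addr.to_bytes(6, "big") + write_counter.to_bytes(6, "big")
    return aead.encrypt(nonce, data, None)
```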