> "How do they accomplish their goals with project BULLRUN? One way is that United States National Security Agency (NSA) participates in Internet Engineering Task Force (IETF) community protocol standardization meetings with the explicit goal of sabotaging protocol security to enhance NSA surveillance capabilities." "Discussions with insiders confirmed what is claimed in as of yet unpublished classified documents from the Snowden archive and other sources." (page 6-7, note 8)
There have long been stories about meddling in other standards orgs (both to strengthen and to weaken them), but I don't recall hearing rumors about sabotage of IETF standards.
These rumors, about IETF in particular, predate the Snowden disclosures.
Almost immediately after that happened, a well-known cypherpunk person accused the IETF IPSEC standardization process of subversion, pointing out Phil Rogaway (an extraordinarily well-respected and influential academic cryptographer) trying in vain to get the IETF not to standardize a chained-IV CBC transport (this is a bug class now best known, in TLS, as BEAST), while a chorus of IETF gadflies dunked on him and questioned his credentials. The gadflies ultimately prevailed.
The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency. This is a common failure of all standards organizations (W3C didn't need any help coming up with XML DSIG, which is probably the worst cryptosystem ever devised), but it's somewhat amplified in open settings like IETF.
> The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency
The 3 letter agencies usually recruit people from academia and sensitive organizations in order to pursue their agenda.
I wonder what other institutions are infested with swarms of gadflies that'll all swear up and down in unison that something is definitely good or bad, will attack outsiders with narratives that disagree with their own, and weaken the credibility of the entire institution in the process.
Surely this can't be a phenomenon unique to technology, can it?
It's just human nature. I refuse to participate in standards work, partly because it's much more pleasant throwing rocks from a safe distance, but also because I recognize in myself the same ordinary frailties that had friends of the original IPSEC standards authors writing mailing list posts about Rogaway being a "supposed" cryptographer. There but for the grace of not joining standards lists go I.
I think --- I am not kidding here, I believe this sincerely --- the correct conclusion is community-driven cryptographic standards organizations are a force for evil. Curve25519 is a good standard. So is Noise, so is WireGuard. None of those were developed in a standards organization. It's hard to think of a good cryptosystem that was. TLS? It took decades --- bad decades --- to get to 1.3. Of that outcome, I believe it was Watson Ladd who said that if you turned it in as the "secure transport" homework in your undergraduate cryptography class, you'd get a B.
It goes further back than crypto: back in the day, there was the design-by-committee OSI versus the "rough consensus and running code" IETF. The result is that while we still teach the OSI 7-layer model in universities, in practice we use TCP/IP, often with HTTP and TLS on top.
You're basically describing politics, where specific opinions are central to a group identity. So long as there is an innate human desire to belong, there will be plenty of people who are happy to live with any amount of cognitive dissonance.
Joining a gadfly swarm just gives them an opportunity to prove their worth to the group.
It’s true that you can always imagine a perfectly-concealed conspiracy but we see the same dynamics unfold in many places where there’s no security impact so the parsimonious explanation is that there is no conspiracy, only normal human social dynamics.
DNSSEC is one of the standards where I am confident the US government did not intentionally introduce weaknesses because the US government is the largest deployed base of zones due to a poorly thought out executive order.
I'm curious as to how successful they were at subverting the IETF process. It wouldn't be impossible, but since much of the process is in the open it could be difficult, especially if they did it under their own name.
I suspect most of it was done under different corporate identities, and probably just managed to slow adoption of systematic security architectures. Of course, once the Snowden papers came out, all that effort was rendered moot as the IETF reacted pretty hard.
Ya ever heard of the OAuth2 protocol? I spent almost half a decade working on identity stuff, and spending a lot of time in OAuth land. OAuth is an overly complicated mess that has many many ways to go wrong and very few ways to go right.
If you told me the NSA/CIA had purposefully sabotaged the development of the OAuth2 protocol to make it so complex that no one can implement it securely it'd be the best explanation I've heard yet about why it is the monstrosity it is.
Have you ever seen SAML? Now there is a protocol that seems borderline sabotaged. CSRF tokens? Optional part of the spec. Which part of the response is signed? Up to you, with different implementations making different choices; better verify the sig covers the relevant part of the doc. Can you change the document in a way that alters the XML parse tree without invalidating the signature? Of course you can!
Oauth2 is downright sane in comparison.
[To be clear, SAML is not an IETF spec; it just solves a similar problem to OAuth2.]
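To make the parse-tree problem concrete, here is a toy sketch of XML signature wrapping (XSW), the bug class described above. Everything is simplified stand-ins: the element names aren't real SAML schema and the "signature check" is a string match, not real XMLDSIG.

```python
# Sketch of XML signature wrapping (XSW). All names are simplified
# stand-ins, not real SAML schema.
import xml.etree.ElementTree as ET

signed_assertion = '<Assertion ID="signed-1"><Subject>alice</Subject></Assertion>'

# Pretend the signature was computed over exactly these bytes and still
# verifies -- the attacker never modifies them.
def signature_is_valid(doc_text):
    return signed_assertion in doc_text  # toy check, not real XMLDSIG

original = f'<Response>{signed_assertion}</Response>'

# Attacker prepends their own, unsigned assertion before the signed one.
forged = ('<Response>'
          '<Assertion ID="evil-1"><Subject>mallory</Subject></Assertion>'
          f'{signed_assertion}'
          '</Response>')

# A naive consumer grabs the *first* Assertion it finds...
def naive_subject(doc_text):
    root = ET.fromstring(doc_text)
    return root.find('.//Assertion/Subject').text

assert signature_is_valid(forged)          # "signature" still checks out
assert naive_subject(original) == 'alice'
assert naive_subject(forged) == 'mallory'  # ...but the app sees the attacker
```

The signed bytes are untouched; only the surrounding tree changed, which is exactly the mismatch between "what the verifier checked" and "what the application consumed".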
Honestly, as someone who has implemented both SAML and OAuth2 (+ OIDC) providers, I found SAML much easier to understand. Yes, there are dangerous exploits around XML. Yes, the spec is HUGE. But practically speaking, people only implement a few parts of it, and those parts were, IMHO, easier to understand and reason about.
What's insidious about SAML is that it really is mostly straightforward to understand, but it's built on a foundation of sand, bone dust, and ash; it works --- mostly, modulo footguns like audience restrictions --- if you assume XML signature validation is reliable. But XML signature validation is deeply cursed, and is so complicated that most fielded SAML implementations are wrapping libxmlsec, a gnarly C codebase nobody reads.
100% - XML vulnerabilities are the biggest issue. JWTs have also had their fair share, though I think they were mostly implementation bugs that have mostly been ironed out at this point. XML's complexity is inherent to the language.
> JWTs have also had their fair share, though I think they were mostly implementation bugs that have mostly been ironed out at this point.
The most famous JWT issue, to my mind, was people implementing JWT and -- as per spec -- accepting a signing algorithm of "none" as valid.
That could be described as an "implementation bug", but it can also be described as "not an implementation bug" - all your JWT functionality is working the way it's supposed to work, it's just not doing the thing that you hoped it would do.
IMO this is a defect (sabotage? plain incompetence?) in the specs, full stop.
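A toy sketch of the trap, using made-up JWT-shaped tokens rather than a real library (a real implementation should pin its accepted algorithms and never read them from the token header):

```python
# Toy JWT-shaped tokens illustrating the "alg": "none" pitfall. Simplified
# encoding; not a real JWT library.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def make_token(payload, key=None, alg='HS256'):
    header = b64url(json.dumps({'alg': alg, 'typ': 'JWT'}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f'{header}.{body}'.encode()
    if alg == 'none':
        return f'{header}.{body}.'
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f'{header}.{body}.' + b64url(sig)

# A decoder that trusts the header's alg field -- the spec-permitted trap.
def naive_decode(token, key):
    header_b64, body_b64, sig = token.split('.')
    pad = lambda s: s + '=' * (-len(s) % 4)
    header = json.loads(base64.urlsafe_b64decode(pad(header_b64)))
    if header['alg'] == 'none':          # accepted "as per spec"
        return json.loads(base64.urlsafe_b64decode(pad(body_b64)))
    signing_input = f'{header_b64}.{body_b64}'.encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    assert hmac.compare_digest(sig, expected), 'bad signature'
    return json.loads(base64.urlsafe_b64decode(pad(body_b64)))

key = b'server-secret'
forged = make_token({'sub': 'admin'}, alg='none')   # attacker needs no key
assert naive_decode(forged, key) == {'sub': 'admin'}  # accepted!
```

Every line of the decoder does what the spec allows; the forgery succeeds anyway, which is the sense in which this is a spec defect rather than an implementation bug.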
RFC 7519 (the JWT spec) delegates all signature / authentication validation to the JWS and JWE specs. (And says that an unholy mechanism shall be used to determine whether to follow the JWS validation algorithm or the JWE algorithm.)
JWS even discusses algorithm verification (https://www.rfc-editor.org/rfc/rfc7515.html#section-10.6), but does not suggest, let alone require, the absolutely mindbendingly obvious way to do it: when you have a key used for verification, the algorithm specification is part of the key. If I tell a library to verify that a given untrusted input is signed/authenticated by a key, the JWS design is:
bool is_it_valid(string message, string key); // where key is an HMAC secret or whatever

when the obvious design would be:

bool is_it_valid(string message, VerificationKey key);

where VerificationKey is an algorithm and the key. If you say to verify with an HMAC-SHA256 key and the actual message was signed with none or with HMAC-SHA384 or anything else, it is invalid. If you have a database and you know your key bits but not what cryptosystem those bits belong to, your database is wrong.
The JWE spec is not obviously better. I wouldn't be utterly shocked if it could be attacked by sending a message to someone who intends to use, say, a specific Ed25519 public key, but setting the algorithm to RSA and treating the Ed25519 public key as a comically short RSA key.
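The key-carries-its-algorithm design argued for above can be sketched like this; the names (`VerificationKey`, `is_it_valid`) are the comment's, not any real library's, and only HMAC variants are implemented:

```python
# Sketch of the "algorithm is part of the key" API. Hypothetical names,
# HMAC-only for brevity.
import hashlib, hmac
from dataclasses import dataclass

@dataclass(frozen=True)
class VerificationKey:
    algorithm: str   # e.g. 'HS256' -- fixed when the key is provisioned
    secret: bytes

_HASHES = {'HS256': hashlib.sha256, 'HS384': hashlib.sha384}

def sign(message: bytes, key: VerificationKey) -> bytes:
    return hmac.new(key.secret, message, _HASHES[key.algorithm]).digest()

def is_it_valid(message: bytes, signature: bytes, key: VerificationKey) -> bool:
    # The verifier never reads an algorithm off the untrusted message;
    # "none" or a downgraded hash simply cannot match.
    return hmac.compare_digest(signature, sign(message, key))

k256 = VerificationKey('HS256', b'supersecret')
k384 = VerificationKey('HS384', b'supersecret')
msg = b'hello'
tag = sign(msg, k256)
assert is_it_valid(msg, tag, k256)
assert not is_it_valid(msg, tag, k384)   # wrong algorithm => invalid
```

There is no code path by which attacker-controlled data can select the algorithm, so the whole "alg confusion" bug class disappears by construction.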
> The most famous JWT issue, to my mind, was people implementing JWT and -- as per spec -- accepting a signing algorithm of "none" as valid.
Sure, that is pretty silly. However, in SAML you have xmlsec accepting the non-standard extension where you can have it "signed" with HMAC where the HMAC key is specified inside the attacker-controlled document. I would call that basically the same as alg=none, although at least it's non-standard.
Redirecting the user when they tap sign in from untrustednewsite.com, to a new window with the domain hidden of bigsitewithallyourdata.com and saying “Yeah, give us your login credentials” always felt like the craziest thing to me
So ripe for man in the middle attacks. Even if you just did a straight modal and said “put your google credentials into these fields”, we’re training people that that’s totally fine
OAuth2 is about the simplest thing you could come up with to solve the delegated authentication problem. Its complexity stems mostly from the environment it operates in: it's an authentication protocol that must run on top of unmodified, oblivious browsers.
There's a lot of random additional complexity floating around it, but that complexity tracks changes in how applications are deployed, to mobile applications and standalone front-end browser applications that can't hold "app secrets".
The whole system is very convoluted, I'm not saying you're wrong to observe that, but I'd argue that's because apps have a convoluted history. The initial idea of the OAuth2 code flow is pretty simple.
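The basic code flow really is short. A minimal sketch, with made-up endpoints and client IDs (a real client should also use PKCE and exchange the code, plus its client secret, at the token endpoint):

```python
# Minimal sketch of the OAuth2 authorization-code flow. Endpoints and
# client IDs are hypothetical.
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

AUTHORIZE_URL = 'https://auth.example.com/authorize'   # hypothetical
REDIRECT_URI = 'https://app.example.com/callback'      # hypothetical

def build_authorize_url(client_id: str, state: str) -> str:
    # Step 1: send the user's browser to the authorization server.
    params = {'response_type': 'code', 'client_id': client_id,
              'redirect_uri': REDIRECT_URI, 'scope': 'profile',
              'state': state}
    return f'{AUTHORIZE_URL}?{urlencode(params)}'

def handle_callback(callback_url: str, expected_state: str) -> str:
    # Step 2: the server redirects back with ?code=...&state=...; the
    # client checks state (CSRF), then would POST the code to the token
    # endpoint to obtain an access token.
    qs = parse_qs(urlparse(callback_url).query)
    if qs.get('state') != [expected_state]:
        raise ValueError('state mismatch: possible CSRF')
    return qs['code'][0]

state = secrets.token_urlsafe(16)
url = build_authorize_url('my-client-id', state)
assert 'response_type=code' in url
code = handle_callback(f'{REDIRECT_URI}?code=abc123&state={state}', state)
assert code == 'abc123'
```

Nearly everything beyond these two steps -- implicit flow, PKCE, device flow, token introspection -- exists to cope with a deployment environment (mobile apps, SPAs) where this simple picture breaks down.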
The problem is, as anyone with sufficient experience knows, it is perfectly possible and common for devs to design security disasters without any involvement by the spooks. I suspect the NSA counts on this and uses any covert influence they might have to slow corrections (eg. Profiles for OAuth2 that actually are reasonable, patches to common services, etc.)
Sabotaging a design is remarkably easy. We have several individuals who do it almost effortlessly. It's almost a talent for some. I suspect that doing it maliciously, while hiding behind some odd corner scenario or some compatibility requirement, can't be that hard and would be almost impossible to prove or detect.
On the other hand, why would the NSA even bother, with these talented individuals providing them a never-ending supply of security issues? They don't even have to pay for them, beyond having their normal 0-day discovery team look for them. Supporting this theory is their reaction when a 0-day they are relying on gets patched out.
I can imagine special cases where they would exert resources and effort, such as the IETF or some other economically efficient chokepoint (Rust?), but not in general.
They made the S-blocks appear less deterministic when sampling their behaviour, but made sure that anyone with prohibitively expensive equipment (say, the NSA) could brute force them in a reasonable time frame.
These are all words (mostly; "S-blocks" is not a thing, you mean "s-boxes"), but they're not coherent in a sentence. I think you're trying to point out that NSA shortened the key length from 64 to 56 bits, which has nothing to do with the DES s-boxes. NSA's interventions with the s-boxes made them more resilient to cryptanalysis, not less.
The thing you're thinking is nefarious is not (unless you're a connoisseur of good bugs and believe that differential cryptanalysis, the most important fundamental technique in block cipher analysis, should have been public earlier). It is in fact the opposite of nefarious.
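The key-length change is easy to quantify with back-of-envelope arithmetic (the trial rate below is a hypothetical figure, not a claim about any real machine):

```python
# Shortening DES keys from 64 to 56 bits shrinks the brute-force space
# by a factor of 2^8 = 256.
keyspace_64 = 2 ** 64
keyspace_56 = 2 ** 56
assert keyspace_64 // keyspace_56 == 256

# At a hypothetical 10^12 trial decryptions per second, the expected time
# to search half the 56-bit space:
rate = 10 ** 12
seconds = (keyspace_56 / 2) / rate
print(f'{seconds / 3600:.1f} hours on average')
```

That 256x gap is the difference between "expensive for everyone" and "expensive only for an agency-scale budget", which is the usual reading of the key-length intervention; the s-box changes are a separate story, and in the strengthening direction.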
I'm too young to have firsthand experience, but I've seen the speculation that IPSEC was an example of this kind of strategy. It's certainly more-- ahem-- flexible that it probably ought to be. I know there have been exploits against ISAKMP implementations. I'd assume the baroque nature of the protocol drove some of that vulnerability.
I don't know what you mean by "manipulatable". I don't know what you mean by "DSA". I assume what you're doing is casting aspersions on the NIST P-curves. They're the most widely used curves on the Internet (hopefully not for too long).
I don't think all that many serious people believe there's anything malign about them. It's easy to dunk on them, because they're based on "random seeds" generated inside NIST or NSA. As a "nothing up our sleeves" measure, NSA generated these seeds and then passed them through SHA1, thus ostensibly destroying any structure in the seed before using it to generate coefficients for the curve.
The ostensible "backdoor attack" here is that NSA could have used its massive computation resources (bear in mind this happened in the 1990s) to conduct a search for seeds that hashed to some kind of insecure curve. The problem with this logic is that unless the secret curve weakness is very common, the attack is implausible. It's not that academic researchers automatically would have found it, but rather that NSA wouldn't be able to count on them not finding it.
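The construction and the hypothesized attack can both be sketched in a few lines. This compresses the real ANSI X9.62 procedure into one hash step; the only point is that choosing the *seed* is the remaining degree of freedom, and that a one-way hash makes each seed trial a fresh draw:

```python
# Simplified sketch of the "nothing up our sleeves" seed construction.
# Not the full X9.62 curve-generation procedure.
import hashlib

# The published P-256 seed (from FIPS 186); its origin is what the whole
# debate is about.
P256_SEED = bytes.fromhex('c49d360886e704936a6678e1139d26b7819f7e90')

digest = hashlib.sha1(P256_SEED).hexdigest()
candidate = int(digest, 16)   # in the real procedure this constrains b

# The hypothesized attack: search seeds until the derived curve lands in
# some secretly weak class.
def search(weak, max_tries):
    for i in range(max_tries):
        seed = i.to_bytes(20, 'big')
        if weak(int(hashlib.sha1(seed).hexdigest(), 16)):
            return seed
    return None

# A toy "weakness" hit 1 time in 16 is found almost immediately. A 2^-80
# weakness would need ~2^80 trials -- the implausibility argument: the
# secret weak class must be common for the search to be feasible.
assert search(lambda b: b % 16 == 0, 1000) is not None
```

So the search attack only works if weak curves are plentiful, and a plentiful weakness is exactly the kind academics are most likely to stumble on independently.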
Search for [koblitz menezes enigma curve] for a paper that goes into the history of this, and makes that argument (I just shoplifted it from the paper). If you don't know who Neal Koblitz and Alfred Menezes are, we're not speaking the same language anyways.
The real subtext to this "P-curves are corrupted" claim is that there are curves everyone drastically prefers, most especially Curve25519 (the second-most popular curve on the Internet). Modern curves have nice properties, like (mostly) not requiring point validation for security (any 32-byte string is a secure key, which is decidedly not the case for ordinary short Weierstrass curves), and being easy to implement without timing attacks.
The author of Curve25519 doesn't trust the NIST curves. That can mean something to you, but it's worth pointing out that author doesn't trust any other curves, either. Cryptographers have proposed "nothing up my sleeves" curves, whose coefficients are drawn from mathematical constants (like pi and e). Bernstein famously co-authored a paper that attempted to demonstrate that you could generate an "untrustworthy" curve by searching through permutations of NUMS parameters. It was fun stunt cryptography, but if you're looking for parsimony and peer consensus on these issues, you're probably better off with Menezes.
Incidentally, you said "ECDSA curve". ECDSA is an algorithm, not a curve. But nobody likes ECDSA, which was also designed by the NSA. A very similar situation plays out there --- Curve25519's author also invented Ed25519, a Schnorr-like signing scheme that resolves a variety of problems with ECDSA. Few people claim ECDSA is enemy action, though; we all just sort of understand that everyone had a lot to learn back in 1997.
I don't think that quite emphasises enough why the P curves are rightly seen as suspicious.
The NSA knew all about the importance of nothing-up-my-sleeve numbers for eliminating suspicions around their involvement. That's why the curve constants are the output of a hash function.
And yet for this scheme to correctly eliminate suspicion, it's vital that the inputs to the hash function are also above suspicion. So a good choice of input to the hash function would be something like 1, or the digits of pi, or some other natural number. Then there can be no possibility of game playing even if the hash function is broken (which as we now know, SHA-1 is).
The NSA didn't do this. Instead they picked huge numbers that have no obvious origin and then refused to explain where they came from. They created something that superficially looks like it should place the scheme above suspicion, but when you look at the details you discover it doesn't actually do so.
This is catastrophic! The absolute best case explanation is rank incompetence, but that's quite simply implausible as these schemes are otherwise designed in a highly competent manner. The NSA just doesn't develop cryptographic schemes with obvious howling errors in them. Except in this case, where they supposedly did.
None of the attempted justifications for why this is OK hold water, in my view. They all rest on a series of very dubious assumptions:
1. If there was a way to build exploitable curves, academics would know about it.
2. If an outside researcher did discover such a technique, they'd actually tell everyone about it and not, say, be bribed or coerced or convinced to stay silent by the NSA.
3. The NSA didn't know SHA-1 was broken despite being the designers of it, and therefore would have had to do a brute force search for an exploitable curve, which they couldn't have done.
What do we know today? Well, we know that SHA-1 has critical weaknesses which took decades to discover, we know that the NSA had an active programme of sabotaging cryptographic standards via kleptography, and we know that they are more than capable of corrupting large numbers of people across many institutions, including professional cryptographers working at places like RSA. So none of these assumptions look safe.
I think the idea that this can't be an NSA attack despite things looking exactly the way it would if it was boils down to a desire to believe that academia can't be far behind what the NSA can do. But that's a very subjective sociological belief. Having spent time at academic cryptography conferences and reading their papers, I don't find it hard to believe that the NSA could know things about ECC that aren't in the public domain. Academic incentives just aren't there to research the things the NSA cares about, especially in recent years.
Another take... How long did OpenSSL have the heartBleed vulnerability? And that was EASY to understand, it was completely open and readable code, there are a billion plus more programmers than cryptographers, and all these academics also didn't catch it.
I'm out of my depth for the math portion of the discussion, but I can't say that "other people would know" is a reason I can get behind.
He isn’t saying the CA is bad. He is saying the curves selected are arbitrary and they stuck to specific ones with no reason. That the NSA has a backdoor to at least some TLS EC algorithms.
Now, IMO I thought the whole point was that the curve itself didn’t really matter. As you are just picking X Y points on it and doing some work from there. But if there is a flaw, and it required specific curves to work, well there you go.
Circumstantial evidence is still evidence. I'll acknowledge that this is extremely tenuous, but: the NSA has gimped algos before, wants to do it again as frequently as possible, and has the capacity to do so.
Unfortunately, at some point we (non-crypto-experts) have to trust something.
The article references Russia's SORM system, which provides not only the FSB but also the police and tax agencies with essentially full access to everything on the internet, including credit card transactions. This stuff started in 1995 and was penetrated by the NSA.
> Under SORM‑2, Russian Internet service providers (ISPs) must install a special device on their servers to allow the FSB to track all credit card transactions, email messages and web use. The device must be installed at the ISP's expense.
Originally there was a warrant system, but it seemed quite liberal, and they don't bother with secret-court-style "oversight" like the US:
> Since 2010, intelligence officers can wiretap someone's phones or monitor their Internet activity based on received reports that an individual is preparing to commit a crime. They do not have to back up those allegations with formal criminal charges against the suspect. According to a 2011 ruling, intelligence officers have the right to conduct surveillance of anyone who they claim is preparing to call for "extremist activity."
Then in 2016 a counterterrorism law was passed, and it sounds like ISPs/telecoms are required to store everything for 6 months, and it merely has to be requested by "authorities" (guessing beyond just the FSB) without a court order.
> Internet and telecom companies are required to disclose these communications and metadata, as well as "all other information necessary" to authorities on request and without a court order
> Equally troubling, the new counterterrorism law also requires Internet companies to provide to security authorities “information necessary for decoding” electronic messages if they encode messages or allow their users to employ “additional coding.” Since a substantial proportion of Internet traffic is “coded” in some form, this provision will affect a broad range of online activity.
> Then in 2016 a counterterrorism law was passed, and it sounds like ISPs/telecoms are required to store everything for 6 months, and it merely has to be requested by "authorities" (guessing beyond just the FSB) without a court order.
Looking beyond the EU, there are plenty of allegedly democratic countries on that wiki page with legally-required data retention:
In 2015, the Australian government introduced mandatory data retention laws that allows data to be retained up to two years. [..] It requires telecommunication providers and ISPs to retain telephony, Internet and email metadata for two years, accessible without a warrant
If you could get enough people doing it, yes. But in practice the only people who would care enough would be the people the governments want to watch. Even a decent chunk of the crypto community would rather dunk on a cryptosystem they don't like than actually encrypt their emails (although of course how much of that is the NSA disrupting things is an open question).
There are, but basically only because of Facebook, who seem to actually care quite a bit about crypto and have the clout to make things happen. I don't think it's coincidence that they're also the main sender of encrypted emails (you can set your PGP key and then they'll use it for all their emails to you).
When they find a way to "decrypt" /dev/urandom output into, you know, whatever is expedient for them at some future date, do you think a secret judge in a secret court is going to believe that you were just moving around random noise for the lulz?
If AES is cracked by the NSA then they know that there is a fault. They must therefore assume that others could know about it and might exploit that fault, either now or in the future. A massive amount of American infrastructure, including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
> a small percentage of the overall national security-related information assurance market
That quote does not exist in the one Wikipedia citation. I am not sure where they got it. Any time data flows from one physical location to another it is encrypted with Suite A algorithms, which is arguably where it matters the most.
If the flaw in AES is subtle enough, they may be (justifiedly?) convinced that nobody else's capabilities will suffice to discover and/or exploit it - or, at least, that they have enough of a grasp of what everyone else is doing that no other actor could discover the flaw without the NSA finding out about this.
Encryption bugs are fundamentally different than other exploits; if I send traffic protected by encryption then it needs to remain encrypted as long as the information needs to be secret. With national security that can be decades.
Moving any part of the US government off of AES would be noticed. The NSA doesn't send IT people to do installs. They set standards which are then incorporated into products by outside contractors selling to government agencies. Any attempted migration away from AES would be news here on HN within hours.
If they can find a flaw in AES 256, others can too.
My theory is they've got a backdoor in the iPhone hardware random number generator. It's the most obvious high-value target, it's inherently almost undetectable (the output of a CSPRNG is indistinguishable from real random numbers), and you can keep crypto researchers busy with fights about whose cryptosystem is best, safe in the knowledge that it doesn't matter what they come up with, they were owned from the start.
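Why an RNG is such an attractive target can be shown in a few lines: a deterministic generator keyed with a secret the attacker knows produces output that looks random to everyone else but is fully reproducible. SHA-256 in counter mode here is just a stand-in for whatever a hypothetical backdoored hardware RNG might actually do:

```python
# Sketch of a predictable "random" generator. SHA-256 counter mode stands
# in for the hypothetical backdoored hardware.
import hashlib

def drbg(seed: bytes, n_blocks: int):
    # "Random" stream = H(seed || counter): indistinguishable from random
    # without the seed, trivially reproducible with it.
    return [hashlib.sha256(seed + i.to_bytes(8, 'big')).digest()
            for i in range(n_blocks)]

device_secret = b'burned-in-at-the-factory'   # hypothetical backdoor seed

victim_stream = drbg(device_secret, 4)     # what the device emits as "random"
attacker_stream = drbg(device_secret, 4)   # regenerated offline by attacker

assert victim_stream == attacker_stream
# Any key material derived from this stream is known to the attacker,
# no matter how strong the cryptosystem built on top of it.
```

This is why no amount of auditing the cryptosystems layered on top helps: the compromise sits below all of them.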
I love reading the controversial things. Admittedly, it’s very commonly low quality like OP in this thread but sometimes you get good if unpopular arguments. The challenge to my worldview is a utility that’s difficult to replace.
You can't connect real things like these documents with slander by people who do nothing to step on the toes of the NSA. That is all that the BS about Assange or Appelbaum being a sex menace or Snowden being a Russian asset is. "Oh noes, they're a threat to the work we're not doing". Nobody is asking you to get drinks with Assange or Appelbaum. They don't want to be your friend. It's okay if you don't like them, for whatever personal reasons (and choosing to fall for this crap falls under personal reasons). It's not okay to be part of a mob that murders people by throwing a pebble each with this plausible deniability, in this "genuinely curious" just wondering kind of way. Enough is enough.
It certainly isn't fascinating. 3 letter agencies are torturing and murdering people, and having nothing better to do than gossip about gossip about messengers is just vulgar, boring, infantile cowardice, puffed up with not even clever words.