It is good that they kept the classical crypto alongside. However, the general tendency towards quantum-resistant cryptography leaves me puzzled. From my perspective as a physics PhD graduate, I firmly believe that a quantum computer capable of breaking public key crypto will never be built, because as you add more qubits, the interference between them increases due to the additional connections required.

It's similar to how FM radio works: there's a main frequency and several sidebands. When you adjust the tuner to pick up a station, you're essentially "interacting" with the corresponding station. But if there are too many stations, you may no longer be able to hear the music; all that remains is static noise.

This leads me to a somewhat cynical conspiracy theory. Imagine the moment when a curious government agency realises that building a quantum computer for this purpose is a futile endeavor. Instead of admitting this, it could perpetuate the idea that such a machine is just around the corner. Then, acting as a wolf in sheep's clothing, it could usher everyone towards quantum-resistant solutions which, thanks to the agency's more advanced research on them, unfortunately contain secret hidden backdoors. Has anyone thought about this?

Doesn't your argument apply to classical bits too? The more interconnected a classical bit is, the more parasitic coupling it will experience. That used to be an argument used against the feasibility of classical computers in the 40s (until von Neumann published work on fault tolerant classical computing).

Both classical and quantum computers (1) cannot "scale" without error correction because of analog noise (although this is less crucial on the classical side), but (2) can be built with error correction codes integrated into them to overcome that noise.

Also, you do not need all-to-all connectivity between your qubits (or bits) to build a scalable quantum (or classical) computer.

Edit: To add to your FM radio metaphor: you can have way more FM radio channels if each channel is on a separate coax cable (or, in physics speak, if you multiplex not only by frequency but also by spatial mode). There is no need to have all your qubits controlled by the same optical or microwave mode; you can have a physically separate control line for each qubit, and then eliminating cross-talk is as simple as inverting an n-by-n matrix.
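To make the matrix-inversion remark concrete, here is a toy sketch in Python, with made-up coupling numbers: model cross-talk as a linear mixing matrix and pre-distort the control signals through its inverse.

```python
import numpy as np

# Hypothetical linear cross-talk model for 3 control lines:
# C[i][j] is how strongly a signal sent on line j is felt by qubit i
# (diagonal = intended coupling, off-diagonal = cross-talk leakage).
C = np.array([
    [1.00, 0.05, 0.01],
    [0.05, 1.00, 0.05],
    [0.01, 0.05, 1.00],
])

desired = np.array([1.0, 0.0, 0.5])  # drive we want each qubit to see

# Pre-distort the sent signals by solving C @ sent = desired ...
sent = np.linalg.solve(C, desired)

# ... so that after cross-talk, each qubit sees exactly the desired drive.
received = C @ sent
assert np.allclose(received, desired)
```

This only works, of course, to the extent that the cross-talk really is linear and stable enough to characterize.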

To add to the sibling comment, the reason our classical computers work is because the individual transistor errors in your CPU are basically zero.

We do use “error correction” on storage (and do see bit errors creep into data stored on disk and in RAM over time) but not “fault tolerance” on the compute. In fact there is no such thing as fault-tolerant classical compute - the CPU only works if it is “perfect” or “near perfect” (or if you had an ancillary computer that was perfect to implement the correction). Note that occasionally computers do crash due to a bit error in your CPU, or you get an “unstable” CPU that you need to replace.

(We do create fault-tolerant distributed systems, where such faults can generally be modelled and remedied as network errors, not compute errors.)

Quantum fault tolerance relies on the fact that you can do “perfect” classical computation - which I find kind of amusing!

With the benefit of hindsight, it is easy to agree with what you say. But from the point of view of the scientists creating the first classical computers, classical fault-tolerance seemed just as difficult to them as quantum fault-tolerant computation seems to us. See von Neumann's "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" from 1952 https://static.ias.edu/pitp/archive/2012files/Probabilistic_...

Not that it is impossible for OP to be right, but the same argument was once applied to classical computation and ultimately turned out to be wrong in that context.

The idea that multiple redundant voting compute units would have seemed like magic to Johnny VN is unlikely, because that exact scheme was used with manual "computers" (people with desk calculators), for example in code breaking and nuclear weapon simulation. You'll find descriptions of these techniques in Feynman's talks, for example.

> Team member Raphael Some of JPL explains: "One way to use faster, consumer CPUs in space is simply to have three times as many CPUs as you need: The three CPUs perform the same calculation and vote on the result. If one of the CPUs makes a radiation-induced error, the other two will still agree, thus winning the vote and giving the correct result."
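A minimal sketch of that triple-redundancy voting scheme in Python (the "radiation-induced" bit flip is simulated, of course):

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant computations of the same value."""
    value, _count = Counter(results).most_common(1)[0]
    return value

# Three redundant CPUs compute the same sum; one suffers a bit flip.
correct = sum(range(100))      # the intended result
flipped = correct ^ (1 << 3)   # simulated single-bit upset in one CPU

# The two healthy CPUs outvote the faulty one.
assert vote([correct, correct, flipped]) == correct
```

The scheme relies on faults being rare and independent: if two of the three CPUs err on the same calculation, the vote elects the wrong answer.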

Yes. In practice it can be worthwhile doing this in distributed compute. These methods (and those from sibling comments) do rely on the error rate being low, and the data compared being much smaller than the amount of compute (eg bytes in final answer << number of CPU instructions used). AFAICT such faults can be treated much like network and storage errors.

What we can’t do is have a set of CPUs where roughly every 1000th instruction fails and hope that plugging a bunch of computers together to check each other’s progress will work. The specific issue is that the checking itself is wrong so frequently that you can’t “win” on a problem involving trillions of instructions by adding more computers. The overhead of “checking the checkers” just blows up.

What’s interesting about quantum computation is this is exactly what is proposed - have qubit error rates of (I think) around 0.1% and lots of measurement, classical compute, and feedback control to keep the computation “on track”. The whole scheme relies on the error rate for all of that classical compute step to be negligible.

This isn't really accurate. Fault-tolerant classical computing absolutely does exist as a field, and plenty of work has been done there. Some of the early work was done by von Neumann [1] because early computing hardware was extremely error-prone. Over time it turned out that these techniques were not really needed due to the fact that modern solid-state hardware is extremely reliable. The field of quantum computing actually resurrected a handful of ideas that were originally developed for classical computers.

More generally, nobody needs "perfect" classical computing either to make quantum computing work. Given a (quantum or classical) processor with some degree of error, the idea behind these techniques is to "boost" that into a processor with arbitrarily small error. It just turns out that with modern classical processors the error is so small that we suffer it rather than pay the cost of using these techniques.

It does not apply to classical bits in the same way. Quantum computers derive their computational power from the qubits being in a single quantum state across all of them (an entangled one, to use physics jargon).

This is distinct from classical computers, where you can describe a bit without needing to describe the other bits in the computer. You cannot describe a qubit (in a way that's computationally useful, at least) without describing all of them.

But the exponential cost (the need to describe the "whole") is there in the classical case too.

To describe a set of classical bits completely, you need a probability distribution over the whole system (including possible classical correlations induced by the crosstalk that OP spoke about). That probability distribution is a stochastic vector that has 2^n real positive components if we have n bits.

To describe a set of qubits completely, you need at least a "quantum probability distribution", i.e. a ket that has 2^n complex components (I am skipping the discussion of density matrices).

Both the classical and quantum descriptions of the bits require exponentially many real numbers. This exponential behavior on its own is not enough to explain the "quantum computational advantage". The exponential behavior is well known in plenty of classical contexts (e.g. under the name "curse of dimensionality") and is classically tamed with Monte Carlo modeling.
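For concreteness, here is a tiny Python sketch of the counting argument: both the classical probabilistic and the quantum descriptions of n (qu)bits need 2^n numbers (shown for n = 3).

```python
import numpy as np

n = 3  # number of (qu)bits

# Classical probabilistic description: a stochastic vector with 2^n
# non-negative entries summing to 1 (here: three fair coin flips).
probs = np.full(2**n, 1 / 2**n)
assert probs.shape == (8,)
assert np.isclose(probs.sum(), 1.0)

# Quantum description: a ket with 2^n complex amplitudes whose
# squared magnitudes sum to 1 (here: the uniform superposition).
ket = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)
assert ket.shape == (8,)
assert np.isclose(np.sum(np.abs(ket) ** 2), 1.0)
```

The difference is not the exponential size but what the entries are (non-negative reals vs. complex amplitudes that can interfere), which is exactly the point the Aaronson notes dwell on.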

Scott Aaronson's lecture notes cover that quite well.

In the end, the issue of crosstalk is modeled in very similar ways in the quantum and classical cases, and if it forbade the existence of quantum computers, it should forbid the existence of classical ones as well. In both cases it is the use of linear binary (or stabilizer) codes that solves the issue.

I may be misunderstanding your argument, but it seems like you are saying that modeling faults in a classical computer also needs to take into account the states of all bits in relation to one another, and that this somehow proves that the problem of crosstalk is similar to interference in quantum computers.

If I have understood your argument correctly, I don’t think the conclusion follows from the premise, because crosstalk is not fundamental to classical computing. By that I mean that for a given logical CPU design, crosstalk can be (and is) minimized through physical design characteristics (eg grounding, wire spacing etc) without reducing the size of the computation. The same cannot, afaik, be said of quantum computers, where the interaction between qubits is essential to computation.

I do not think I follow your last two sentences. Both for bits and for qubits, two-bit gates are required (e.g. NAND for classical and CNOT for quantum). The bits/qubits need to interact for such a gate to happen and should be isolated from one another before/after the gate. And same as with bits and grounding and wire spacing, we use various decoupling techniques for qubits.

Of course, qubits are drastically more difficult, but the principle is the same.

Good point. I said computation, but I was thinking more of the storage and routing pieces rather than the gates, likely because I don’t understand quantum gates very well. Most of the quantum ECC I have read about concerned storage lifetime and the retrieval process.

Basically what I mean is that classical computing can split things up during storage or routing to reduce coupling, but quantum can’t do this because the qubits must stay entangled to be useful.

WiFi, 5G, PCIe, Ethernet, QR codes, and pretty much any other classical communication protocol use error correcting codes where classical bits need to be sent in blocks. E.g. for PCIe 6, blocks of 256 physical bytes are sent, but only 242 bytes of information are transmitted, because the rest is used to enforce some classical correlation for the purposes of error correction. We cannot send smaller packets of data (unless we pad). There are probably many technical details I am oblivious to here on the classical side of things, but this "cannot split things up" constraint does not seem unique to quantum correlations (entanglement).
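As a concrete classical example of "bits must travel in blocks with enforced correlations", here is a Python sketch of the classic Hamming(7,4) code (not the actual PCIe code, which is more elaborate): 4 data bits get 3 parity bits, and any single flipped bit in the 7-bit block can be located and corrected.

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword.

    Positions (1-indexed): 1, 2, 4 hold parity bits; 3, 5, 6, 7 hold data.
    """
    d3, d5, d6, d7 = data
    p1 = d3 ^ d5 ^ d7  # covers positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7  # covers positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7  # covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming74_correct(codeword):
    """Flip back a single-bit error; the syndrome (XOR of the 1-indexed
    positions of all set bits) points directly at the error position."""
    syndrome = 0
    for pos, bit in enumerate(codeword, start=1):
        if bit:
            syndrome ^= pos
    fixed = codeword[:]
    if syndrome:
        fixed[syndrome - 1] ^= 1
    return fixed

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1  # flip one bit "in transit"
assert hamming74_correct(corrupted) == word
```

The point for this discussion: the parity bits only work because they enforce correlations across the whole block, which is the classical analogue of "you cannot treat the bits one at a time".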

And the way we mathematically express and model classical correlations in classical error correcting codes is virtually the same as the way to model quantum correlations (entanglement) in quantum error correcting codes.

All of this with the usual disclaimer: the quantum case is much more difficult from an engineering standpoint.

What makes qubits valuable is that they scale superlinearly the more of them you have entangled together. This means 8 individual qubits can store less information than 8 entangled qubits. Since they have to remain entangled to do valuable work, you can't physically separate them to prevent interference the way you could with classical bits. This is actually really well demonstrated in all the protocols you mention. None of those protocols operates on a bus that can send 256 bytes of data all together at once. They all chunk the data and send a small number of symbols at a time.

For example, in PCIe each lane can only carry one bit at a time in each direction. In typical consumer equipment there are at most 16 lanes of PCIe (eg a graphics card socket), meaning there can only be at most 16 bits (2 bytes) on the wire at any given time, but the bits are sent at a very high frequency, allowing for high transfer rates. This only works because taking those 256 bytes and sending them one by one (or 16 by 16) over the wire doesn't lose information.

I believe there are a couple of misconceptions in your first paragraph (but it is also plausible that both of us are talking past each other due to the lack of rigor in our posts). Either way, here is my attempt at what I believe is a correction:

- Both classical probabilistic bits and qubits need exponential amount of memory to write down their full state (e.g. stochastic vectors or kets). The exponential growth, on its own, is not enough to explain the conjectured additional computational power of qubits. This is discussed quite well in the Aaronson lecture notes.

- Entanglement does not have much to do with things being kept physically in contact (or proximity), just like correlation between classical bits has little to do with bits being kept in contact.

- Nothing stops you from sending/storing entangled bits one by one, completely separate from each other. If anything, the vast majority of interesting uses of entanglement very much depend on doing interesting things to spatially separate, uncoupled, disconnected, remote qubits. Sending 1000 qubits from point A to point B does not require a bus of width 1000, you can send each qubit completely separately from the rest, no matter whether they are entangled or not.

- Not even error correction requires you to work with "big" sets of qubits at the same time. In error correcting codes, the qubits are all entangled, but you still work with qubits one by one (e.g. see Shor's syndrome measurement protocol).

- I strongly believe your first sentence is too vague and/or wrong: "What makes qubits valuable is that they scale superlinearly the more of them you have entangled together". As I mentioned, the exponential growth of the "state descriptor" is there for classical probabilistic computers too, which are believed to be no more powerful than classical deterministic computers (see e.g. derandomization and expander graphs). Moreover, Holevo's theorem basically says that you cannot extract more than n bits of information from n qubits.

- Another quote: "Since they have to remain entangled to do valuable work, you can't physically separate them to prevent interference" -- yes, you can and very much do separate them in the vast majority of interesting applications of entanglement.

Not the parent (and I got my PhD more than 25 years ago), but the answer is no.

Quantum mechanics is a fickle thing. Schrödinger's cat has not been observed ;-) because scaling kills the quantum mechanical properties.

My (entirely theoretical) PhD project dealt with a two-dimensional electromagnetic cavity, with one 'perfect' mirror (100% reflecting) and one 'imperfect' one, say 99.999999999% reflecting.

My result was that if you start with a 'squeezed' vacuum, the squeezing disappears in just a few round trips, due to the bath/loss of the non-perfect mirror. My gut feeling tells me that if we have enough qubits to actually factor something bigger than 2x2, the loss/bath effect will destroy any useful use of the quantum computer.

So I am in the 'will not happen' category. (Not in 1, not in 5, not in 30 years, not in 500 years).

This is different from classical error correction: as far as I understand Shor's algorithm, there is no 'get N approximate answers, combine them, and get a better answer' part in there, while classically you could do 100 measurements and get a better approximation than with just one.

It doesn't change the overall point you're making but your "gut feeling" was already wrong 20 years ago. A group working for IBM factored 15 = 3x5 with a quantum computer in 2001.

You are also somewhat wrong about combining N approximate answers. It is correct that the "repetition code" approach (repeat the same calculation a bunch of times and average the results) does not work for quantum computers. However, there are quantum error correcting codes which do appear to work, although they require sufficiently small initial error rates (so-called thresholds) in order to help.

Yes. In fact the proofs that quantum error correction works as long as you're below a certain error rate (so-called "threshold theorems") are very, very similar to the proofs that error correction works in classical computers.

CRYSTALS-Kyber was designed by an academic team, not a government (though a government standards body refereed the competition that selected it; that competition, in turn, was driven by technical feedback that came predominantly from academic cryptography teams around the world). In general, with some exceptions (like curve isogenies), the "math" behind PQ crypto is pretty well established; it was just less attractive than curves were for performance reasons, but is now more attractive if you presume curves will be broken in 20 years by QC.

People have already said here most of what I want to say in this comment, but just to make it as explicit as possible:

Essentially the only reason anyone thinks that useful quantum computation is possible is because of things called threshold theorems, which state that as long as the noise in each qubit is less than some small but non-zero error rate you can add more qubits and use quantum error correction to make your computation arbitrarily precise. In other words as long as you're below the threshold rate quantum computers scale well.

Of course those threshold rates are very, very small, and creating significant numbers of qubits which are below the threshold rates is incredibly difficult, but theoretically it works.
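A back-of-the-envelope sketch of what "below threshold" buys you, using the commonly quoted scaling p_L ~ (p/p_th)^((d+1)/2) for a distance-d code (the threshold value and distance here are illustrative, not from any particular hardware):

```python
def logical_error_rate(p, p_th=0.01, d=11):
    """Rough threshold-theorem scaling: below threshold (p < p_th),
    increasing the code distance d suppresses errors exponentially;
    above threshold, it amplifies them. Numbers are illustrative."""
    return (p / p_th) ** ((d + 1) // 2)

below = logical_error_rate(0.001)  # physical error 10x below threshold
above = logical_error_rate(0.02)   # physical error 2x above threshold

assert below < 1e-5  # suppression: more qubits help
assert above > 1.0   # blow-up: more qubits hurt
```

This is the whole reason the sign of "error rate minus threshold" matters so much: on one side, adding qubits is a winning game; on the other, it is a losing one.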

>...as long as you're below the threshold rate quantum computers scale well.

Last I heard, getting below that threshold was going to take one or two orders of magnitude of noise improvement. That seems unlikely.

Say you were at a VC presentation and the company said that they had this really great system and the only thing stopping their immense success was the requirement to reduce the noise by an order or two of magnitude. Oh, and by the way, we already have the system very close to absolute zero. So you ask them what they are planning to do and they tell you that they don't have the faintest idea. Noise is always the ultimate limit on the information that can be obtained from a system. The most reasonable interpretation of the situation is that a technology doesn't exist and that there is no reason to think it would ever exist.

But when it comes to quantum computers the optimism is boundless. I am not sure why.

I agree with you that the general optimism towards quantum computing needs some tempering. It is certainly possible it doesn't work out and we never get a universal fault-tolerant quantum computer.

I think part of the reason for optimism is that there are a lot of different ways to make a quantum computer. Ion traps, linear optics, resonant superconducting circuits, topological quantum computing (if anyone ever actually discovers an anyon) and many more.

Different groups are working in wildly different directions right now, i.e. Google and IBM are doing superconducting stuff, Microsoft loves topological stuff, researchers at various places are doing trapped ions, and at least one company (PsiQuantum) is working on linear optics.

It certainly wouldn't be surprising if several of these approaches fail to work, but the hope is that they don't all fail.

> the only thing stopping their immense success was the requirement to reduce the noise by an order or two of magnitude

I mean, I know it's a different context, but in general a 1-2 order-of-magnitude improvement in hardware capability does not sound crazy on its face for most of the computer industry. Most computer hardware has improved by that much over very short time periods historically.

I don't necessarily disagree with what you're saying but multiple research teams do already have error rates below the threshold. It's still early days for the most promising QC platforms, so it's too early to tell if making a QC is possible or not.

I've heard physicists raise opinions like yours (i.e. QC will never be built for practical reasons), but I also hear ones that say the opposite. I'd err on the side of caution.

As for your conspiracy: The conclusion of that would be to continue using hybrid constructions.

Though, and I know crypto better than physics, I'd consider it highly unlikely. Creating backdoors that others won't find is next to impossible. Why do I say that? Because we have some historical evidence of how "NSA wants to plant a backdoor into an algorithm" works: usually the backdoors have been discovered quickly. They can still be successful because, as we saw with Dual EC DRBG, even though it was discovered quickly, nobody really cared and industry used the algorithm anyway.

But something like that won't happen in a transparent process. You can be sure that every expert in the field had a look at crystals-kyber. If anything about it was suspicious, we would know.

It's perhaps telling that the NSA has been rather aggressively against the use of hybrid systems, even though they have almost no marginal cost (an extra 56 bytes on top of a 1.2 kB PQ exchange) and are the obvious move, especially while the PQ systems are very new.

The transparency of the process depends in an essential way on the number of people who can understand what is being proposed. From the outside, lattice-based cryptography seems significantly more complex. The question is: would anyone notice, and how far-reaching are the proofs made of its security?
On what basis can one prove that a computer with a novel algorithm could not break it?

> As for your conspiracy: The conclusion of that would be to continue using hybrid constructions.

As long as ordinary crypto does not get deprecated.

Anyway, the number of responses made me curious about this novel CRYSTALS-Kyber. Do you have any recommendations on the best introductory text that explains it from a bird's-eye view?

> As long as ordinary crypto does not get deprecated.

On that note, just this month Tutanota emailed customers that their Secure Connect product is being turned off at the end of next month in order to focus developers on quantum-secure encryption solutions.

This occurs at a time when there appear to be precious few hosted E2EE webform-submission options that don't involve either a) bigtech or b) fly-by-night operations. Tutanota was a happy medium, and is getting out of that market, apparently.

It can make one wonder what kind of pressure might exist to turn off a quite good, working solution to an actual problem. If one didn't know better, it could seem that blaming the need for quantum is just a distraction.

The GP is not the first to make the observation in a natural line of inquiry. HN guidelines ask to assume good faith, and surely we know to try to.

Honestly, that sounds more like an excuse to get out of the market. They were probably facing a lot of competition from SecureDrop for the paranoid, and the non-paranoid don't care about security at all.

> I firmly believe that a quantum computer capable of breaking public key crypto will never be built. This is because as you add more qubits, there's increased interference between them due to the additional connections required.

Seems weird to assume what's possible based on current technical obstacles. If you trace the development of CPUs, or of many other technologies, many people with deep technical knowledge were certain about limits we have long since passed. This bias even has a name, which I forget.

I tend to take "it's impossible" statements from scientists seriously only when the reasoning can be firmly tied to an extremely well established physical law with no "wiggle room."

For example I accept that faster than light travel and inertialess propulsion are both impossible. If either of these things were shown to be possible, it would mean that there are huge errors or oversights in the most well established areas of physics.

I also accept the conditional impossibility of things that are just provably beyond our current ability for fundamental reasons, like a Dyson sphere. I don't know of any physics that says you could not build one, but for us it'd be like dust mites building the international space station.

For everything else I leave the door open. People have historically underestimated creativity.

For the first types of things, taking preparatory steps would be irrational. We don't need to plan for the arrival of FTL travel because we have no reason to think it will ever arrive.

For the latter types of things, preparing does make some sense as long as it's not unreasonably expensive. We do have reason to believe that a large quantum computer might be possible, so mucking around with a bit of code to defend our security against a surprise seems rational.

To a first approximation, the US government uses the same cryptography that US consumers do -- AES, SHA-2, the NIST P curves, ECDSA, etc. are all categorized for various levels of data integrity and confidentiality within the government.

The same will be true of PQ signing schemes, meaning that a backdoor would be predicated on the USG believing that they have some truly remarkable NOBUS breakthrough in PQC. That feels unlikely to me; NSA interventions in cryptographic design have historically gone in the opposite direction[1].

(This is separate from the actual design question. I don't know as much about stateless PQ signing schemes, but the stateful ones are mostly "boring" hash-based cryptography that's well understood to be strong in both settings.)

> NSA interventions in cryptographic design have historically gone in the opposite direction[1].

I'm not sure I'd say that, given some other designs and things that have gone on[1][2] - particularly the Dual EC debacle. They have a history of helping make suspect or downright compromised crypto if they think they can get away with it. That said, it does look like they avoid doing it to anything that gets US government approval for internal use, but it's difficult to say what lengths they would actually go to in order to get a backdoor out into the world that would let them look at others' secrets.

That’s fair. Maybe this is too fine of a hair to split, but I would categorize the Dual_EC fracas as less an intervention and more of a ham-fisted attempt to standardize something that mainstream cryptography was immediately suspicious of. But I suppose you could argue that there was similar suspicion around DES from the very beginning.

The main twist is that we don't know the future, but we do know how theoretical QCs would be able to break currently used cryptography, with quantum algorithms like Shor's algorithm: https://en.wikipedia.org/wiki/Shor%27s_algorithm

"As such, as quantum technology advances, there is the potential for future quantum computers to have a significant impact on current cryptographic systems. Forecasting the future is difficult, but the general consensus is that such computers might arrive some time in the 2030s, or might not arrive until 2050 or later." quoted from:
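For what it's worth, the classical skeleton of Shor's reduction is small enough to sketch in Python. Only the order-finding step (done here by brute force) is what a quantum computer would speed up; everything around it is ordinary number theory.

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force here; this is
    the step Shor's algorithm does exponentially faster)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical skeleton of Shor's reduction: factor n from the
    multiplicative order r of a modulo n. Works when r is even and
    a**(r//2) is not congruent to -1 mod n; otherwise retry with
    another a."""
    r = order(a, n)
    assert r % 2 == 0, "odd order: pick a different a"
    f = gcd(pow(a, r // 2) - 1, n)
    return f, n // f

# The famous small cases: 15 = 3 x 5 (with a = 7) and 21 = 3 x 7.
assert sorted(shor_classical(15, 7)) == [3, 5]
assert sorted(shor_classical(21, 2)) == [3, 7]
```

The base a = 7 for n = 15 matches the choice used in the early experimental demonstrations; for large n the brute-force `order` loop is hopeless, which is precisely where the quantum period-finding comes in.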

> The main twist is that we don't know the future but we know how theorical QCs are able to break currently used cryptography

Under the constraints of us correctly modelling the math of QC. Isn't it possible that we have gaps between our models of QC and how it works in reality that could make it such that these algorithms can't actually offer any speedup over classical approaches in the real world? Or similarly, even if they do work, maybe it's just impossible to build a computer with sufficiently many qubits to outperform classical approaches. Anyway, massive gap between theory and practice with no indication we're bridging it in meaningful ways.

> Under the constraints of us correctly modelling the math of QC.

You're right.

Those questions are legitimate, and we don't know.

It could even turn out that the post-quantum procedures we choose in our time produce ciphertexts quickly breakable by classical computers. And that eventually, one day, someone discovers this while trying to solve another problem and decides to warn everyone the right way (yay).

In such an unpropitious (but, again, we still don't know) scenario, how quaint it will seem, even if all those attempts to protect us from the future were sincere.

> Isn't it possible that we have gaps between our models of QC and how it works in reality that could make it such that these algorithms can't actually offer any speedup over classical approaches in the real world?

If BQP=BPP, or if BQP did not accurately model a quantum computer, I think that would be a much more interesting result than an actual working quantum computer. It would be world-shattering.

Or BQP is a purely theoretical construct with no real-world counterpart. Not necessarily a world-shattering result.

There’s plenty of math that exists purely in the virtual realm with no connection to physical reality. Math is a language to describe any possible universe; that doesn’t mean anything we say in it necessarily applies to our universe.

The good news is that if you're willing to take a 2x slowdown for asymmetric encryption (which basically everyone is) you can get the best of both worlds by wrapping your quantum resistant crypto in regular crypto.
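A sketch of the idea (the combiner label and the example inputs are made up, not a real protocol): derive the session key from both shared secrets, so an attacker has to break both the classical and the post-quantum scheme.

```python
import hashlib

def hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Hypothetical hybrid KEM combiner: hash both shared secrets
    (e.g. one from ECDH, one from a PQ KEM) into one session key.
    The key stays secret unless BOTH input secrets are compromised."""
    return hashlib.sha256(b"hybrid-kdf-v1" + classical_ss + pq_ss).digest()

# Illustrative placeholders for the two schemes' outputs.
key = hybrid_secret(b"ecdh-shared-secret", b"pq-kem-shared-secret")
assert len(key) == 32  # one 256-bit session key
```

Real hybrid constructions (e.g. as deployed in TLS and Signal's PQXDH) are more careful about domain separation and transcript binding, but the principle is the same: the combined key is at least as strong as the stronger of the two components.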

So you are claiming the protocol that Signal has adopted is already backdoored by the government. Extraordinary claims require extraordinary evidence. You need to provide some kind of evidence of this. We are talking 20+ years of open and public research on post-quantum cryptography.

I agree with this. My concern only applies if the classical crypto gets deprecated, or if new solutions use PQC exclusively while citing widespread hybrid use as an argument for trust.

It seems that your primary concern is that the government (or some bad actor) will be able to install a backdoor into PQC algorithms. Is that right? Why would PQC be more exposed to this type of subversion than existing public-key cryptography?

To your point about PQC being used exclusively, post-quantum encryption methods are designed to be resistant to both quantum and classical attacks. That is one of the key stated goals of the NIST post-quantum cryptography program.

It's less about a backdoor and more about just being a lot less robust in general. Classical crypto is based on ~100 years of math on finite fields and ~50 years of heavy scrutiny by cryptographers. The post-quantum algorithms are much newer and built on much less well-studied math (and empirically, a large number of them have been found to be completely broken). We're at least 20 years away from PQC that can be widely trusted; there really isn't an alternative to having a generation of grad students study an algorithm that's as old as they are.

For signatures, hash-based signatures are quantum-computer-resistant and are also more secure than any other signature scheme. There is no reliance on a math problem, unless you count the cryptographic permutation as one - but then everything relies on that, regardless of which scheme is used.

The McEliece cryptosystem[1] is one of the finalists in the PQC competition, and it's also quite old - developed in 1978. It didn't face as much scrutiny as RSA or ECC because its large key sizes resulted in nonexistent adoption.

My understanding is that all the other PQC candidates including Kyber are much newer and far less studied.

If I was NSA, that's exactly the kind of fud (fear, uncertainty and doubt) I'd try to propagate.

While I appreciate and respect the useful criticism, I would be careful with this kind of content:

Regardless of whether you read this somewhere or authored it: if you're right, then go ahead and show the world how we're wrong; if you're wrong, you're helping quantum attackers.

FM radio has the capture effect; interference there is not additive as in AM. You're very likely to capture the strongest station unless there's a lot of interference.

As for QC, we don't seem to be running into any limits yet. It may well be that any limits are well above a useful number of qubits.

I assume you’re not proposing some kind of interference limit in principle?

Are you suggesting that limiting interference will be a practical dead end that prevents advancement?

Either way that would be a pretty significant claim. There are lots of research directions being pursued and plenty of smart people think it’s worth trying.

> Are you suggesting that limiting interference will be a practical dead end that prevents advancement?

This is a hunch I have. Regarding the "plenty of smart people think it's worth trying": I can only offer the analogy of the 14-15 puzzle, known as the Boss puzzle at the time, for which a substantial prize was promised to the first person who could solve it. A lesser-known proof that it is impossible only came to light later. There is a lot of inertia in academia along those lines, where grants depend on your ability to make a convincing argument that your path will solve the problem. This sets up PhDs to know only how to advance, not how to question, since questioning doesn't win the prize.

Can you explain how qubits are physically implemented in a real-world computer? I just cannot wrap my mind around what they're made of and how they operate in the physical reality.

My understanding of the more popular superconducting types is that a bit of superconductor is connected up to a junction that allows quantum tunneling of electron pairs. The number of electron pairs that tunnel through the junction determines the state and the state is read out by an exotic electrometer.

As for how the chip itself is made, as far as I know it's a relatively standard lithography process, except with added superconducting layers.

Given that Signal's main innovation (compared to traditional end to end encryption) was to safeguard its users against future compromises via the ratchet protocol, this actually seems like a logical move for them to make.

I don't think these incentives make sense at all. Government organizations suspected of developing quantum computers probably have annual budgets larger than $20 billion. The ability to undermine virtually all cryptographic systems is almost unquantifiably valuable.

Once the cat is out of the bag, everyone will rush to post-quantum cryptography and all that value will be lost in a relatively short period. Indeed, we already witnessed this in the 2010s following the Snowden revelations when big tech, in a concerted effort, adopted HTTPS. Now that is the standard.

For example, "The Fiscal Year 2022 budget appropriation included $65.7 billion for the National Intelligence Program, and $24.1 billion for the Military Intelligence Program."

It's worth way more than $20B USD to have a working quantum computer that nobody knows about. You don't burn a weapon like that by inducing everyone to update immediately.

This is a myopic take: the attacker could not spend the bitcoins because of the public ledger, and the value of Bitcoin would drop to nothing once it was realized that wallets are not secure. They'd burn Bitcoin for no gain, or even a loss, because they would reveal their capabilities and maybe even who they are.

Bitcoin is already quantum attack resistant, unless you use un-hashed public keys or reuse Bitcoin addresses (as some do).

If Bitcoin became vulnerable, its value would collapse to zero overnight once it was known. There is a limited amount of money anyone could extract before the value collapses.

That is a misleading claim. First, any quantum key cracker would need to be fast, since all the operations would have to be performed within the coherence time; an attacker could race coins as they were spent, or perform small reorgs to steal coins even if they lost the initial race. Second, the majority of all circulating coins are stored in addresses which have been reused. Third, the common hashing scheme you mention is 160 bits, so against a quantum computer it would have only 80 bits of security against second preimages just from Grover's algorithm, perhaps worse with more specialization (and, in fact, somewhat less considering multi-target attacks), which wouldn't and shouldn't be regarded as secure.

> If Bitcoin became vulnerable, its value would collapse to zero overnight once it was known. There is a limited amount of money anyone could extract before the value collapses.

Once it's known. There have been insecure altcoins where hackers skimmed them for many months without being noticed. It is indeed technically finite, sure, but large.

> The majority of all circulating coins are stored in addresses which have been reused.

Interesting, do you have any statistics on that? But I guess with large exchange wallets, it makes sense.

> only have 80 bits of security against second preimages just by using Grover's algorithm

True, but 80 bits is anything but trivial to brute-force using classical computers! I'm not that familiar with quantum complexity, but as I understand it, you'd still need ~2^80 quantum operations to brute-force a 160-bit hash.
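Right — Grover gives a quadratic speedup, so an n-bit preimage search takes on the order of 2^(n/2) quantum operations. Some back-of-envelope arithmetic (the 10^9 coherent quantum ops/sec rate below is a made-up, optimistic assumption):

```python
# Rough feasibility check for Grover preimage search.
SECONDS_PER_YEAR = 3.156e7

def grover_years(hash_bits, ops_per_second=1e9):
    queries = 2 ** (hash_bits / 2)   # Grover needs ~2^(n/2) oracle calls
    return queries / ops_per_second / SECONDS_PER_YEAR

print(f"{grover_years(160):.2e} years")  # ~2^80 queries: tens of millions of years
print(f"{grover_years(256):.2e} years")  # ~2^128 queries: astronomically worse
```

Note also that Grover is inherently serial: splitting the search across k machines only buys a √k speedup, so 80 bits is less fragile than it sounds, even if it's below modern comfort margins.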

Bitcoin wouldn't just "become vulnerable"; someone would discover the vulnerability. If that person put it to use before making it widely known, they could definitely extract a large amount of money from the Bitcoin network before the public at large noticed (at which point the price would start to plummet).

My understanding is that bitcoin addresses are quantum safe as long as you do not reuse an address after spending funds sent to it [0]. Per the linked article, this is standard practice, so I would assume the majority of addresses are actually quantum safe.

And for more context: with p2pkh addresses, you are sending to the hash of the public key, and hashes are quantum safe.

Why not use something like backchannel? That way we wouldn't need phone numbers either...

The initial shared private key exchange could be done with more expensive, quantum resistant cryptography but the actual communication could be done through symmetric encryption.

> The initial shared private key exchange could be done with more expensive, quantum resistant cryptography but the actual communication could be done through symmetric encryption.

That's exactly how Signal's symmetric ratcheting works (with the addition of an asymmetric ratcheting step to limit the forward impact of a key compromise on either side, which you can't do symmetrically).
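For anyone curious, the symmetric (KDF chain) half of that ratchet can be sketched in a few lines of Python. This mirrors the idea, not Signal's exact construction or constants:

```python
import hmac, hashlib

def kdf_step(chain_key: bytes):
    # Derive a message key and the next chain key from the current chain
    # key, then discard the current one.  Because the hash is one-way, a
    # later compromise of the chain key cannot recover earlier message keys.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

chain = b"\x00" * 32  # stand-in for a shared secret from the initial key exchange
for i in range(3):
    mk, chain = kdf_step(chain)
    print(i, mk.hex()[:16])  # each message gets a fresh key
```

The asymmetric (DH) ratchet layered on top is what lets both parties periodically mix fresh entropy into the chain, healing the session after a compromise — that part has no symmetric equivalent.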

> For the key exchange itself ("PAKE")

A PAKE requires a short shared secret. That does not usually exist in Signal's scenario.

Well, I don't understand the exact details but it seems mostly to do with possible vulnerability to brute forcing if certain pseudo-random number generators are used:

> A PAKE requires a short shared secret. That does not usually exist in Signal's scenario.

I'm not too familiar with the details of how Signal's initial key exchange works, other than that from a user experience point of view it relies on people having shared phone numbers in at least one direction. So in practice this is kind of similar to the sharing of a one time password. Basically the users already need some kind of a "trusted outside channel" to exchange phone numbers.

From that vantage point, requiring people to "pair" in a secure context, most likely in person, wouldn't be that much different in practice from the way people usually exchange global public identifiers now (social media handles, phone numbers, etc.) over existing trusted channels. And storing a shared private key per contact in a kind of pet-name address book is conceptually so close to a regular address book already, which I find kind of amazing.

The big difference being that you'd obviously want to keep this address book secret, whereas phones have made it incredibly easy and have normalized leaking your contacts to any app that requests it. In practice, preventing private key leaks would be a challenging part of building out something like backchannel. The shared secret petname address books would probably need to be application specific, used in a sandbox and encrypted at rest.

Maybe there could be some standard tools for securely using one application's channels to bootstrap connections on another. Ideally, though, it should be very, very hard for the shared secrets to leak, but easy enough for users to manage and stay aware of them.

Ah, it's apparently a quantum-hardened variant of AES! But AES-256 is inherently quantum-hard and much better understood at this point. In other words, AES is really not the security bottleneck against quantum adversaries.

> I'm not too familiar with the details of how Signal's initial key exchange works, other than that from a user experience point of view it relies on people having shared phone numbers in at least one direction. So in practice this is kind of similar to the sharing of a one time password.

A user's phone number is public information; you can't use that as the password in a PAKE.

Signal has basically two ways of exchanging keys: Unauthenticated (in which case you trust Signal's service to really provide you your contact's keys, and not their own, which would allow them to MITM your chats), and "safety number comparison", for which you have to manually compare a short numerical string with each contact over a trusted channel (ideally in person). It's less convenient, but removes Signal's servers from the set of entities you need to trust.
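For illustration, a fingerprint of that kind is just a deterministic function of both identity keys, computed independently on each phone. Signal's real format uses iterated SHA-512 and a 60-digit decimal encoding; the sketch below (with invented sizes and encoding) only shows the general shape:

```python
import hashlib

def safety_number(pub_a: bytes, pub_b: bytes) -> str:
    # Hash each party's identity key separately, then join the two halves
    # in a canonical (sorted) order so both sides compute the same string.
    def half(pub: bytes) -> int:
        return int.from_bytes(hashlib.sha256(pub).digest()[:8], "big") % 10**12
    a, b = sorted([half(pub_a), half(pub_b)])
    return f"{a:012d} {b:012d}"

print(safety_number(b"alice-identity-key", b"bob-identity-key"))
```

Because the number commits to both public keys, a MITM who substitutes either key produces a different number on the two phones, which the in-person comparison catches.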

Theoretically Signal could also offer a PAKE option to interactively authenticate a new contact remotely (if you do indeed have a shared password), but that's probably not easy to do in a way that's not prone to social engineering attacks, since you don't have a channel to communicate about what your password should be. (Remember that you don't trust the contact yet; an attacker could pick a password hint that they know the answer to as well!)

> A user's phone number is public information; you can't use that as the password in a PAKE.

For sure, I just mean that people are already used to exchanging numbers, in person, to establish phone contact. So from a user experience point of view, exchanging a PAKE password for a shared secret isn't that much different from exchanging phone numbers. The resulting shared secret could be treated very much like a phone number, except it's like a secret phone number between you and that person.

> Theoretically Signal could also offer a PAKE option to interactively authenticate a new contact remotely (if you do indeed have a shared password), but that's probably not easy to do in a way that's not prone to social engineering attacks, since you don't have a channel to communicate about what your password should be. (Remember that you don't trust the contact yet; an attacker could pick a password hint that they know the answer to as well!)

This kind of thing makes the most sense to establish a secure connection with someone you already trust over an existing trusted channel. Much in the same way that you'd commonly get someone's phone number for the first time at a party, in person. (The physical environment of the party being the trusted channel.)

Assuming these generative fake voice/video models get to the point (over the next five years) where people cannot easily differentiate between a real and fake conversation over an arbitrary/untrusted digital channel, I'd assume that the most trusted channels for these kinds of exchanges will be IRL in person or established secure channels.

> A user's phone number is public information

One of the big arguments in the backchannel proposal is that global public identifiers are vulnerable to spam, fraud, impersonation, etc... and that maybe we don't actually need them as much as we assume? Maybe we can instead share per-contact shared secrets as a form of identity?

That is very well-written, as someone else pointed out, though this common explanation for laypeople needs work (I'm not blaming Signal's blogger, who wrote it more carefully than most):

"Instead of bits as in a classical computer, quantum computers operate on qubits. Rather than 0 or 1, qubits can exist in a superposition of states, in some sense allowing them to be both values at once."

'Instead of beads as in a classical abacus, our Quabacus operates on Quabeads! Rather than positions 0 or 1, quabeads can be in both positions at once!'

Beads that are simultaneously in both positions sounds like a f$@!#g annoying glitch and not a feature - how does that help anyone record or calculate numbers? ('Would someone take a look at this broken Quabacus-abacus and resolve these g#$%!m glitching quabeads?!!!') It mocks the non-technical reader, who assumes they must have been given enough information to understand why it's faster and possibly how it works, but can't figure it out.

They have not been given enough. Does anyone who perhaps understands it better than I do want to take a stab at a new commonplace explanation, one that connects the dots between quantum superposition and performance (for certain calculations)?

You are correct. That is a horrible explanation. And incomplete.

* The state of a qubit has two basis states. Arbitrary states are linear combinations (=superposition) of these basis states. Importantly, the coefficients in the linear combination can be positive or negative.

* The state of N qubits has 2^N basis states. poly(N) gates can put the N qubits into a state in which each of the 2^N basis states has a non-zero coefficient.

* In quantum algorithms, we play with the coefficients. Certain problems (like factoring) have structure such that you can change the coefficients so that the ones corresponding to the correct answer have magnitude approximately 1, while the other coefficients have magnitude 0. This is enabled by the fact that coefficients can be positive or negative, so by mixing them you can cancel some out.

* When you measure the qubits, each possible answer comes out with probability proportional to its coefficient squared. Hence, with high probability you get the right answer.

Where does the speed up come from? When you put the state of N qubits in the linear combination of 2^N basis states, and run it through the problem, that is akin to computing all answers in parallel. Like for factoring, checking all possible divisors to see which ones work. But to extract the answer out in the way described above, you have to do additional processing. This only works for some problems, with the right structure.
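The sign-cancellation point is easy to see in a tiny simulation: apply a Hadamard gate to |0⟩ twice, and the amplitudes for |1⟩ interfere destructively (plain Python, no quantum library assumed):

```python
import math

# A qubit state is a pair of amplitudes [a0, a1] for basis states |0>, |1>.
def hadamard(state):
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]  # note the minus sign

state = [1.0, 0.0]        # start in |0>
state = hadamard(state)   # [0.707, 0.707]: the "both at once" superposition
state = hadamard(state)   # the +1/2 and -1/2 amplitudes for |1> cancel
print([round(a * a, 6) for a in state])  # → [1.0, 0.0]
```

With all-positive probabilities, applying the same "randomizing" step twice could only keep mixing; with signed amplitudes, the second application undoes the first. That interference is the resource quantum algorithms exploit.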

> In quantum algorithms, we play with the coefficients.

Is it accurate to say, more simply (simplicity being a high priority here), that 'we play with the probabilities' of collapse to one basis state or to the other - thus skipping the need to explain linear combinations or coefficients?

Well, all the playing happens with the positive/negative coefficients. Without explaining that they can cancel each other out, it will not be clear how all-positive probabilities could ever concentrate on just some basis states.

In fact, classical stochastic computational models have exactly this deficiency. You can't pump up probabilities of some unknown states. They can only tend towards a uniform mixture.

There isn't really a great way of explaining quantum behavior using everyday (classical) terms. Any analogy you come up with will be deeply flawed yet still opaque to the reader.

The only way is to gear up on math to the level where you can, if not reason within the theory, then at least make sense of its presented conclusions.

You're frustrated because it's a bullshit answer based on a lack of understanding of what a superposition of states represents and what quantum computing is calculating. Qubit superpositions represent probability distributions between 1 and 0, and quantum computers calculate probable ranges of correct results for certain types of problems. They can produce these ranges of probable correct results using much less memory and time than it would take a classical computer to calculate similar results; we're talking an exponential improvement from each additional qubit compared to each additional classical bit. For example, in 2019, Google used a 53-qubit quantum computer to complete a task in 200 seconds that Google claimed would take 10,000 years to finish with the best known classical computer at the time. (Note that IBM claims it would take only 2.5 days on their classical computer, but you can see the scale of improvement is huge either way.)

The shortest possible answer is that qubit states are modeled as two-dimensional vectors on the complex unit sphere. We arbitrarily designate two orthonormal vectors on this sphere as corresponding to classical states 0 and 1. If the qubit vector isn't in the 0 or 1 state, it's in some linear combination of them. This is called superposition. Since most people don't know what linear combination means, superposition is explained as "sort of both at the same time". Upon measurement the qubits are collapsed to 0 or 1 with some probability proportional to how close they are to the 0 and 1 states. The precise probabilities are given by something called the Born rule. I gave a longer talk aimed at computer scientists if you're interested beyond this explanation: https://youtu.be/F_Riqjdh2oM
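A minimal illustration of the Born rule with such a vector (plain Python; the amplitudes are hypothetical, chosen for round numbers):

```python
import random

# A qubit as a 2D complex vector; the Born rule says outcome k occurs
# with probability |amplitude_k|^2.
state = [complex(3 / 5, 0), complex(0, 4 / 5)]   # |a0|^2 = 0.36, |a1|^2 = 0.64
assert abs(sum(abs(a) ** 2 for a in state) - 1) < 1e-12  # on the unit sphere

def measure(state) -> int:
    # Collapse to 0 or 1 with Born-rule probabilities.
    return 0 if random.random() < abs(state[0]) ** 2 else 1

random.seed(0)
counts = [0, 0]
for _ in range(100_000):
    counts[measure(state)] += 1
print(counts)  # roughly [36000, 64000]
```

Note the complex phase on the second amplitude changes nothing about a direct measurement; phases only matter once amplitudes get combined by gates, which is where interference comes in.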

I don't really understand your objection, that description seems like about as well as you can do when trying to summarize quantum mechanics in one sentence.

If current quantum computers were scaled up to more qubits, could they break modern crypto? Or would we need both more qubits and a new quantum computer architecture?

I wonder how possible it is that IBM could have already gone further and are already cracking modern crypto in secret, e.g. funded by the NSA. Is that a crazy conspiracy idea, or actually a possibility?

> Is that a crazy conspiracy idea, or actually a possibility?

I am investing in IBM under the assumption that this is an actual possibility. Their public QC roadmap actually looks like a realistic journey now.

I strongly believe that the NSA, et. al. currently have access to a very powerful quantum computer - likely constructed by IBM under contract.

The game theory around this is such that it is impossible for me to accept that there are zero secret quantum computers in existence by now. There is too much to lose by not playing that game as hard as you can.

Speaking as a researcher in quantum computing (albeit completely on the theory side, with no practical knowledge of experiments): it seems that actually making a quantum computer which is useful (i.e. has an error rate below the threshold you need for error correction to work) is incredibly difficult. I wouldn't be surprised if various secret agencies (specifically in the USA and China) have tried, but I would be quite surprised if they had succeeded.

(I deleted my previous edit because I had misread part of what you wrote.)

You are probably mistaken. The number of people with the right expertise to build QCs is very limited: only a few hundred people with world-class PhDs in quantum computing are produced every year across the world, and only a small fraction are truly innovative - the ones who can act as leaders to build something real.

The challenge of building QCs - as evidenced by billions of dollars worth of research in them - is many orders of magnitude more difficult than say the Manhattan project. The latter put together the best of the best on the project. You are suggesting a scenario where a tiny fraction of the best of the best are secreted away, with many of their past collaborators unaware of their doings, and have successfully built a QC.

Meanwhile, the many brilliant best of the best who are working publicly, with many billions of dollars of research funding, are only making very slow progress. It simply does not square.

Reminds me of how everyone who knew anything about the physics academia scene in the 30s/40s knew what was going on at Los Alamos. Second-order effects are extremely hard to obscure.

It would be interesting to guesstimate what the NSA might be doing by analyzing the skills they're looking for in their job postings and the kind of open source projects they have released.

For example, their Accumulo OSS suggests they're capturing and storing a lot of data to analyze later. Ghidra being a best-in-class reverse engineering tool also suggests that a lot of their work revolves around finding zero-day vulnerabilities.

I bet that, at the very least, the US government and other large governments have some way of knowing what is actually possible TODAY, and have plans in place to make sure that whenever it becomes practical, they get the very first useful ones built.

I would guess they probably don't have any actually useful and in production right now, but they probably have a few secreted away in development, so they will be ready to put them to use if/when they do become useful.

> If current quantum computers were scaled up to more qubits

That depends on what you mean by "scaled up". There is a concept called "Quantum Volume", which roughly measures the largest square circuit (equal width and depth) a machine can run successfully.

That goes into a clear discussion of how to build a quantum computer and the associated thresholds that would allow you to do so. There is a minimum number of qubits needed (that work perfectly), but the paper analyzes how many qubits you'd need under realistic assumptions about how many noisy qubits you'd need to get error correcting qubits at the needed reliability.

For anyone looking for a "headline figure" from the linked arxiv manuscript, their estimate for breaking 2048 bit RSA is around the order of magnitude of a billion qubits.

Actively resisting future attackers and hardware is an incredibly forward-thinking thing to do, bravo. How long into the future is an achievable and desirable duration for encryption (barring any rapid, unforeseen paradigm shift)? If ten years is acceptable for declassification of standard documents in the US, is this a reasonable target for day to day signal chats?

Maybe we need a statute of limitations for encrypted data to help with future-proofing and make the collection useless in a court of law? If you go to lengths to encrypt your data, there should be some current and future expectation of privacy around it, even if someone can decrypt it.

To my understanding, despite variance from state to state, a general "rule of thumb" for the statute of limitations outside of "the big R" and "the big M" is ten years. This squares with the generic declassification timetable. I can't think of anything I'm genuinely upset about from more than a decade ago. I feel that I am an almost completely different person than I was a decade ago. If I found out someone robbed a bank ten years ago I'd be more inclined to think "That's wild, how did that go?" than I am "Oh no this guy is going to rob me".

> How long into the future is an achievable and desirable duration for encryption (barring any rapid, unforeseen paradigm shift)?

I don't think "years of expected security" (as used to be popular for e.g. RSA key lengths for some time) is a meaningful metric anymore:

AES-256 and elliptic curve encryption are resistant against classical attackers until beyond the heat death of the universe, so their "time of security" is, for practical purposes, infinite.
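Some back-of-envelope arithmetic supporting that claim (the attacker's throughput is a deliberately absurd, made-up number):

```python
# Even a planet-scale classical attacker (say 10^21 key trials/sec, far
# beyond anything that exists) cannot touch a 128-bit keyspace, let
# alone 256 bits.
SECONDS_PER_YEAR = 3.156e7

def brute_force_years(key_bits, ops_per_second=1e21):
    return 2 ** key_bits / ops_per_second / SECONDS_PER_YEAR

print(f"{brute_force_years(128):.1e} years")   # ~1e10 years: about the age of the universe
print(f"{brute_force_years(256):.1e} years")   # ~1e48 years: utterly out of reach
```

Each extra key bit doubles the work, so the gap between "hard" and "physically impossible" is only a few dozen bits wide; that's why "years of security" stopped being a useful dial.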

I'd expect that, for quantum-safe asymmetric algorithms as well as for AES, there is a similar number corresponding to fundamental physical infeasibility, and then we can also just pick that rather than any low or high number of years.

>I'd expect that, for quantum-safe asymmetric algorithms as well as for AES, there is a similar number corresponding to fundamental physical infeasibility, and then we can also just pick that rather than any low or high number of years.

Ah! My understanding is out of date. Thank you for the detailed answer.

Post-quantum secure encryption can't be developed early enough. In the end all our data today will be at risk of being decrypted in the future - unless we secure it now!

That's not exactly the most compelling blog post. Way too many buzzwords. Why would end-to-end encrypted cloud storage even be using public-key cryptography? You're sending the encrypted data to yourself, forward in time.

>PQXDH provides post-quantum forward secrecy and a form of cryptographic deniability but still relies on the hardness of the discrete log problem for mutual authentication in this revision of the protocol.

So that's why active MITM with a contemporary quantum computer is a concern mentioned in the blog post. Of course it isn't of any concern currently (since no one has the hardware to exploit this), but I'm curious why they couldn't fit the CRYSTALS-Kyber method for mutual auth into this hybridized implementation. Performance concerns?

That's likely simply because they don't want to switch fingerprint formats again just yet. (They are currently in the process of upgrading the format for a non-cryptographic reason [1].)

Signal fingerprints, which users can manually verify in person or over a trusted channel, are just hashes over the public keys of both users involved – and if these keys change (e.g. due to a quantum upgrade), the format would need to change as well.

Update: Seems like that's actually due to a fundamental restriction of the quantum-safe primitives used and is addressed in the technical specification [2]:

> The post-quantum KEM and signature schemes being standardized by NIST [...] do not provide a mechanism for post-quantum deniable mutual authentication [...]

Seems like Signal's neat trick of using Diffie-Hellman in a three-way manner [3] doesn't work here, since the primitive used (FIPS 203, [4]) is only a key encapsulation method, and FIPS 204 only offers "regular" post-quantum signatures of the non-deniable kind.

Signal highly values deniability, and in this version they seem to have prioritized that in favor of quantum-safe mutual authentication.

Very well written and digestible. I have much respect for the Signal people.

However, I'd like to mention that using usernames instead of phone numbers has been met with the classic "soon™" response for years now. When will they actually do it? This is the only thing I really, really dislike about Signal: their lack of communication on oddly specific things. Like them holding back the codebase for months on GitHub so as to not spoil the.. Surprise! MobileCoin!

As much as I hate cryptocurrencies and the MobileCoin effort (that should never have been part of Signal IMO), I think that "playing with MobileCoin" was low-risk as a side project (that never took off as far as I know?).

However, moving from phone numbers to username seems very tricky to me.

- They have built their system around the idea that they can use the address book that already exists in users phones and not learn about it. It works (at least in theory), it's just that some people don't trust them for that. Or some people have a threat model that is incompatible with that, I suppose. Changing their whole system for a very small minority of very vocal users is a risk. Also if going for usernames makes them collect more metadata, then other users will complain (or maybe even the same that currently complain about the phone numbers).

- Most people use WhatsApp, over which Signal is an improvement in terms of privacy. Those people don't give a damn about sharing their phone number, so the reason they don't move to Signal is surely not this (because Signal certainly doesn't do worse with the phone number). So changing that would be a risk.

Moving to usernames is a cost, and a risk. All for a few very vocal users who probably have a threat model that requires it. I would understand if they never moved to username, and I would be fine with it.

About time! People picked this up soon after it was committed to the repository back in May, and the beta version of Signal has had dual keys aka "safety numbers" for a while now (maybe 1.5 months?). Happy to see they decided to release a blog post about it after all :)

The blog doesn't mention it, but based on a code comment, it seems that ~two months from now the new key fingerprints will become mandatory for peers to remain trusted after you update your client

From the blog post:

> We want to extend our sincerest thanks and appreciation to all the people who contributed to the development of this protocol upgrade. This includes the cryptographic research community, the Kyber team, and the following people who directly contributed to our whitepaper

All that behind closed doors, apparently.

There was scarcely a mention of PQXDH to be found on the web besides the Signal source code and the handful of people that picked up on it on social media. A github ticket about adding post-quantum support was responded to with "we have no announcements to make" and then closed due to inactivity. I suppose one only needs so many cooks, but why not have this whitepaper, the ideas going into the protocol design, the timeline, whatever goes into this security decision for an open source app visible, even if only read-only? Feels more like source-available than open source spirited, but I guess that's in line with "the ecosystem is moving" (Moxie's talk where he says that they can do better without a community developing more clients, integrations, federation, etc.)

I am a bit puzzled: governments and big corporations are pouring indecent amounts of money into developing quantum computers, whose main application, afaict, is breaking cryptography.

...and this is defeated by changing our algorithms ?

What's the use in developing quantum computers, then?

Three-letter agencies have been collecting and harvesting encrypted communications for decades, in the event that they can break the algorithms later (in this case, that would mean quantum computers).

If your threat model includes someone breaking your encrypted communications X years from now, quantum resistance can be important. Signal markets itself as for reporters and whistleblowers who may have state-level adversaries.

If you're seriously worried about that you're already using disappearing messages, and the maximum retention period is 4 weeks. It's good that they're doing this but it also feels a little bit like hype, while very basic UI problems remain unfixed.

For example, you can do backups or media exports from signal, but the user isn't given any control over where they are stored. If you want to dump them to an SD card in your phone, for example, you'll have to find them in your primary storage and then copy them manually to the SD card. Bizarrely, voice messages are dumped into a 'music' folder even though they're labeled as 'audio' within the Signal UI.


We do use “error correction” on storage (and do see bit errors creep into data stored on disk and in RAM over time) but not “fault tolerance” on the compute. In fact, there is no such thing as fault-tolerant classical compute - the CPU only works if it is “perfect” or “near perfect” (or if you had an ancillary computer that was perfect to implement the correction). Note that occasionally computers do crash due to a bit error in the CPU, or you get an “unstable” CPU that you need to replace.

(We do create fault-tolerant distributed systems, where such faults can generally be modelled and remedied as network errors, not compute errors.)

Quantum fault tolerance relies on the fact that you can do “perfect” classical computation - which I find kind of amusing!
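The storage-side error correction mentioned above can be made concrete with a toy Hamming(7,4) code, the classic single-error-correcting scheme. This is a minimal sketch of the idea, not what ECC RAM literally runs:

```python
# Hamming(7,4) sketch: encode 4 data bits into 7, and correct any single
# flipped bit. Codeword layout (1-indexed): p1 p2 d1 p4 d2 d3 d4,
# with parity bits at positions 1, 2, 4.

def hamming_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-indexed error position; 0 = no error
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

word = [1, 0, 1, 1]
code = hamming_encode(word)
code[5] ^= 1  # simulate one bit flip in storage
assert hamming_decode(code) == word
```

The point of the comment stands, though: this protects stored bits, while the *logic* doing the encoding and decoding is assumed to be error-free.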

https://www.vmware.com/content/dam/digitalmarketing/vmware/e... (throughout)

https://en.wikipedia.org/wiki/Tandem_Computers

https://www.intel.sg/content/dam/doc/product-brief/high-perf... (page 5)

Not that it is impossible for the OP to be right, but the argument he is using was once applied to classical computation, and it ultimately turned out to be wrong in that context.

https://science.nasa.gov/science-news/science-at-nasa/2005/1...

What we can’t do is have a set of CPUs where roughly every 1000th instruction fails and hope that plugging a bunch of computers together to check each other’s progress will work. The specific issue is that the checking is wrong so frequently that you can’t “win” on a problem involving trillions of instructions by adding more computers. The overhead of “checking the checkers” just blows up.

What’s interesting about quantum computation is this is exactly what is proposed - have qubit error rates of (I think) around 0.1% and lots of measurement, classical compute, and feedback control to keep the computation “on track”. The whole scheme relies on the error rate for all of that classical compute step to be negligible.

More generally, nobody needs "perfect" classical computing either to make quantum computing work. Given a (quantum or classical) processor with some degree of error, the idea behind these techniques is to "boost" that into a processor with arbitrarily small error. It just turns out that with modern classical processors the error is so small that we suffer it rather than pay the cost of using these techniques.

[1] https://www.cs.ucf.edu/~dcm/Teaching/COP5611-Spring2013/Pape...

This is distinct from classical computers, where you can describe a bit without needing to describe the other bits in the computer. You cannot describe a qubit (in a way that's computationally useful, at least) without describing all of them.

To describe a set of classical bits completely, you need a probability distribution over the whole system (including possible classical correlations induced by the crosstalk that OP spoke about). That probability distribution is a stochastic vector that has 2^n real positive components if we have n bits.

To describe a set of qubits completely, you need at least a "quantum probability distribution", i.e. a ket that has 2^n complex components (I am skipping the discussion of density matrices).

Both the classical and the quantum description of the bits require exponentially many real numbers. This exponential behavior on its own is not enough to explain "quantum computational advantage". The exponential behavior is well known in plenty of classical contexts (e.g. under the name "curse of dimensionality") and is classically tamed with Monte Carlo modeling.
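The dimension counting above is easy to check directly. A minimal sketch (assuming NumPy is available), with n = 4 bits/qubits:

```python
import numpy as np

n = 4  # number of (qu)bits

# Classical probabilistic description: a stochastic vector of 2^n
# non-negative reals summing to 1.
p = np.random.rand(2 ** n)
p /= p.sum()

# Quantum description: a ket of 2^n complex amplitudes with unit norm.
psi = np.random.rand(2 ** n) + 1j * np.random.rand(2 ** n)
psi /= np.linalg.norm(psi)

assert p.shape == psi.shape == (2 ** n,)   # both grow exponentially in n
assert abs(p.sum() - 1) < 1e-12            # probabilities sum to 1
assert abs(np.vdot(psi, psi) - 1) < 1e-12  # amplitudes have unit norm
```

Both descriptors have 2^n components; the difference is that the quantum amplitudes are complex (and in particular can be negative), which is what the interference arguments below rely on.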

Scott Aaronson's lecture notes cover that quite well.

In the end, the issue of crosstalk is modeled in a very similar way in the quantum and classical cases, and if it forbade the existence of quantum computers, it should forbid the existence of classical ones as well. In both cases, it is the use of linear binary (or stabilizer) codes that solves the issue.

If I have understood your argument correctly, I don’t think the conclusion follows from the premise, because crosstalk is not fundamental to classical computing. By that I mean that, for a given logical CPU design, crosstalk can be (and is) minimized through physical design characteristics (e.g. grounding, wire spacing, etc.) without reducing the size of the computation. The same cannot, afaik, be said of quantum computers, where the interaction between qubits is essential to the computation.

Of course, qubits are drastically more difficult, but the principle is the same.

Basically what I mean is that classical computing can split things up during storage or routing to reduce coupling, but quantum computing can’t do this, because the qubits must stay entangled to be useful.

And the way we mathematically express and model classical correlations in classical error correcting codes is virtually the same as the way to model quantum correlations (entanglement) in quantum error correcting codes.

All of this with the usual disclaimer: the quantum case is much more difficult from an engineering standpoint.

For example, in PCIe each lane can only carry one bit at a time in each direction. In typical consumer equipment there are at most 16 lanes of PCIe (e.g. a graphics card socket), meaning there can only be at most 16 bits (2 bytes) on the wire at any given time, but the bits are sent at a very high frequency, allowing for high transfer rates. This only works because taking a block of, say, 256 bytes and sending it one bit at a time (or 16 by 16) over the wire doesn't lose information.
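As a toy illustration of that last point (with made-up helper names, not any real PCIe API): splitting a byte block into 16-bit "clock cycles" and reassembling it is perfectly lossless.

```python
# Toy model: send a block of bytes over 16 one-bit "lanes", 16 bits per
# clock cycle, then reassemble it losslessly on the other side.

def serialize(data: bytes, lanes: int = 16):
    # Little-endian bit order within each byte.
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    bits += [0] * (-len(bits) % lanes)  # pad to a whole number of cycles
    cycles = [bits[i:i + lanes] for i in range(0, len(bits), lanes)]
    return cycles, len(data)

def deserialize(cycles, nbytes, lanes: int = 16):
    bits = [b for cycle in cycles for b in cycle]
    out = bytearray()
    for i in range(nbytes):
        out.append(sum(bits[8 * i + j] << j for j in range(8)))
    return bytes(out)

payload = bytes(range(256))
cycles, n = serialize(payload)
assert deserialize(cycles, n) == payload  # nothing lost going 16 bits at a time
```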

- Both classical probabilistic bits and qubits need exponential amount of memory to write down their full state (e.g. stochastic vectors or kets). The exponential growth, on its own, is not enough to explain the conjectured additional computational power of qubits. This is discussed quite well in the Aaronson lecture notes.

- Entanglement does not have much to do with things being kept physically in contact (or proximity), just like correlation between classical bits has little to do with bits being kept in contact.

- Nothing stops you from sending/storing entangled bits one by one, completely separate from each other. If anything, the vast majority of interesting uses of entanglement very much depend on doing interesting things to spatially separate, uncoupled, disconnected, remote qubits. Sending 1000 qubits from point A to point B does not require a bus of width 1000, you can send each qubit completely separately from the rest, no matter whether they are entangled or not.

- Not even error correction requires you to work with "big" sets of qubits at the same time. In error correcting codes, the qubits are all entangled, but you still work with qubits one by one (e.g. see Shor's syndrome measurement protocol).

- I strongly believe your first sentence is too vague and/or wrong: "What makes qbits valuable is that they scale superlinearly the more of them you have entangled together". As I mentioned, the exponential growth of the "state descriptor" is there for the classical probabilistic computers, which are believed to be no more powerful than classical deterministic computers (see e.g. derandomization and expander graphs). Moreover, Holevo's theorem does basically say that you can not extract more than n bits of information from n qubits.

- Another quote: "Since they have to remain entangled to do valuable work, you can't physically separate them to prevent interference" -- yes, you can and very much do separate them in the vast majority of interesting applications of entanglement.

Quantum mechanics is a fickle thing. Schrödinger's cat has not been observed ;-) because scaling kills the quantum mechanical properties.

My (entirely theoretical) PhD project dealt with a two-dimensional electromagnetic cavity with one 'perfect' mirror (100% reflecting) and one 'imperfect' one, say 99.999999999% reflecting.

My result was that if you started with a 'squeezed' vacuum, the 'squeezing' disappears in just a few round trips, due to the bath/loss of the non-perfect mirror. My 'gut feeling' tells me that if we ever have enough qubits to actually factor something bigger than 2x2, the loss/bath effect will destroy any useful operation of the quantum computer.

So I am in the 'will not happen' category. (Not in 1, not in 5, not in 30 years, not in 500 years).

This is different from classical error correction: as far as I understand Shor's algorithm, there is no 'get N approximate answers, combine them, and get a better answer' step in there, while classically you could do 100 measurements and get a better approximation than with just one.

You are also somewhat wrong about combining N approximate answers. It is correct that the "repetition code" approach (repeat the same calculation a bunch of times and average the results) does not work for quantum computers. However, there are quantum error correcting codes which do appear to work, although they require sufficiently small initial error rates (so-called thresholds) in order to help.

Essentially the only reason anyone thinks that useful quantum computation is possible is because of things called threshold theorems, which state that as long as the noise in each qubit is less than some small but non-zero error rate, you can add more qubits and use quantum error correction to make your computation arbitrarily precise. In other words, as long as you're below the threshold rate, quantum computers scale well.

Of course those threshold rates are very, very small, and creating significant numbers of qubits which are below the threshold rates is incredibly difficult, but theoretically it works.
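The classical "repeat and majority-vote" idea discussed above is easy to simulate: assuming each copy of a bit flips independently with probability p, voting over more copies drives the logical error rate down.

```python
import random

# Repetition-code sketch: each stored copy of a bit flips independently
# with probability p; a majority vote over n copies suppresses errors.
# (This is the naive averaging idea that does NOT carry over to qubits.)

def transmit_and_decode(bit, copies, p, rng):
    received = [bit ^ (rng.random() < p) for _ in range(copies)]
    return 1 if sum(received) > copies / 2 else 0

rng = random.Random(0)  # fixed seed for a reproducible simulation
p, trials = 0.1, 20_000

err3 = sum(transmit_and_decode(0, 3, p, rng) != 0 for _ in range(trials)) / trials
err9 = sum(transmit_and_decode(0, 9, p, rng) != 0 for _ in range(trials)) / trials

assert err3 < p      # three copies already beat a single noisy bit
assert err9 < err3   # more copies, lower logical error rate
```

Quantum codes achieve a similar suppression below the threshold, but via syndrome measurements rather than cloning-and-averaging, since unknown quantum states cannot be copied.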

Last I heard, getting below that threshold was going to take one or two orders of magnitude of noise improvement. That seems unlikely.

Say you were at a VC presentation and the company said that they had this really great system, and the only thing stopping their immense success was the requirement to reduce the noise by an order or two of magnitude. Oh, and by the way, we already have the system very close to absolute zero. So you ask them what they are planning to do, and they tell you that they don't have the faintest idea. Noise is always the ultimate limit on the information that can be obtained from a system. The most reasonable interpretation of the situation is that the technology doesn't exist and that there is no reason to think it ever will.

But when it comes to quantum computers the optimism is boundless. I am not sure why.

I think part of the reason for optimism is that there are a lot of different ways to make a quantum computer. Ion traps, linear optics, resonant superconducting circuits, topological quantum computing (if anyone ever actually discovers an anyon) and many more.

Different groups are working on wildly different directions right now, e.g. Google and IBM are doing superconducting stuff, Microsoft loves topological stuff, researchers at various places are doing trapped ions, and at least one company (PsiQuantum) is working on linear optics.

It certainly wouldn't be surprising if several of these approaches fail to work, but the hope is that they don't all fail.

I mean, I know it's a different context, but in general a one-to-two order of magnitude improvement in hardware capability does not sound crazy on its face for most of the computer industry. Most computer hardware has improved by that much over very short time periods historically.

As for your conspiracy: The conclusion of that would be to continue using hybrid constructions.

Though, and I know crypto better than physics, I'd consider it highly unlikely. Creating backdoors that others won't find is next to impossible. Why do I say that? Because we have some historical evidence of how "NSA wants to plant a backdoor into an algorithm" plays out: usually the backdoors have been discovered quickly. They can still be successful, because as we saw with Dual EC DRBG, it was discovered quickly, yet nobody really cared and industry still used the algorithm.

But something like that won't happen in a transparent process. You can be sure that every expert in the field has had a look at CRYSTALS-Kyber. If anything about it were suspicious, we would know.

> As for your conspiracy: The conclusion of that would be to continue using hybrid constructions.

As long as ordinary crypto does not get deprecated.

Anyway, the number of responses made me curious about this novel CRYSTALS-Kyber. Do you have any recommendations for a good introductory text that explains it from a bird's-eye view?

On that note, just this month Tutanota emailed customers that their Secure Connect product is being turned off at the end of next month in order to focus developers on quantum-secure encryption solutions.

This comes at a time when there appear to be precious few hosted E2EE webform-submission options that don't involve either a) big tech or b) fly-by-night operations. Tutanota was a happy medium, and is apparently getting out of that market.

It can make one wonder what kind of pressure might exist to turn off a quite good, working solution to an actual problem. If one didn't know better, it could seem that blaming the need for quantum is just a distraction.

The GP is not the first to make the observation in a natural line of inquiry. HN guidelines ask to assume good faith, and surely we know to try to.

It seems weird to assume what's possible based on current technical obstacles. If you trace the development of CPUs, or many other technologies, many people with deep technical knowledge were certain about limits which we have long since passed. This bias even has a name, which I forget.

I only accept that something is impossible when the claim is firmly tied to an extremely well-established physical law with no "wiggle room."

For example, I accept that faster-than-light travel and inertialess propulsion are both impossible. If either of these things were shown to be possible, it would mean that there are huge errors or oversights in the most well established areas of physics.

I also accept the conditional impossibility of things that are just provably beyond our current ability for fundamental reasons, like a Dyson sphere. I don't know of any physics that says you could not build one, but for us it'd be like dust mites building the international space station.

For everything else I leave the door open. People have historically underestimated creativity.

For the first types of things, taking preparatory steps would be irrational. We don't need to plan for the arrival of FTL travel because we have no reason to think it will ever arrive.

For the latter types of things, preparing does make some sense as long as it's not unreasonably expensive. We do have reason to believe that a large quantum computer might be possible, so mucking around with a bit of code to defend our security against a surprise seems rational.

The same will be true of PQ signing schemes, meaning that a backdoor would be predicated on the USG believing that they have some truly remarkable NOBUS breakthrough in PQC. That feels unlikely to me; NSA interventions in cryptographic design have historically gone in the opposite direction[1].

(This is separate from the actual design question. I don't know as much about stateless PQ signing schemes, but the stateful ones are mostly "boring" hash-based cryptography that's well understood to be strong in both settings.)

[1]: https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's...

I'm not sure I'd say that, given that there are some other designs and things that have gone on[1][2] - particularly the Dual EC debacle. They have a history of helping make suspect or downright compromised crypto if they think they can get away with it. That said, it does look like they avoid doing it to anything that gets US government approval for internal use, but it's difficult to say how far they would actually go to get a backdoor out into the world that would let them look at other people's secrets.

[1] https://en.wikipedia.org/wiki/Export_of_cryptography_from_th... [2] https://en.wikipedia.org/wiki/Dual_EC_DRBG

"As such, as quantum technology advances, there is the potential for future quantum computers to have a significant impact on current cryptographic systems. Forecasting the future is difficult, but the general consensus is that such computers might arrive some time in the 2030s, or might not arrive until 2050 or later." quoted from:

https://datatracker.ietf.org/doc/draft-ietf-pquip-pqc-engine...

That's under the constraint that we are correctly modelling the math of QC. Isn't it possible that there are gaps between our models of QC and how it works in reality, such that these algorithms can't actually offer any speedup over classical approaches in the real world? Or similarly, even if they do work, maybe it's just impossible to build a computer with sufficiently many qubits to outperform classical approaches. Either way, there's a massive gap between theory and practice, with no indication we're bridging it in meaningful ways.

You're right.

Those questions are legit, and we don't know.

It could even turn out that the post-quantum procedures we choose in our time produce messages quickly breakable by classical computers. And that eventually, one day, someone discovers this while trying to solve another problem and decides to warn everyone the right way (yay).

In such an unpropitious (but, again, we still don't know) scenario, how quaint we'll look, even if all that effort to protect us from the future was sincere.

If BQP=BPP, or if BQP did not accurately model a quantum computer, I think that would be a much more interesting result than an actual working quantum computer. It would be world-shattering.

There’s plenty of math that exists purely in the virtual realm with no connection to physical reality. Math is a language to describe any possible universe. That doesn’t mean anything we say in it necessarily applies to our universe.

I would consider that world shattering. It would suggest significant flaws in our understanding of the universe which would be very exciting.

Honestly, I can't think of a more earth-shattering discovery. It would be on par with aliens landing and saying "we come in peace."

> There’s plenty of math that exists purely in the virtual realm with no connection to physical reality.

Obviously.

The earth shattering part is if quantum physics goes from an accurate description of most of the universe to one that isn't.

What you are basically saying is any theory could be wrong. Well duh, but that describes literally all earth shattering scientific discoveries.

To your point about PQC being used exclusively, post-quantum encryption methods are designed to be resistant to both quantum and classical attacks. That is one of the key stated goals of the NIST post-quantum cryptography program.

The McEliece cryptosystem[1] is one of the finalists in the PQC competition, and it's also quite old - developed in 1978. It didn't face as much scrutiny as RSA or ECC due to its large key sizes, which resulted in nonexistent adoption.

My understanding is that all the other PQC candidates including Kyber are much newer and far less studied.

[1] https://en.wikipedia.org/wiki/McEliece_cryptosystem

While I appreciate and respect the useful criticism, I would be careful with this kind of content:

Regardless of whether you read this somewhere or authored it: if you're right, then go ahead and show the world how we're wrong; if you're wrong, you're helping quantum attackers.

…a lot of interference.

As for QC, we don't seem to be running into any limits yet. It may well be that any limits are well above a useful number of qubits.

Are you suggesting that limiting interference will be a practical dead end that prevents advancement?

Either way that would be a pretty significant claim. There are lots of research directions being pursued and plenty of smart people think it’s worth trying.

This is a hunch I have. Regarding "plenty of smart people think it’s worth trying", I can only offer the analogy of the 14-15 puzzle, known at the time as the Boss puzzle, for which a substantial prize was promised to the first person who could solve it. A lesser-known proof that it is impossible surfaced decades later. There is a lot of inertia in academia along those lines, where grants depend on your ability to make a convincing argument that your path will solve the problem. This sets up PhDs to know only how to advance, not how to question, as the latter does not win the prize.

Of course it happens the other way around also - things are thought not to be possible and later realized through some unexpected result or insight.

But I wouldn’t guess that’s as common or as systemic.

My understanding of the more popular superconducting types is that a bit of superconductor is connected up to a junction that allows quantum tunneling of electron pairs. The number of electron pairs that tunnel through the junction determines the state and the state is read out by an exotic electrometer.

As for how the chip itself is made, as far as I know it's a relatively standard lithography process, except with added superconducting layers.

I think it will be pretty obvious when someone gets a quantum computer working.

Once the cat is out of the bag, everyone will rush to post-quantum cryptography and all that value will be lost in a relatively short period. Indeed, we already witnessed this in the 2010s following the Snowden revelations when big tech, in a concerted effort, adopted HTTPS. Now that is the standard.

For example, "The Fiscal Year 2022 budget appropriation included $65.7 billion for the National Intelligence Program, and $24.1 billion for the Military Intelligence Program."

Source: https://irp.fas.org/budget/index.html

If Bitcoin would become vulnerable, its value would collapse to zero overnight once it's known. There is limited amount of money anyone could extract before the value collapses.

That is a misleading claim. First: any quantum key cracker would need to be fast, since the operations would all have to be performed within the coherence time, so an attacker could race coins as they were spent, or perform small reorgs to steal coins even if they lost the initial race. Second: the majority of all circulating coins are stored in addresses which have been reused. Third: the common hashing scheme you mention is 160 bits, so in the presence of quantum computers it would only have 80 bits of security against second preimages just by using Grover's algorithm, and perhaps worse with more specialization (and, in fact, somewhat less considering multi-target attacks), which wouldn't and shouldn't be regarded as secure.

> If Bitcoin would become vulnerable, its value would collapse to zero overnight once it's known. There is limited amount of money anyone could extract before the value collapses.

Once it's known. There have been insecure altcoins where hackers skimmed them for many months without being noticed. It is indeed technically finite, sure, but large.

Interesting, do you have any statistics on that? But I guess with large exchange wallets, it makes sense.

> only have 80 bits of security against second preimages just by using grover's algorithim

True, but 80 bits are anything but trivial to brute-force, even for classical computers! I'm not that familiar with quantum complexity, but as I understand it, you'd still need around 2^80 quantum operations to brute-force a 160-bit hash.
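The 80-bit figure is just Grover's square-root speedup in action. A back-of-the-envelope sketch (the ops-per-second rate is a made-up illustrative number, not a claim about real hardware):

```python
# Grover's algorithm needs ~sqrt(N) = 2^(n/2) evaluations to search an
# n-bit space: a quadratic, not exponential, speedup on preimage search.

def quantum_security_bits(hash_bits):
    # Effective security level against a Grover-style preimage search.
    return hash_bits // 2

assert quantum_security_bits(160) == 80   # the 2^80 figure discussed above
assert quantum_security_bits(256) == 128  # why 256-bit hashes stay comfortable

# Even 2^80 sequential quantum operations is enormous: at a hypothetical
# 10^9 operations per second on a single device...
years = 2 ** 80 / 1e9 / (3600 * 24 * 365)
assert years > 3e7  # ...tens of millions of years
```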

And for more context: with P2PKH addresses, you are sending to the hash of the public key, and hashes are comparatively quantum safe.

[0] https://www2.deloitte.com/nl/nl/pages/innovatie/artikelen/qu...

The initial shared private key exchange could be done with more expensive, quantum-resistant cryptography, but the actual communication could be done through symmetric encryption.

https://www.inkandswitch.com/backchannel/

For the key exchange itself ("PAKE") maybe something like this: https://journal-home.s3.ap-northeast-2.amazonaws.com/site/ic...

And for the symmetric encryption: https://github.com/Steppenwolfe65/eAES

That's exactly how Signal's symmetric ratcheting works (with the addition of an asymmetric ratcheting step to limit the forward impact of a key compromise on either side, which you can't do symmetrically).
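The symmetric half of that ratcheting idea can be sketched as a simple one-way hash chain. This is a toy illustration of the concept, not Signal's actual implementation:

```python
import hashlib
import hmac

# Toy symmetric-key ratchet: each message key is derived from the current
# chain key, and the chain key is then advanced through a one-way function,
# so compromising today's chain key reveals nothing about past message keys.

def kdf(key: bytes, label: bytes) -> bytes:
    return hmac.new(key, label, hashlib.sha256).digest()

def ratchet_step(chain_key: bytes):
    message_key = kdf(chain_key, b"message")
    next_chain_key = kdf(chain_key, b"chain")  # one-way: can't go backwards
    return message_key, next_chain_key

ck = hashlib.sha256(b"initial shared secret").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet_step(ck)
    keys.append(mk)

assert len(set(keys)) == 3             # every message gets a fresh key
assert all(len(k) == 32 for k in keys)
```

The asymmetric ratchet step the comment mentions is what this sketch deliberately omits: a pure hash chain gives forward secrecy but cannot "heal" after a compromise, which is why Signal layers a Diffie-Hellman ratchet on top.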

> For the key exchange itself ("PAKE")

A PAKE requires a short shared secret. That does not usually exist in Signal's scenario.

> And for the symmetric encryption: [...]

Why that over regular AES?

> Why that over regular AES?

Well, I don't understand the exact details but it seems mostly to do with possible vulnerability to brute forcing if certain pseudo-random number generators are used:

https://eprint.iacr.org/2019/1208

> A PAKE requires a short shared secret. That does not usually exist in Signal's scenario.

I'm not too familiar with the details of how Signal's initial key exchange works, other than that from a user experience point of view it relies on people having shared phone numbers in at least one direction. So in practice this is kind of similar to the sharing of a one time password. Basically the users already need some kind of a "trusted outside channel" to exchange phone numbers.

From that vantage point, requiring people to "pair" in a secure context, most likely in person, wouldn't be that much different in practice from the way people usually exchange global public identifiers now (social media handles, phone numbers, etc.) over existing trusted channels. And a shared private key that you store in a kind of pet-name contact list is already conceptually similar to an ordinary address book, which I find kind of amazing.

The big difference being that you'd obviously want to keep this address book secret, whereas phones have made it incredibly easy and have normalized leaking your contacts to any app that requests it. In practice, preventing private key leaks would be a challenging part of building out something like backchannel. The shared secret petname address books would probably need to be application specific, used in a sandbox and encrypted at rest.

Maybe there could be some standard tools for securely using one application's channels to bootstrap connections on another; ideally it should be very, very hard for the shared secrets to leak, but easy enough for users to manage and be aware of them.

Ah, it's apparently a quantum-hardened variant of AES! But AES-256 is already considered quantum-resistant and is much better understood at this point. In other words, AES is really not the security bottleneck against quantum adversaries.

> I'm not too familiar with the details of how Signal's initial key exchange works, other than that from a user experience point of view it relies on people having shared phone numbers in at least one direction. So in practice this is kind of similar to the sharing of a one time password.

A user's phone number is public information; you can't use that as the password in a PAKE.

Signal has basically two ways of exchanging keys: Unauthenticated (in which case you trust Signal's service to really provide you your contact's keys, and not their own, which would allow them to MITM your chats), and "safety number comparison", for which you have to manually compare a short numerical string with each contact over a trusted channel (ideally in person). It's less convenient, but removes Signal's servers from the set of entities you need to trust.

Theoretically Signal could also offer a PAKE option to interactively authenticate a new contact remotely (if you do indeed have a shared password), but that's probably not easy to do in a way that's not prone to social engineering attacks, since you don't have a channel to communicate about what your password should be. (Remember that you don't trust the contact yet; an attacker could pick a password hint that they know the answer to as well!)

For sure, I just mean that people are already used to exchanging numbers, in person, to establish phone contact. So from a user experience point of view, exchanging a PAKE password for a shared secret isn't that much different from exchanging phone numbers. The resulting shared secret could be treated very much like a phone number, except it's like a secret phone number between you and that person.

> Theoretically Signal could also offer a PAKE option to interactively authenticate a new contact remotely (if you do indeed have a shared password), but that's probably not easy to do in a way that's not prone to social engineering attacks, since you don't have a channel to communicate about what your password should be. (Remember that you don't trust the contact yet; an attacker could pick a password hint that they know the answer to as well!)

This kind of thing makes the most sense to establish a secure connection with someone you already trust over an existing trusted channel. Much in the same way that you'd commonly get someone's phone number for the first time at a party, in person. (The physical environment of the party being the trusted channel.)

Assuming these generative fake voice/video models get to the point (over the next five years) where people cannot easily differentiate between a real and fake conversation over an arbitrary/untrusted digital channel, I'd assume that the most trusted channels for these kinds of exchanges will be IRL in person or established secure channels.

> A user's phone number is public information

One of the big arguments in the backchannel proposal is that global public identifiers are vulnerable to spam, fraud, impersonation, etc... and that maybe we don't actually need them as much as we assume? Maybe we can instead share per-contact shared secrets as a form of identity?

If you haven't watched it already, I highly recommend this talk about backchannel: https://www.youtube.com/watch?v=5ClLkuaoE-o

> "Instead of bits as in a classical computer, quantum computers operate on qubits. Rather than 0 or 1, qubits can exist in a superposition of states, in some sense allowing them to be both values at once."

'Instead of beads as in a classical abacus, our Quabacus operates on Quabeads! Rather than positions 0 or 1, quabeads can be in both positions at once!'

Beads that are simultaneously in both positions sound like a f$@!#g annoying glitch and not a feature - how does that help anyone record or calculate numbers? ('Would someone take a look at this broken Quabacus and resolve these g#$%!m glitching quabeads?!') It mocks the non-technical reader, who assumes they must have been given enough information to understand why it's faster and possibly how it works, but can't figure it out.

They have not been given enough. Does anyone who perhaps understands it better than I do want to take a stab at a new commonplace explanation, one that connects the dots between quantum superposition and performance (for certain calculations)?

* The state of a qubit has two basis states. Arbitrary states are linear combinations (=superposition) of these basis states. Importantly, the coefficients in the linear combination can be positive or negative.

* The state of N qubits has 2^N basis states. poly(N) gates can put the N qubits into a state in which each of the 2^N basis states has a non-zero coefficient.

* In quantum algorithms, we play with the coefficients. Basically, certain problems (like factoring) have structure such that you can change the coefficients so that the ones corresponding to the correct answer have magnitude approximately 1, while the other coefficients have magnitude 0. This is enabled by the fact that coefficients can be positive or negative, so by mixing them you can cancel some out.

* When you measure the qubits, the random answer you get is proportional to the coefficients squared. Hence, with high probability you get the right answer.

Where does the speed up come from? When you put the state of N qubits in the linear combination of 2^N basis states, and run it through the problem, that is akin to computing all answers in parallel. Like for factoring, checking all possible divisors to see which ones work. But to extract the answer out in the way described above, you have to do additional processing. This only works for some problems, with the right structure.
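The bullets above can be sketched with a toy state-vector simulator. This is my own minimal illustration (not how real hardware or any production simulator works): it shows that only N gates suffice to spread non-zero amplitude over all 2^N basis states.

```python
import math

# Toy state-vector simulator: n qubits = a list of 2^n real amplitudes.
# (Real simulators use complex amplitudes; real numbers suffice here.)
def hadamard_on(state, q):
    """Apply a Hadamard gate to qubit q of the state vector."""
    s = 1 / math.sqrt(2)
    out = [0.0] * len(state)
    for idx in range(len(state)):
        b = (idx >> q) & 1
        zero = idx & ~(1 << q)   # index with qubit q set to 0
        one = idx | (1 << q)     # index with qubit q set to 1
        # H: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
        out[idx] = s * (state[zero] + (-1) ** b * state[one])
    return out

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                   # start in |000>
for q in range(n):               # only n gates...
    state = hadamard_on(state, q)
# ...yet all 2^n basis states now carry non-zero amplitude:
print([round(a, 3) for a in state])  # eight entries of 1/sqrt(8) ≈ 0.354
```

The quantum algorithm's job is then to manipulate these 2^N coefficients so that the wrong answers cancel out before measurement.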

> In quantum algorithms, we play with the coefficients.

Is it accurate to say, more simply (simplicity being a high priority here), that 'we play with the probabilities' of collapse to one basis state or to the other - thus skipping the need to explain linear combinations or coefficients?

In fact, classical stochastic computational models have exactly this deficiency. You can't pump up probabilities of some unknown states. They can only tend towards a uniform mixture.

Everything is in the minus sign.
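A minimal sketch of that point (my own toy example, not from the thread): a classical stochastic update, whose entries must be non-negative, can only blur a distribution toward uniform, while a unitary with a minus sign can interfere a superposition back into a definite state.

```python
import math

def apply(m, v):
    """Multiply matrix m by vector v."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

s = 1 / math.sqrt(2)
hadamard = [[s, s], [s, -s]]        # quantum: entries may be negative
mixer = [[0.5, 0.5], [0.5, 0.5]]    # classical stochastic: non-negative only

# Classical: a definite state blurs into the uniform mixture and stays there.
blurred = apply(mixer, apply(mixer, [1.0, 0.0]))
print(blurred)                       # [0.5, 0.5]

# Quantum: the second Hadamard cancels amplitudes (s*s - s*s = 0) and the
# state returns to a definite outcome -- impossible without the minus sign.
focused = apply(hadamard, apply(hadamard, [1.0, 0.0]))
print([round(a, 9) for a in focused])  # [1.0, 0.0]
```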

The only way is to gear up on math to the level where you can, if not reason within the theory, then at least make sense of its stated conclusions.

If current quantum computers were scaled up to more qubits, could they break modern crypto? Or would we need both more qubits and a new quantum computer architecture?

I wonder how possible it is that IBM could have already gone further and are already cracking modern crypto in secret, e.g. funded by the NSA. Is that a crazy conspiracy idea, or actually a possibility?

I am investing in IBM under the assumption that this is an actual possibility. Their public QC roadmap actually looks like a realistic journey now.

I strongly believe that the NSA et al. currently have access to a very powerful quantum computer - likely constructed by IBM under contract. The game theory around this is such that it is impossible for me to accept that there are zero secret quantum computers in existence by now. There is too much to lose by not playing that game as hard as you can.

(I deleted my previous edit because I had misread part of what you wrote.)

The challenge of building QCs - as evidenced by billions of dollars worth of research in them - is many orders of magnitude more difficult than, say, the Manhattan Project. The latter put together the best of the best on the project. You are suggesting a scenario where a tiny fraction of the best of the best are secreted away, with many of their past collaborators unaware of their doings, and have successfully built a QC.

Meanwhile, the many brilliant people who are working publicly, with many billions of dollars of research funding, are only making very slow progress. It simply does not square.

For example, their Accumulo OSS suggests they're capturing and storing a lot of data to analyze later. The Ghidra OSS, being a best-in-class reverse-engineering tool, also suggests that a lot of their work revolves around finding zero-day vulnerabilities.

I would guess they probably don't have any that are actually useful and in production right now, but they probably have a few secreted away in development, so they will be ready to put them to use if/when they do become useful.

That depends on what you mean by "scaled up". There is a concept called "Quantum Volume", which roughly measures the largest square circuit (equal width and depth) a machine can successfully run.

https://en.wikipedia.org/wiki/Quantum_volume

'Simply' (it's never simple ;) ) adding qubits to a machine does not necessarily increase its Quantum Volume. Decreasing the noise typically will.

However, there is a noise threshold below which you can scale up mostly indefinitely. This is what quantum error correction is all about.

https://en.wikipedia.org/wiki/Quantum_error_correction

There is a paper

https://arxiv.org/abs/1905.09749

that goes into a clear discussion of how to build a quantum computer and the associated thresholds that would allow you to do so. There is a minimum number of qubits needed (assuming they work perfectly), and the paper analyzes how many noisy physical qubits you'd need under realistic assumptions to get error-corrected logical qubits at the required reliability.
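The threshold behavior has a classical analogue that's easy to compute: a concatenated 3-bit repetition code with majority voting. Quantum codes must also handle phase errors, so this is only the flavor of the argument, not the paper's construction:

```python
def logical_error_rate(p):
    """A 3-bit repetition code with majority vote fails iff >= 2 bits flip."""
    return 3 * p ** 2 * (1 - p) + p ** 3

# Below threshold: each level of concatenation (encoding each bit of the
# code with another copy of the code) suppresses errors doubly exponentially.
p = 0.01
rates = [p]
for _ in range(3):
    rates.append(logical_error_rate(rates[-1]))
print(rates)  # roughly [1e-2, 3e-4, 2.7e-7, 2e-13]

# Above threshold, "correction" actually makes things worse:
print(logical_error_rate(0.6) > 0.6)  # True
```

The quantum threshold theorem makes the same kind of claim: once physical error rates drop below a code-dependent threshold, adding more (noisy) qubits drives the logical error rate down rather than up.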

Protecting against future attackers and hardware is an incredibly forward-thinking thing to do, bravo. How long into the future is an achievable and desirable duration for encryption (barring any rapid, unforeseen paradigm shift)? If ten years is acceptable for declassification of standard documents in the US, is this a reasonable target for day-to-day Signal chats?

I don't think "years of expected security" (as used to be popular for e.g. RSA key lengths for some time) is a meaningful metric anymore:

AES-256 and elliptic curve encryption are resistant against classical attackers until beyond the heat death of the universe, so their "time of security" is, for practical purposes, infinite.
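The arithmetic behind that claim (my own back-of-the-envelope, not from the comment):

```python
# Expected trials to brute-force a 256-bit key is 2^255; grant the attacker
# an absurdly generous 10^18 key trials per second.
avg_trials = 2 ** 255
rate = 1e18                       # keys per second (hypothetical)
seconds_needed = avg_trials / rate
universe_age_s = 4.35e17          # ~13.8 billion years in seconds
print(seconds_needed / universe_age_s)  # ~1.3e41 universe lifetimes
```

Even shaving dozens of orders of magnitude off the attack rate leaves the answer "effectively forever", which is why "years of security" stops being a useful knob.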

I'd expect that, for quantum-safe asymmetric algorithms as well as for AES, there is a similar number corresponding to fundamental physical infeasibility, and then we can also just pick that rather than any low or high number of years.

Ah! My understanding is out of date. Thank you for the detailed answer.

At Tutanota we also use the Signal protocol to build post-quantum secure encryption for email and drive: https://tutanota.com/blog/pqdrive-project

Post-quantum secure encryption can't be developed early enough. In the end all our data today will be at risk of being decrypted in the future - unless we secure it now!

>PQXDH provides post-quantum forward secrecy and a form of cryptographic deniability but still relies on the hardness of the discrete log problem for mutual authentication in this revision of the protocol.

So that's why active MITM with a contemporary quantum computer is a concern mentioned in the blog post. Of course it isn't of any concern currently (since no one has the hardware to exploit this), but I'm curious why they couldn't fit the CRYSTALS-Kyber method for mutual auth into this hybridized implementation? Performance concerns?

Signal fingerprints, which users can manually verify in person or over a trusted channel, are just hashes over the public keys of both users involved – and if these keys change (e.g. due to a quantum upgrade), the format would need to change as well.
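Signal's actual safety numbers are iterated SHA-512 digests over identity keys and user identifiers; the toy sketch below (function and key names are my own) just shows why a fingerprint computed over both parties' keys must change when either party's key material does:

```python
import hashlib

def toy_fingerprint(pubkey_a: bytes, pubkey_b: bytes) -> str:
    """Hash both parties' public keys; sorting makes the result
    order-independent, so both users derive the same value to
    compare in person or over a trusted channel."""
    material = b"".join(sorted([pubkey_a, pubkey_b]))
    return hashlib.sha256(material).hexdigest()[:16]

alice, bob = b"alice-classical-key", b"bob-classical-key"
fp = toy_fingerprint(alice, bob)
assert fp == toy_fingerprint(bob, alice)   # both sides agree

# Adding key material (e.g. a post-quantum key) changes the value,
# so previously verified fingerprints would no longer match:
fp_pq = toy_fingerprint(alice + b"|pq-key", bob)
print(fp != fp_pq)  # True
```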

Update: Seems like that's actually due to a fundamental restriction of the quantum-safe primitives used and is addressed in the technical specification [2]:

> The post-quantum KEM and signature schemes being standardized by NIST [...] do not provide a mechanism for post-quantum deniable mutual authentication [...]

Seems like Signal's neat trick of using Diffie-Hellman in a three-way manner [3] doesn't work here, since the primitive used (FIPS 203, [4]) is only a key encapsulation method, and FIPS 204 only offers "regular" post-quantum signatures of the non-deniable kind.

Signal highly values deniability, and in this version they seem to have prioritized that in favor of quantum-safe mutual authentication.

[1] https://support.signal.org/hc/en-us/articles/360007060632-Wh...

[2] https://signal.org/docs/specifications/pqxdh/#active-quantum...

[3] https://signal.org/docs/specifications/x3dh/

[4] https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.203.ipd.pdf

However, I'd like to mention that using usernames instead of phone numbers has been met with the classic "soon™" response for years now. When will they actually do it? This is the only thing I really, really dislike about Signal - their lack of communication on oddly specific things. Like them holding back the codebase for months on GitHub so as to not spoil the.. Surprise! MobileCoin!

However, moving from phone numbers to username seems very tricky to me.

- They have built their system around the idea that they can use the address book that already exists in users' phones and not learn about it. It works (at least in theory); it's just that some people don't trust them for that. Or some people have a threat model that is incompatible with that, I suppose. Changing their whole system for a very small minority of very vocal users is a risk. Also, if going for usernames makes them collect more metadata, then other users will complain (or maybe even the same ones who currently complain about the phone numbers).

- Most people use WhatsApp, of which Signal is an improvement in terms of privacy. Those people don't give a damn about sharing the phone number, so the reason why they don't move to Signal is surely not this (because Signal certainly doesn't do worse with the phone number). So changing that would be a risk.

Moving to usernames is a cost, and a risk. All for a few very vocal users who probably have a threat model that requires it. I would understand if they never moved to username, and I would be fine with it.

Pick your platform: https://mobile.twitter.com/Th3Zer0/status/166107804719636481... https://ch.linkedin.com/posts/dr-angie-qarry-397538127_add-k... https://chaos.social/@luc/111048883207848400 (disclosure: the latter is myself; there was another Mastodon post I'm pretty sure, but when I search for PQXDH there it only shows my own post)

The blog doesn't mention it, but based on a code comment, it seems that ~two months from now the new key fingerprints will become mandatory for peers to remain trusted after you update your client.

From the blog post:

> We want to extend our sincerest thanks and appreciation to all the people who contributed to the development of this protocol upgrade. This includes the cryptographic research community, the Kyber team, and the following people who directly contributed to our whitepaper

All that behind closed doors, apparently.

There was scarcely a mention of PQXDH to be found on the web besides the Signal source code and the handful of people that picked up on it on social media. A GitHub ticket about adding post-quantum support was responded to with "we have no announcements to make" and then closed due to inactivity. I suppose one only needs so many cooks, but why not have this whitepaper, the ideas going into the protocol design, the timeline, whatever goes into this security decision for an open source app visible, even if only read-only? Feels more like source-available than open-source spirited, but I guess that's in line with "the ecosystem is moving" (Moxie's talk where he says that they can do better without a community developing more clients, integrations, federation, etc.).

...and this is defeated by changing our algorithms?

What's the use in developing quantum computers then?

For example, you can do backups or media exports from Signal, but the user isn't given any control over where they are stored. If you want to dump them to an SD card in your phone, for example, you'll have to find them in your primary storage and then copy them manually to the SD card. Bizarrely, voice messages are dumped into a 'Music' folder even though they're labeled as 'audio' within the Signal UI.

The tinfoil hatters have been proven right again and again.

Why not take the precautions we can to protect today's data from being fed into some AI in a few years, decades, or whatever?