Google Quantum AI

(quantumai.google)

187 points | by segasaturn 13 days ago

19 comments

  • fooker 13 days ago
    The fact this prize exists is admitting that no one has figured out a use for quantum computers.

    I have heard this mentioned several times in the last decade or so: "The only thing a quantum computer definitively does better than a classical computer is simulating a quantum computer."

    Whether this capability is useful is up in the air.

    Note that in practice, classical computers are going to be better at factoring numbers for the foreseeable future.

    • iaseiadit 13 days ago
      There have been XPRIZE competitions for vehicle efficiency, oil spill technology, more efficient rockets, health sensors, AI systems, genomics, etc.

      Whether or not quantum computers have practical applications, the prize itself is not evidence of that.

      • throwup238 12 days ago
        > There have been XPRIZE competitions for vehicle efficiency, oil spill technology, more efficient rockets, health sensors, AI systems, genomics, etc.

        All of which are based on existing technologies that have been delivering for decades if not an entire century (vehicle efficiency). Even something as nebulous as "AI systems" has been around for twenty years in the form of Google's original semantic search capabilities.

        This "Quantum AI" prize, however, is a solution in search of a problem.

    • thomk 13 days ago
      There are plenty of well-documented uses for quantum computers; the hardware is just too nascent to fully accommodate them. The most powerful quantum computers today still have only a little over 1,000 qubits.
      • fooker 12 days ago
        I don't think this is totally accurate.

        If you have significantly better quantum computers, you can solve realistic problems, yes.

        But what's not being spelled out here is that as far as we know classical computers will still totally smoke them unless you allow a large probability of inaccurate results.

        And if you are fine with inaccurate results, classical randomized algorithms set a much harder bar to beat.

      • germandiago 13 days ago
        What is the benchmark, in number of qubits, for something useful in real-world applications?
        • vtomole 13 days ago
          20 million physical qubits to break RSA 2048: https://quantum-journal.org/papers/q-2021-04-15-433/.
        • ziofill 13 days ago
          Physicist here. It highly depends on a bunch of factors (the type of qubits, the error correcting code, the error rate, the algorithm…), but a ballpark number for practical usefulness is 1 million physical qubits.

          Keep in mind that qubit requirements keep tumbling down as people work hard to squeeze out as much as possible from a limited number of qubits.

      • thegrim33 12 days ago
        Assuming what the public knows about is the state of the art, of course, which I doubt is a good assumption to make. I'm sure major governments have been funneling billions for years into secret projects to be the first to be able to break the (non-post-quantum) communications of everyone else.
    • becquerel 13 days ago
      Einstein did not have GPS in mind when he was developing his theories of relativity.
      • Sesse__ 13 days ago
        The theory of relativity does not in any way enable GPS. GPS is subject to (some) relativistic effects, but that is merely a source of bias, which could be corrected for with just an experience-based correction factor even if we did not understand relativity. If relativity did not exist as a physical concept, GPS would be easier, not harder or impossible. (I guess this misconception comes from xkcd in some form?)

        A perhaps more relevant example: Einstein did not have the cell phone camera in mind when developing his theory of the photoelectric effect.

        • razodactyl 1 day ago
          Heh. This is an interesting comment. Imagine if we didn't know about relativity - we would have discovered it as an annoyance/weird quirk as we ran into it.

          Reminds me of that story about the self-evolving chip that was tasked to learn how to classify tones and instead took advantage of specific flaws in its own package.

        • adrian_b 12 days ago
          A more relevant example would be that Einstein did not predict how to make a laser when he discovered the theory of the stimulated emission of radiation (the "SER" in "LASER").

          The photoelectric effect had been well known for decades; Einstein just gave a good explanation of behavior that was already known from experiments. It would have been equally easy for the designers of the first video camera vacuum tubes, which were used in early television, to design them based only on the known experimental laws, ignoring Einstein's explanation.

          On the other hand, the formulae of the stimulated emission of radiation, complementing the previously known phenomena of absorption and spontaneous emission, were something new, published for the first time by Einstein in 1917. They are the most original part of Einstein's work, together with the general relativity, but their practical applications are immensely more important for now than the applications of general relativity, which are limited to extremely small corrections in the results of some measurements with very high resolutions.

          The inventions of the masers and lasers after WWII would not have been possible without knowing Einstein's theory of radiation.

        • Shawnj2 13 days ago
          I’m pretty sure humans still knew about the speed of light/radio waves being limited to c, which is all you need to know to develop GPS. Time running slower on GPS satellites would become an issue eventually, though. Relativity does make it easier.
        • karmakaze 13 days ago
            How could he? There were no cell phones.
          • jlev1 13 days ago
            Yes, that was the point of the parent comment.
            • karmakaze 12 days ago
              Thanks for reaffirming Poe's law. I was amused by how 'cell phone' was taken as a given, when talking about a CCD sensor.
              • adrian_b 12 days ago
                I believe that most, if not all, cell phone cameras have cheaper CMOS sensors, not CCD sensors (which have a lower image noise, but they need a more expensive manufacturing process, less compatible with modern digital logic and more similar to the manufacturing processes used for DRAM).

                AFAIK the CCD technology continues to be used only in large-area expensive sensors inside some professional video cameras, in applications like astronomy, microscopy, medical imaging and so on.

                • karmakaze 12 days ago
                  Quite true; even full-frame DSLRs have typically used CMOS sensors for some time now.

                  CCD was the first thing that came to mind as 'charge' is right in the name.

                  Out of curiosity, I looked up invention dates: CCD 1969, CMOS 1963, and the CMOS sensor 1993 (quite a gap). I was playing with DRAM light sensitivity in the lab in the late 80's. I'm guessing CMOS had too much noise to be useful for a long while or something.

              • Sesse__ 12 days ago
                No, it was not taken as a given, it was an example of a very common product that digital image sensors enabled. I could have chosen e.g. digital cinema cameras, but they would not nearly have the same profound effect as cell phone cameras have had on society.
      • fooker 13 days ago
        Survivorship bias. There is a lot more science that has not panned out.

        I'm not saying quantum computing won't pan out, but if it is to, some fundamental piece is still missing.

        In contrast, this effort is trying to imagine and monetize GPS before relativity has been discovered.

        • leononame 13 days ago
          That's not the point. The point is that a lot of discoveries and inventions wouldn't have happened if it weren't for researching just for curiosity's sake. Research results will often be useless for product development or capitalism in general. However, focusing research on achieving specific goals only might actually take you further away from your goals. You can't focus on something you don't know exists, you have to discover it first.

          Maybe, when we have quantum computers, one nerd makes an accidental discovery that enables us to build a room temperature superconductor, and maybe not. But if we don't let people research freely what they're interested in and only things that will pan out, we're going to lose out on a lot of things.

          • fooker 13 days ago
            I agree.

            I didn't say quantum computing research is useless.

            My point is that we are not at a stage where we can offer a small prize and find monetizable uses for it.

            Fundamental research requires a lot more funding than this.

            • vtomole 13 days ago
              > In contrast this effort is trying to imagine and monetize GPS before relativity is discovered.

              The theory of relativity was discovered decades before GPS. Similarly, the theory of quantum computing was discovered in the 1990s.

              I agree with the sentiment: this is trying to find applications for a technology (a large fault-tolerant quantum computer) that doesn't exist yet. I just think relativity is the wrong comparison. And I don't think this effort is worthless just because we lack fault-tolerant quantum computers; theory alone can take one very far.

    • maxboone 13 days ago
      The origins of quantum computing give it a clear use: simulation of many-body systems.

      Number factorization, and anything else in BQP, is also a use for them.

      • bckr 13 days ago
        Looking forward to when leetcode problems require BQP complexity analysis
        • heyoni 12 days ago
          We might never retire but at least we’ll grind out leetcode in our 60’s while on adderall, ozempic, lions mane etc…and won’t feel a day over 55!
          • rrrix1 12 days ago
            Don't tempt me with a good time!
      • fooker 12 days ago
        N-body problems are usually nonlinear.

        Quantum-everything is linear; how is this distinction overcome?

        Also, doesn't solving this problem hint at quantum gravity?

    • DiogenesKynikos 13 days ago
      From Feynman's 1982 talk on quantum computing:

      "[N]ature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

      0. https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-...

      • constantcrying 13 days ago
        >and if you want to make a simulation of nature, you'd better make it quantum mechanical

        That is an absurd claim, if taken by itself. Most of nature relevant to us behaves classically. If you want to do simulations of a building or a car or an earthquake or the climate, modeling it as a quantum system would be absurd.

        • DiogenesKynikos 13 days ago
          He's obviously talking about the nonclassical regime, where the fundamental quantum nature of reality matters.
      • fooker 13 days ago
        You can simulate quantum mechanics with classical computers pretty well, as long as you stick to the Copenhagen interpretation.
        • throwaway63467 13 days ago
          No, even simulation of pure quantum states scales exponentially with the number of degrees of freedom. That's irrespective of any interpretation or invocation of non-unitary evolution like measurements; it's just pure simulation of the Schrödinger equation. If you simulate an environment, e.g. to incorporate wave-function collapse or measurement operations, you'll work with a master equation, whose cost grows with the complexity of the density-matrix simulation.
        • DiogenesKynikos 13 days ago
          Feynman's lecture explains why classical computers are terrible at simulating quantum systems.

          The basic problem is that the number of states grows exponentially with the size of the system. You very quickly have to start making approximations, and it takes an enormous amount of classical computing power and memory to handle even relatively small systems.
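The blow-up described above is easy to put numbers on: a full state vector for n qubits holds 2^n complex amplitudes. A back-of-envelope sketch (not tied to any particular simulator):

```python
# Memory needed to store the full state vector of an n-qubit system.
# Each amplitude is one complex number (16 bytes as two float64s).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    # 30 qubits already needs ~17 GB; 50 qubits needs petabytes.
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes")
```

This is why exact classical simulation stalls somewhere in the tens of qubits and approximations become mandatory.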

          • eru 13 days ago
            Yes, you have to make approximations and deal with (estimates of) errors.

            However, quantum computers also have to deal with noise and errors. So far, that's not very different.

            (If we manage to build error-correcting quantum computers, that might change.)

            • DiogenesKynikos 13 days ago
              The approximations don't just introduce small errors. To simulate quantum systems classically, you need to make drastic assumptions that fundamentally change the nature of the system.

              This is very different from, say, an approximation that adds in a small amount of noise that you can estimate. The approximations in simulating quantum systems classically can radically change the behavior of the system, in ways that you might not understand or be able to easily estimate.

        • eru 13 days ago
          Huh? Your simulation doesn't care about your interpretation. All interpretations of quantum mechanics make the same predictions.
        • amelius 13 days ago
          We currently can't even simulate a hydrogen atom.
          • fsh 13 days ago
            We can absolutely simulate the hydrogen atom. This paper lists the equations and fundamental constants that allow calculating the hydrogen energy levels with around 13 digits of accuracy: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.93....
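For a sense of scale: the leading-order (nonrelativistic Schrödinger, infinite nuclear mass) hydrogen levels follow from a one-line formula, and the discussion that follows in this thread is about the many further digits of corrections on top of it. A sketch using standard CODATA constant values:

```python
# Leading-order hydrogen energy levels: E_n = -m_e e^4 / (8 eps0^2 h^2 n^2).
# Constants below are standard CODATA values; this ignores all QED,
# relativistic, and nuclear corrections.
M_E  = 9.1093837015e-31   # electron mass, kg
E_Q  = 1.602176634e-19    # elementary charge, C (exact in SI)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
H    = 6.62607015e-34     # Planck constant, J*s (exact in SI)

def energy_level_eV(n: int) -> float:
    return -M_E * E_Q**4 / (8 * EPS0**2 * H**2 * n**2) / E_Q

print(energy_level_eV(1))  # ~ -13.606 eV (the Rydberg energy)
print(energy_level_eV(2))  # ~ -3.401 eV
```

The hard part, as discussed below, is not this formula but pushing the theoretical prediction to match spectroscopy at far higher precision.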
            • adrian_b 12 days ago
              That is not a simulation and those are not fundamental constants.

              Because a simulation is too difficult, there are approximate formulae for computing the quantities of interest, like the energy levels of the spectrum of the hydrogen atom.

              These approximate formulae include a large number of constants which are given in the paper linked by you and which are adjusted to match the experimental results.

              A simulation of the hydrogen atom would start from a much smaller set of constants: the masses of the proton and of the electron, their magnetic moments, the so-called fine-structure constant (really the strength of the electromagnetic interaction), and a few fundamental constants that depend on the system of units used. In SI these would be the elementary electric charge (determined by the charge of a coulomb in natural units), the Planck constant (determined by the mass of a kilogram in natural units) and the speed of light in vacuum (determined by the length of a meter in natural units).

              • fsh 12 days ago
                The inputs of the formulas for the hydrogen energy levels in the paper are: The Rydberg constant, the fine structure constant, the electron-to-proton mass ratio, the electron-to-muon mass ratio, the Compton wavelength of the electron, and some nuclear properties (charge radius, Friar radius, and nuclear polarizability). All inputs except the nuclear properties are as fundamental as it gets according to our current understanding of physics (note that the Rydberg constant and Compton wavelength are simple combinations of other physical constants). Nuclear physics is dominated by quantum chromodynamics which is not nearly as well developed as QED.

                The constants are determined by fitting the theory to the best available measurements (not only in hydrogen). This is exactly what fundamental constants do: They convert unit-less theory expressions into measurable quantities.

            • tsimionescu 13 days ago
              We know how to simulate it, but we can't actually do it: those equations require too much computation to solve with any known classical algorithm.
              • fsh 12 days ago
                This is completely wrong. My laptop can solve the equations in fractions of a second. I believe that with some optimizations it should be trivial to do the calculations on a 1960s mainframe.
                • adrian_b 12 days ago
                  That is not true.

                  You can solve such equations in fractions of a second only for very low precisions, much lower than the precision that can be reached in measurements.

                  For higher precision in quantum electrodynamics computations, you need to include an exponentially increasing number of terms in the equations, which come from higher order loops that are neglected when doing low precision computations.

                  When computing the energy levels of the spectrum of a hydrogen atom with the same precision as the experimental results (which exceed by a lot the precision of FP64 numbers, so you need to use an extended precision arithmetic library, not simple hardware instructions), you need either a very long time or a supercomputer.

                  I am not sure how much faster that can be done today, e.g. by using a GPU cluster, but some years ago it was not unusual for comparisons between experiments and quantum electrodynamics to take some months (though I presume the physicists doing the computations were not experts in optimization, so perhaps the computations could have been accelerated by some factor).

                  • fsh 12 days ago
                    I believe you might be confusing the QED calculations of hydrogen with those of the electron g-factor. Just have a look into the paper I linked (section VII). Most of the QED corrections are given analytically, no computers involved at all. You could in principle calculate this with pen-and-paper (and a good enough table of transcendental functions).

                    The most accurate hydrogen spectroscopy (of the 1S-2S transition) has reached a relative accuracy of a few parts in 1E15 which is around an order of magnitude above the precision of FP64 numbers.

                    • adrian_b 12 days ago
                      The "few parts in 1E15" claim is applicable only to the absolute value of the frequency of the 1S-2S transition, which is 1 233 030 706 593 514 Hz.

                      That absolute frequency is computed from the ratio between an optical frequency and the 9 GHz frequency of a cesium clock, which is affected by large uncertainties due to the need for bridging the gap between optical frequencies and microwave frequencies.

                      The frequency ratios between distinct lines of the hydrogen atom spectrum or between lines of the hydrogen atom spectrum and lines in the optical spectra of other atoms or ions can be known with uncertainties in parts per 1E18, one thousand times better.

                      When comparing a simulation with the experiment, the simulation must be able to match those quantities that can be measured with the lowest uncertainty, so the simulated values must also have uncertainties of at most parts per 1E18, or better per 1E19.

                      This requires more bits than provided by FP64. The extended precision of Intel 8087 would barely be enough to express the final results, but it would not be enough for the intermediate computations, so one really needs quadruple precision computations or double-double-precision computations, which are faster where only FP64 hardware exists.

                      I have not attempted to compute the QED corrections myself, so I cannot be certain how difficult that really is.

                      Nevertheless, section VII of this CODATA paper, and also the previous editions of the CODATA publications (some of which were more detailed), are not consistent with what you say, i.e. with the corrections being easy to compute.

                      For each correction there is a long history of cited research papers that would need to be found and read to determine exactly how they were computed. For many of them there is a history of refinements and of discrepancies between the values computed by different teams; some discrepancies were resolved by later, more accurate computations, while for others the right value was still not known at the date of this publication.

                      If the computations were so easy that anyone could do them with pen and paper, there would have been no need for several years to pass, in some cases, before the correct computation was validated, nor for the very slow improvement in the accuracy of the computed values in other cases.

                      • fsh 12 days ago
                        The accuracy of the hydrogen 1S-2S measurement was mainly limited by the second-order Doppler shift of the moving atoms (and to a lesser degree the AC Stark shift of the excitation laser and the 2S-4P quench light). The comparison between the laser frequency and the Cesium fountain clock was done with an optical frequency comb which introduces a negligible uncertainty (< 1E-19).

                        Isn't it fun to get your own field of expertise (wrongly) explained to you on the internet?

                        Edit: I never said that it is easy to derive the corrections listed in the CODATA paper. However, it is relatively easy to calculate them.

    • eru 13 days ago
      > I have heard this mentioned several times in the last decade or so : "The only thing a quantum computer definitively does better than a classical computer is simulating a quantum computer."

      And, hopefully, other quantum systems in general!

      I can see that helping with material science. That can have huge multiplier effects on the rest of the economy.

      But I agree with you, that other serious applications of quantum computers seem to be thin on the ground.

      • jacobsimon 13 days ago
        I thought one of the main advantages of QC was that it could (theoretically) solve existing problems that have exponential time complexity more efficiently. Isn’t the idea that it could make everything faster? Or did I fall for the marketing?
        • edanm 13 days ago
          > Isn’t the idea that it could make everything faster?

          If that is your understanding, then yes, you have unfortunately fallen for the very mistaken reporting on this.

          There are specific algorithms that Quantum Computing can solve faster than regular computers. Some of these algorithms are incredibly important, and a faster solution to them would cause serious changes to the world, namely Shor's algorithm, which would make factoring large numbers much faster. This would effectively break many/most encryption schemes in the world. So it's a big deal!

          Nevertheless, this isn't true in general - not every algorithm is known to have such a speedup. (I'm not an expert, I'm not sure if we know that there isn't a general speedup, or think there might be but haven't found it.)
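To make the Shor's-algorithm point above concrete, here is a toy sketch of its classical scaffolding. The period of a^x mod N is brute-forced below, which is exactly the exponential step a quantum computer would replace with a polynomial-time one; everything else in the algorithm is ordinary classical arithmetic:

```python
# Toy sketch: Shor's algorithm minus the quantum part.
# The quantum computer's only job is to find the period r of a^x mod N;
# the classical post-processing below then recovers the factors.
from math import gcd

def find_period(a: int, n: int) -> int:
    # Brute-force period finding: the step Shor's algorithm speeds up.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple[int, int]:
    r = find_period(a, n)   # "quantum" step, simulated classically
    assert r % 2 == 0       # this particular 'a' must give an even period
    f = gcd(a ** (r // 2) - 1, n)
    return f, n // f

print(shor_classical(15, 7))  # (3, 5)
```

For real key sizes the brute-force loop is hopeless, which is why the quantum period-finding subroutine is the whole point.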

          • cwillu 13 days ago
            Public key crypto is vulnerable to period finding, but symmetrical key cryptography is pretty safe from quantum computing advances.
            • immibis 13 days ago
              Quadratic speedup, IIRC - a 128-bit key can be found by brute force in (roughly) 2^128 steps by a normal computer, or 2^64 steps by a quantum computer. This applies to all brute force algorithms, so just make your keys and hashes twice as long as you think they should be, and you're good.
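A back-of-envelope comparison of the figures in the comment above (a sketch, not a benchmark): expected classical brute-force guesses versus the number of Grover iterations for a k-bit key:

```python
# Classical brute force vs. Grover's quadratic speedup, in query counts.
from math import pi, sqrt

def brute_force_tries(key_bits: int) -> float:
    # Expected classical guesses: half the keyspace on average.
    return 2.0 ** key_bits / 2

def grover_iterations(key_bits: int) -> float:
    # Grover needs ~(pi/4) * sqrt(N) quantum queries over a keyspace of size N.
    return (pi / 4) * sqrt(2.0 ** key_bits)

print(grover_iterations(128))  # ~1.4e19, i.e. roughly 2^64 queries
print(grover_iterations(256))  # comparable to brute-forcing a 128-bit key classically
```

Doubling the key length makes the Grover cost land back around the original classical cost, which is why "just use longer symmetric keys" is the standard mitigation.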
            • edanm 12 days ago
              This might be true (I'm not that up to date on whether there are symmetrical algorithms that negate the advantage of QC), but most of the internet / world commerce relies on public key crypto.
          • jacobsimon 12 days ago
            Huh, yeah, I guess I need to learn more. My layman’s assumption was that a lot of NP problems involving recursion or backtracking would benefit from it. From some quick googling it seems like people have already designed QC algorithms for traveling salesman etc. Isn’t that sort of meaningful, or am I missing something?

            I could totally see the argument that they are physically impractical and therefore not likely to be actually used vs parallelizing conventional computers.

            • edanm 12 days ago
              Well they are physically impractical now, but they might get practical in the future.

              But no, I'm fairly sure that QC in general isn't known to be able to solve NP problems. And since Traveling Salesman is NP-complete (iirc), I don't think there's a QC algorithm to solve traveling salesman in P (otherwise that would imply QC could solve all of NP in P, which is strongly believed not to be the case). Where did you see an indication otherwise?

              FWIW, my favorite CS blogger is Scott Aaronson, and the subtitle of his blog has always been: "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel." This reflects the very common misunderstanding of how QC works.

            • eru 12 days ago
              > Huh yeah I guess I need to learn more. My layman’s assumption was that it would help with a lot of NP problems that involved recursion or backtracking algorithm would benefit from it.

              Recursion is more of a property of how you write your algorithm, than of the problem.

              Eg Scheme or Haskell as languages have no built-in facilities for iteration, no for-loops, no while-loops; the only thing you get is function calls. (Well, that and exceptions etc.)

              However, you can build for-loops as libraries in both Scheme and Haskell. But they will be built on top of recursion.

              To make matters even more interesting: once you compile to native code, all the recursion (and also all the for-loops and other fancy 'structured programming' constructs) go away, and you are back to jumps and branches as assembly instructions.

              None of this changes whether a given class of problems is in P or NP or (almost!) any other complexity class. Just like getting a faster computer doesn't (usually!) change the complexity class of your problems.
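The loop-as-recursion point above can be sketched in a few lines. The `for_loop` combinator here is a hypothetical name, written in the style of languages like Scheme or Haskell that have no loop statements; note that expressing the loop recursively changes nothing about the problem's complexity:

```python
# A "for loop" built purely from recursion: iterate i from i to stop,
# threading an accumulator through a body function at each step.
def for_loop(i, stop, body, acc):
    if i >= stop:
        return acc
    return for_loop(i + 1, stop, body, body(acc, i))

# Sum 0..9 with the recursive "loop":
total = for_loop(0, 10, lambda acc, i: acc + i, 0)
print(total)  # 45
```

A compiler for a functional language would turn this tail call into exactly the jump-and-branch code a `for` statement produces.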

              > I could totally see the argument that they are physically impractical and therefore not likely to be actually used vs parallelizing conventional computers.

              Quantum computers are impractical now. But as far as we can tell, that's "just" an engineering problem.

              In the long run one of two things will happen:

              - Either engineering improves and we will be able to build good quantum computers (though perhaps still not better than classical computers for the same amount of effort)

              - Or, we discover new principles of quantum mechanics.

              Basically, quantum mechanics is one of our most successful physical theories, if not the most successful theory. And as far as we understand it, it allows quantum computers to be built.

              So either we will eventually manage to build them, or (more excitingly!) that's not possible, and we will discover new physics that explains why. We haven't really discovered new physics like that in a while.

        • HarHarVeryFunny 13 days ago
          It could make some things faster, not everything, but so far the number of useful somethings that people have come up with is very small.

          A quantum computer is not a general-purpose computer that can be programmed in the way we're used to - it's more like an analog computer that is configured rather than programmed. It's only useful if the problem you are trying to solve can be mapped onto a configuration of the computer.

        • jncfhnb 13 days ago
          The basic idea is that for a certain class of problems, you can have the quantum computer skip certain incorrect paths on the calculation by having their probability amplitudes cancel each other out.
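The cancellation described above is ordinary interference of amplitudes. A minimal single-qubit sketch: applying a Hadamard gate twice sends |0> to an equal superposition and back, because the two paths into |1> have opposite amplitudes and cancel exactly:

```python
# Two Hadamard gates in a row: the amplitudes for |1> interfere
# destructively and cancel, so H(H|0>) = |0> exactly.
from math import sqrt

def hadamard(state):
    a0, a1 = state
    s = 1 / sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)        # |0>
state = hadamard(state)   # (0.707..., 0.707...): equal superposition
state = hadamard(state)   # |1> amplitudes cancel: back to |0>
print(state)
```

Quantum algorithms are engineered so that paths leading to wrong answers cancel like the |1> amplitude here, while paths to the right answer reinforce.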
          • eru 12 days ago
            With emphasis very much on '_certain_ class of problems'. There are only a precious few problems quantum computers actually help with, as far as we know, even theoretically.
        • cwillu 13 days ago
          If a problem can be reduced to efficiently sampling from the summation of an exponentially large number of FFT's, then a quantum computer will destroy a classical computer.

          If a task can't be efficiently reduced to such a problem, then a QC probably won't ever help at all; the square-root time advantage from Grover's algorithm is too easily overwhelmed by simple engineering factors.

          • fooker 12 days ago
            What's stopping classical computers from doing this sampling?

            If it's sampling, you don't have to deal with the exponential here.

            • cwillu 12 days ago
              See https://www.scottaaronson.com/papers/optics.pdf

              “We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.”

              Which is the basis for the experiment discussed here: https://scottaaronson.blog/?p=5122

        • eru 12 days ago
          You got a useful response already, so let me give you a good response: https://www.smbc-comics.com/comic/the-talk-3
        • edgyquant 13 days ago
          This is true in theory, but I don’t think it’s ever been proven in practice
    • dudeinjapan 13 days ago
      It's a "solution looking for a problem." Nothing wrong with that. Lasers were theorized in 1917 by Einstein and invented at Bell Labs in 1958, but it took another 20 years before anyone had any idea what the heck to do with them. Now they are the backbone of the internet, among thousands of other applications. Patience, grasshopper.
    • tgsovlerkhgsel 13 days ago
      Wouldn't the D-Wave kinda-but-not-really quantum computer that came out a decade ago be ideal for AI? Annealing sounds like exactly the kind of problem that needs to be solved for ML training?
      • versteegen 13 days ago
        I can't find mention of it online, but back in 2013 Lockheed Martin purchased a D-Wave machine because they wanted to use it for "AI", which turned out to mean software verification (of fighter jets?) - I believe by searching for the possibility of some kind of invalid program state in a large program, which IIRC they couldn't manage to solve with standard solvers. But in that case the number of qubits in a D-Wave machine appears to me far too few for that to be possible, although I don't know the task exactly.

        If by "AI" you include operations research (as opposed to statistical machine learning), yes, adiabatic quantum annealing makes sense for certain optimisation problems which you can manage to naturally formulate as a QUBO problem. By 'naturally' I mean it won't blow up the number of variables/qubits, as otherwise a far simpler algorithm on classical computer would be more efficient. I know someone who published QUBO algorithms and ran them on a NASA D-Wave machine, while I myself was using a lot of annealing for optimisation, I didn't want to get involved in that field.

        But if you want to do machine learning on a large amount of data using quantum annealing, no, that's terribly badly matched, because the number of physical qubits needed is proportional to the amount of data you want to feed in.

      • eru 13 days ago
        Well, but to be useful, it would need to be better at this annealing than a classical computer that just uses good old (pseudo) random numbers.
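To make that classical baseline concrete, here is a minimal simulated-annealing solver for a tiny QUBO instance, the problem format D-Wave machines target. The toy objective and the schedule constants are invented purely for illustration:

```python
import math
import random

def simulated_annealing_qubo(Q, n, steps=20000, t0=2.0, t1=0.01, seed=0):
    """Minimize x^T Q x over x in {0,1}^n using single-bit flips,
    the Metropolis acceptance rule, and a geometric cooling schedule."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def energy(x):
        return sum(Q.get((i, j), 0) * x[i] * x[j]
                   for i in range(n) for j in range(n))

    e = energy(x)
    best_x, best_e = x[:], e
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)  # geometric cooling from t0 to t1
        i = rng.randrange(n)
        x[i] ^= 1                          # propose a single bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                      # accept the move
            if e < best_e:
                best_x, best_e = x[:], e
        else:
            x[i] ^= 1                      # reject: undo the flip
    return best_x, best_e

# Toy QUBO: reward bits 0 and 2, penalize setting bits 0 and 1 together.
Q = {(0, 0): -1, (2, 2): -1, (0, 1): 2, (1, 1): 0}
x, e = simulated_annealing_qubo(Q, n=3)
print(x, e)   # optimum is [1, 0, 1] with energy -2
```

Good old pseudo-random flips with a cooling schedule solve this instance almost instantly; a quantum annealer has to beat exactly this kind of cheap, mature classical baseline at scale.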
      • pyinstallwoes 13 days ago
        [flagged]
    • fragmede 13 days ago
      "Better"? There currently isn't a practical way to brute-force factor a 4096-bit RSA key with classical computing. Doesn't Shor's algorithm on a quantum computer make that feasible? That would be a big driver for quantum computing, though maybe my understanding is off.
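For context on why factoring is the flagship application: Shor's algorithm only needs quantum hardware for one subroutine, finding the period of a^x mod N. Everything else is classical number theory. A sketch, where the brute-force period finder stands in for the quantum step and is precisely the part that scales exponentially on classical machines:

```python
from math import gcd

def factor_from_period(N, a, r):
    """Classical post-processing of Shor's algorithm: given the period r
    of f(x) = a^x mod N, try to recover a nontrivial factor of N."""
    if r % 2 != 0:
        return None                      # need an even period; retry with new a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                      # trivial square root; retry with new a
    p = gcd(y - 1, N)
    if 1 < p < N:
        return p, N // p
    q = gcd(y + 1, N)
    if 1 < q < N:
        return q, N // q
    return None

def find_period_classically(N, a):
    """Stand-in for the quantum subroutine: brute-force the period of
    a^x mod N. This loop is what blows up in the bit length of N."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7
r = find_period_classically(N, a)        # r = 4 for this toy case
print(factor_from_period(N, a, r))       # (3, 5)
```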
      • adrianN 13 days ago
        It is still open whether we can build quantum computers with sufficiently low noise to run Shor‘s algorithm.
        • vtomole 13 days ago
          > It is still open whether we can build quantum computers with sufficiently low noise to run Shor‘s algorithm.

          This statement should delimit between theory and experiment.

          Theoretically, the question of building a quantum computer with low enough noise to run Shor's has been solved. In fact it was solved by Shor himself in the 1990s: https://arxiv.org/abs/quant-ph/9605011.

          Experimentally, we are just getting started with building quantum computers with low enough noise: https://blogs.microsoft.com/blog/2024/04/03/advancing-scienc....

          Experimentally, it will always be open whether a quantum computer can run Shor's until it actually runs Shor's. The point is that progress in the field has not stagnated since its founding.

          • adrian_b 12 days ago
            That paper of Shor just shows how a quantum computer with a large number of bad qubits can be the equivalent of a quantum computer with a small number of good qubits.

            The paper does not prove anything about the upper limit for the number of bad qubits that are physically realisable.

            There are doubts that this upper limit, which is unknown yet, is high enough for most practical applications.

            • vtomole 12 days ago
              > The paper does not prove anything about the upper limit

              Nothing can prove how many qubits can be realizable except trying to realize them. There will never be a theorem that says "The maximum number of qubits that can ever be controlled in a lab is X". That's what experiment is for.

              I will say, it's difficult to doubt that the upper limit to the number of qubits we can realize is infinity. We can now trap ~10 million atoms and efficiently control hundreds of them: https://www.nature.com/articles/s41586-023-06927-3. The question is not "Could we ever realize billions of qubits?". It's "When can we realize billions of qubits?". The answer could be decades or centuries, but as long as people are building these devices, it will happen eventually.

          • adrianN 13 days ago
            Yeah well you can rephrase my statement to say that the theory underlying quantum computers still hasn’t been validated experimentally, so it’s still open whether it models reality sufficiently well to allow us to run the algorithms it predicts on physical machines.
        • sgt101 13 days ago
          I've been going down this rabbit hole for a few weeks. Here is what I now think.

          Cai 2023 (https://pages.cs.wisc.edu/~jyc/Shor-Algorithm-with-Noise.pdf) showed that there is a definite noise floor for Shor's and gave a way to determine this.

          New algorithms are better than Shor's in that they are smaller in space and time and more resilient to noise. The state of the art is Ragavan et al.: https://arxiv.org/abs/2310.00899. Cai's insight about the QFT applies to this algorithm but is less damaging, as it scales the noise floor at n^(3/2) rather than n^2. It does seem to be believed that error correction can be implemented to bring the effective noise down, but it seems that this will be very expensive for large numbers - probably at the scale of previous estimates indicating that more than 20 million physical qubits would need to stay operational for about 8 hours. The gates required will have to be much better than the current ones (my reading is about an order of magnitude), but this is believed to be on the table for the future. I think these two assertions by the community are debatable, but honestly these folks are the experts and we have to go with what they say and respect them on this until it turns out that they were wrong.

          From what I read it was never the case that Shor's was thought to be any more noise tolerant than other algorithms, but Cai proved it, which is different. There is some debate about the insight, because a chunk of the community is like "this is not helpful and will make us even more misunderstood than we already are", but personally I find this attitude really irritating because it's part of the gradual reveal that QC people do about the practicality of the tech. I have no respect for anyone working in QC who doesn't say something like "there is no chance of a practical application in the next 30 years, and it's likely that there will be no practical application for 50 years at least. But this is really interesting and important physics, and the techniques we are developing may help us build new classical devices or other quantum devices in a shorter lifetime." This rider should be mandated for all grant applications and in every popular interview. Instead we hear (repeatedly) "whooo whooo QC will crack RSA any day now and help us do machine learning better than you can ever imagine". These folks say "welp, I never said that", but the reality is that they use the idea to seduce politicians and CEOs into giving up significant money that they would not if they had a clear idea of the technical problems with QC, money which could do a lot of good if spent on things like feeding kids and stopping climate change.

          This is introducing new security issues, because things like QKD and post-quantum are problematic in new ways. QKD has endpoints that are silently vulnerable to people with guns, pliers and evil intent. Post-quantum is very computationally expensive and errr suspicious in errr various ways. Introducing it is going to create gaps, and those gaps are unnecessary if current encryption is not threatened.

          Quantum ML is another thing that makes me really annoyed. The algorithms need data to be loaded (surprise), which is extremely problematic and slow, and they need quantum memory, which exists only at a very small scale; the tech used just seems wildly impractical to me. Yet folks are out there talking about it as if it's coming in the next 5, 10, 20 years! The reality is that we are about 6 major breakthroughs away from doing this, and once we have those 6 breakthroughs, expect a practical implementation to take at least another 10 years. Again, I have no problem with the theoretical exploration of the topic, but to simulate circuits and make claims about how they will work, without mentioning that you don't have a clue how to implement the system that will run them, is pretty bad behaviour.

          All they need to do is put a single line in the paper like "This technique has theoretical interest in Computer Science and Physics, but major open questions prevent it from being practical for real-world problem solving in the near or medium term." Then I have total respect. 100%. And I think that the work is interesting.

          But no, because "want money" and "don't give a shit about the damage we are doing".

    • CuriouslyC 13 days ago
      Quantum computers SHOULD destroy regular computers at knapsack-style problems and other combinatorial explosions. The problem there is that decomposing real-world problems into 128-bit combinatorial selection is really hard.
      • cwillu 13 days ago
        There's no reason to think that quantum computers will have any fundamental advantage at knapsack problems; the √n advantage from Grover's is not substantial when classical computers are going to be many orders of magnitude bigger than quantum computers for the foreseeable future.

        If a problem can be reduced to efficiently sampling from the summation of an exponentially large number of FFTs, then a quantum computer will destroy a classical computer.

        It's largely an open research problem whether there are useful quantum algorithms between those two problem classes.
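The point about the quadratic speedup can be put in numbers. The rates below are invented purely for illustration (real hardware differs wildly), but they show how a clock-rate gap eats a √n advantage for all but enormous search spaces:

```python
import math

# Assumed rates, for illustration only (not measured hardware numbers):
CLASSICAL_RATE = 1e9   # classical probes per second
QUANTUM_RATE = 1e4     # error-corrected Grover iterations per second

results = {}
for n in (20, 30, 40, 60):
    classical_s = 2 ** n / CLASSICAL_RATE                         # ~2^n probes
    grover_s = (math.pi / 4) * math.sqrt(2 ** n) / QUANTUM_RATE   # ~(pi/4)*sqrt(2^n) queries
    results[n] = "classical" if classical_s < grover_s else "grover"
    print(f"n={n}: classical {classical_s:.2e} s vs Grover {grover_s:.2e} s")

# With these rates the crossover sits around n ~ 33: Grover only pays off
# once the search space is already in the billions of items.
print(results)
```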

      • fooker 12 days ago
        >Quantum computers SHOULD destroy regular computers at knapsack style problems and other combinatorial explosions.

        You can get a Turing award if you can show this, even theoretically.

    • pyinstallwoes 13 days ago
      Also, hilariously, they mark stage 3 as "useful quantum computation", as in, none has happened yet. Because no quantum computer actually exists yet.
      • sebzim4500 13 days ago
        I don't get the joke. Has Google ever said anything to imply their current quantum hardware can do anything useful?
    • throwaway63467 13 days ago
      The big one is simulation of quantum systems, I don’t get how that’s not enough by itself? Classical computers will never be able to simulate quantum systems efficiently so we need quantum computers for that. In general achieving arbitrary precision control of quantum states is something that will open avenues in many areas like sensing as well, so just from that angle alone a working quantum computer would open a new branch of technological progress.

      But yeah it’s not guaranteed or even likely that quantum computers will be very useful for many computer science problems, though I’m also optimistic about that given the progress made in the last 30 years. Physics / science isn’t guaranteed to be easy or progress at a steady pace.

      • eru 13 days ago
        > In general achieving arbitrary precision control of quantum states is something that will open avenues in many areas like sensing as well, [...]

        And developing new materials.

      • jncfhnb 13 days ago
        What does it mean to “simulate” a quantum system and why is that useful
        • throwaway63467 13 days ago
          Simulate roughly means designing a quantum computer that mimics the dynamics of a real quantum system, then running it to learn how that system would behave under certain conditions, e.g. to optimize its parameters. Similar to how we simulate planes or complex circuits today. Quantum systems are everywhere, e.g. lasers, cold atoms, transistors or other complex semiconductors, superconductors, enzymes, … I'd say if we don't develop the ability to understand these systems through simulation we will never progress beyond a certain technological boundary, and quantum computers are a necessary technology for that, just like classical computers were instrumental for the past scientific revolution.
          • jncfhnb 13 days ago
            That sounds… wrong.

            A simulation of a plane is a model, implemented by humans, of how physics works.

            A simulation of a quantum system is still a human implementation of mechanics. A quantum computer per my understanding should not be able to add anything to that. You’re not able to just say well this thingy has quantum mechanics and this is a quantum computer so it’s better able to do that. Quantum computers are about speeding up calculations by structuring specific problems such that wrong answers are not calculated. Not emulating quantum mechanics and then pulling the answer from the ether.

            So… what am I missing here? How could quantum computer aided search spaces specifically aid simulating quantum systems that are not quantum computer aided search spaces?

            • cwillu 13 days ago
              “Wrong answers are not calculated” is a popular simplification of this, but it's misleading you here.

              Quantum computers are a means of systematically creating and modifying complicated sums of exponentially large FFT's, and then efficiently sampling from the resulting distribution.

              Note that you typically still need to sample many times to get a meaningful answer, which is where the “wrong answers are not calculated” ultimately comes from: if you can arrange for most or all of the factors corresponding to “wrong” answers in the sampled distribution to cancel out (such as the term for the number 4 when trying to factor 15), then when you sample several times from that distribution, very few or perhaps none of those samples will have been drawn from the “wrong” part of the distribution, and so you waste less time testing bad samples.

              A quantum computer is potentially useful for simulating quantum systems because the _models_ for those systems are ridiculously complex in _exactly_ this way. It won't help if the model is wrong, but our problem is currently that we can't really run the calculations our current models imply beyond slightly-larger-than-toy examples.
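This cancellation can be checked classically for the toy case. The snippet below brute-forces (it is not a quantum simulation) the distribution that the order-finding step of Shor's algorithm samples from when factoring 15 with base 7: the amplitudes for "wrong" outcomes cancel exactly, leaving peaks only at multiples of Q/r. The register size Q = 64 is picked small enough to enumerate.

```python
import cmath

Q = 64          # size of the QFT register (2^6, chosen small for brute force)
a, N = 7, 15    # f(x) = 7^x mod 15 has period r = 4

# Take the branch of the superposition with f(x) = 1, then compute the
# amplitude the QFT assigns to each outcome k by summing phases directly.
xs = [x for x in range(Q) if pow(a, x, N) == 1]
probs = []
for k in range(Q):
    amp = sum(cmath.exp(2j * cmath.pi * x * k / Q) for x in xs)
    amp /= (len(xs) * Q) ** 0.5          # normalization
    probs.append(abs(amp) ** 2)

peaks = [k for k, p in enumerate(probs) if p > 1e-6]
print(peaks)   # [0, 16, 32, 48]: exactly the multiples of Q/r = 16
```

Sampling k from this distribution and reading off k/Q = j/r is how the period r = 4 (and from it the factors 3 and 5) is recovered; every other k has destructively interfered away.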

              • jncfhnb 13 days ago
                > so you waste less time testing bad samples.

                How is this not “wrong answers are not calculated”? You gave a lot more detail on the mechanics of how these probability amplitudes are canceling each other out but the answer seems the same?

                I don’t follow how this maps to helping simulate the quantum systems. Quantum computers are good at finding solutions to problems efficiently. But the quantum systems we are describing are not solution seeking systems. They’re going to be just interacting components with entanglement whatever’s going on. How would the avoidance of bad samples aid the simulation of a system like that?

                • cwillu 13 days ago
                  For simulation, it's not about the bad samples: the point is information about the distribution itself.
                  • jncfhnb 12 days ago
                    But simulating a process is not the same thing as efficiently routing space exploration. Quantum computing grants you the latter. Why does it impact the former?
            • throwaway63467 13 days ago
              No that’s not how it works, quantum mechanics cannot be simulated efficiently on a classical computer, the state space grows exponentially with the number of degrees of freedom, every degree has relative phases to every other degree even when only looking at pure states, that’s why even the largest super computers cannot simulate more than 50 quantum degrees of freedom currently (see quantum supremacy).
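The blow-up is easy to quantify: simulating n qubits exactly means storing 2^n complex amplitudes, so memory alone walls off classical state-vector simulation around 50 qubits:

```python
# Full state vector of n qubits: 2^n complex amplitudes at 16 bytes each
# (complex128). Memory grows by ~1000x for every 10 extra qubits.
BYTES_PER_AMPLITUDE = 16

for n in (30, 40, 50, 60):
    gib = 2 ** n * BYTES_PER_AMPLITUDE // 2 ** 30
    print(f"{n} qubits: {gib:,} GiB of state vector")
```

At 50 qubits that is already ~16 PiB, more RAM than any supercomputer has, and each additional qubit doubles it.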
              • jncfhnb 13 days ago
                I’m not claiming quantum mechanics can be efficiently simulated on a normal computer. I’m questioning whether arbitrary quantum mechanics systems can be effectively simulated by a quantum computer.
                • adrian_b 12 days ago
                  I have not seen any description of a general-purpose programmable quantum computer, which could simulate an arbitrary quantum system, like a digital computer.

                  All the examples given that I have seen were for making a hardwired simulator for a concrete quantum system, e.g. some chemical macromolecule of interest, to be used much in the same way as analog computers were used in the past for simulating systems governed by differential equations too complex to be simulated by the early digital computers in an acceptable time.

                  • jncfhnb 12 days ago
                    Are hard coded quantum simulators even the same thing though? Like, if we’re saying that the goal of a quantum simulator is merely to run a high dimensional problem and exploit the nature of qubits to make it simpler to handle those imaginary vectors and what not…

                    Does that actually follow the same framework as “traditional” “quantum computing”, e.g. struggles with error correction / problem formulation specifically designed to avoid unnecessary calculations?

                    It feels like although a quantum simulator could viably work to simulate a specific system, it shouldn’t make it any easier to actually understand the system it is simulating and could maybe just indicate how complex by the amount of variation in simulation outcomes (which isn’t useless). Is that accurate?

                    Excuse my terminology here

              • fooker 12 days ago
                Simulation usually involves randomized and sampling algorithms, not a full state space exploration.

                A good part of theoretical chemistry today relies on simulating quantum systems.

    • amarcheschi 13 days ago
      I knew a guy doing a PhD in quantum computing, and he told me he was working on an algorithm to perform edge detection on an image in O(1); apparently it's possible to do some operations much quicker with quantum computing.

      A paper i found on edge detection https://arxiv.org/html/2404.06889v1#:~:text=Quantum%20Hadama....

      • fooker 13 days ago
        This might be a good candidate for Google's prize, unless I'm missing something?
        • amarcheschi 13 days ago
          I have literally 0 experience in this field, so I can't comment
    • breck 13 days ago
      > no one has figured out a use for quantum computers.

      It is my understanding that a use is very straightforward: quickly solving problems in the factorial complexity class, O(n!).

      I could be misunderstanding QC though.

      • tsimionescu 12 days ago
        Yes, you are. Quantum computers are only believed to be faster than classical computers for a handful of very specific problems. And even those are not 100% proven (we don't have a proof that a faster classical algorithm doesn't exist for any of them, except Grover's search, which is only a quadratic speedup, not an exponential one).
        • breck 12 days ago
          > only believed to be faster than classical computers for a handful of very specific problems

          But aren't these problems all in the factorial and exponential classes?

          • tsimionescu 12 days ago
            I think they are believed to be, but the converse is not true. There are a lot of exponential or factorial problems that have no efficient QC algorithm, and are not believed to be likely to have one.

            Also, at least the most famous problem which has an efficient QC algorithm, integer factorization, has no proven lower complexity bound on classical computers. That is, while the best algorithm we know for it is exponential (or, technically, sub-exponential), it is not yet proven that no polynomial classical algorithm exists. The problem isn't even NP-complete, so it's not like finding an efficient algorithm would prove P=NP.

  • buildbot 13 days ago
    With things like these, I like to think about how much money went into the development of the random PR assets, like the weird progression diagram. Apparently, that money is better spent on these than on, say, someone on the Python Language Committee, or your entire Python team.
    • fastball 13 days ago
      I don't think the random PR assets cost as much as you think they do. I know people that could knock that out in a day or so.
      • kamov 13 days ago
        Knowing the industry there's a bunch of people out there on the payroll continuously knocking out these assets all day long every day.
        • habosa 13 days ago
          Google routinely uses agencies for this kind of work (I have been on both sides). A marketing website like this would be a ~$100,000 spend (depends how many pages and how many creative assets).
          • fastball 10 days ago
            That sounds a bit high, but I'll trust your expertise.

            Regardless, 1/50 the value of the prize doesn't seem egregious to me.

    • huytersd 13 days ago
      [flagged]
      • brainless 13 days ago
        [flagged]
        • dang 11 days ago
          Crossing into personal attack is not ok in HN comments, and we've had to ask you at least once before not to do this.

          I'm not going to ban your account right now because it doesn't look like you've recently been making a habit of doing this, but please don't do it again. Your comment would be fine without the first two sentences.

          If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.

          • brainless 2 days ago
            I did not see this earlier. Thanks for taking the time and I agree the comment looks more aggressive than I had in my mind.
            • dang 1 day ago
              Appreciated!
        • fastball 13 days ago
          The prize is $5m. I wouldn't call that peanuts. How much do you think they spent on fluff and buzz? Doesn't seem like that much to me. A couple of assets don't break the bank.
        • tudorw 13 days ago
          Fluff and buzz are communication tools; have you considered that maybe you are not the target for these messages?
        • huytersd 13 days ago
          [flagged]
          • dang 11 days ago
            We've banned this account for breaking the site guidelines and ignoring our request to stop.

            If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.

      • bigfudge 13 days ago
        I think your comment is rude, and naive in that it misses the point. The investment here is small relative to the marketing budget, which tells us something about their priorities.
        • huytersd 13 days ago
          [flagged]
          • pquki4 13 days ago
            I would very much like to know how one person could spend only one day setting the whole thing up, including setting up a new domain, allocating/pointing to a web server, finding pictures at the correct resolution and modifying them, creating the illustrations, making sure that "sign in" button at the top actually works, setting the background gradient in such a way that it is compliant with the company design language, designing and implementing "The Qubit Game" [0], designing and implementing the fancy "our quantum journey" interactive page [1], etc., in a company as large as Google. I am really curious who this Superman at Google is, who can do both design work and dev work, create educational materials, get everything approved, reviewed and shipped within a day, and ensure the links don't break to this day.

            [0] https://quantumai.google/education/thequbitgame

            [1] https://quantumai.google/learn/map

          • bigfudge 12 days ago
            If you think setting this up is simple you have no idea how big corporations make decisions like this. Even if the actual website setup was simple, many people will have been involved and the 'marketing cost' has to account for the time of senior staff deciding on the goals of the project (which seem largely about PR, at least to me). It's not just the cost of a single web dev hacking a few pages together.
    • bitcharmer 13 days ago
      Ultimately Google will use this tech to push more ads. That's what they reduced themselves to. Just an ad business.
  • throwup238 13 days ago
    I just looked at the Killed By Google gallery and it's got 295 rows of 3 dead projects per row (except for the first and last rows of 2). That's 883 kills. That is truly impressive!

    I look forward to welcoming "Quantum AI" to that graveyard.

    • gruntled 13 days ago
      Maybe, like Schrodinger’s cat, we can all just avoid looking at Google Quantum AI and it will stay in a superposition of dead and alive.
    • Intralexical 13 days ago
      Hard to kill that which was never alive.
    • SOVIETIC-BOSS88 13 days ago
      The goog has no qualms in pulling the trigger, that's for sure. Jacquard by ATAP was a favourite of mine. I was going to say that's the price of innovation, but it has been a while since a new successful product release. Maybe the graveyard's fate is to go down in history like xerox parc (innovative but remembered as a footnote). Quite sad.
  • seydor 13 days ago
    Since they are doing buzzword salad, they could go for Google Quantum Blockchain AI VR
    • nextworddev 13 days ago
      Think this was launched to support some L8 promo
      • eru 13 days ago
        Yes, almost reads like classic promotion driven engineering.

        (Or Promotion Oriented Programming.)

        • vinnyvichy 12 days ago
          A cousin of funding driven grant writing (or funding oriented research)
    • vletal 13 days ago
      There is no AI on the whole front page. Right?

      Where is the AI?

    • crazygringo 13 days ago
      Seriously. My first thought was, is this April Fools'?

      And if so, why aren't they throwing crypto into the mix?

      "Quantum Blockchain AI" does kind of roll off the tongue nicely...

    • VHRanger 13 days ago
      Your calling it VR instead of metaverse made Mark Zuckerberg cry
  • anonylizard 13 days ago
    How is this related to AI in any way? Is it just branded as AI to get some share of that sweet AI money? (Probably Google only allows funding new projects if they are AI-related at this point.)
  • constantcrying 13 days ago
    Hilarious. Even the connection between AI and Quantum computing is deeply opaque.

    When computers first began development during WW2, they were a response to immediate demands for particular functionality from many technical areas. Quantum computing seems to go the exact opposite route, first of building up an (admittedly very interesting) technology and then later figuring out if there actually is anything useful to do with it. The connection to AI is particularly interesting, because it seems to be built entirely out of a combination of two buzzwords.

    • vdfs 13 days ago
      I think Quantum Blockchain AI is the future
    • eitally 13 days ago
      If you read the bio of the head of Google's Quantum Computing (now "Quantum AI") org, you can connect the dots:

      https://www.xprize.org/about/people/hartmut-neven

    • maxboone 13 days ago
    Quantum Machine Learning is a pretty solid connection: https://youtu.be/Lbndu5EIWvI
    • fsmv 13 days ago
      The AI connection comes from the idea that you can use Grover's algorithm to speed up generic optimization problems.
    • randohostage 13 days ago
      idk enough physics to say whether quantum computing is it or not. Intuitively, understanding matter at a sub-atomic level and having a machine that can interact with it seems like a super powerful thing to have. But I think quantum computing is in the same place ML was in the 80s: influential people (mainly Marvin Minsky) said the perceptron and neural nets were a dead end. The same thing could be happening to quantum computing because of the high expectations.
      • constantcrying 13 days ago
        Sure, on some level it is extremely interesting, and I don't think there is anything wrong with doing research on it. The real issue is pretending that it is something that will change the world in the near future, while having yet to figure out what you could actually do with it.

        IBM and Google have invested massively because someone there thought that it actually would be useful. But that hasn't happened yet and to be honest it doesn't look like that will change any time soon.

        As regards neural networks, they were pretty much a complete dead end until computing power increased enough to make them viable. In that case an external technology had to come along to make it work.

        • randohostage 13 days ago
          True, the current use cases don't seem worth the effort, other than maybe applications to crypto. But idk, don't you think that if you could interact with or model matter accurately from a personal computer, it would be an absolute game changer? Even more so than personal computers?

          Plus what is the near future anyway? If big tech companies spend 2 billion a year on quantum computing for the next 25 years (which is how long it took to get from Geoffrey Hinton's book to fully commercial applications) that's 50 billion. OpenAI + Anthropic are valued at >100 billion. Even if they just broke even doesn't that seem worth it to control the next generation of computing?

          And idk other than this, robotics (self driving, manufacturing, agriculture) and AGI what else is there to bet on for the coming decades?

  • tromp 13 days ago
    The milestones specify number of physical qubits, and error rate of logical qubits.

    Shouldn't they also specify how many physical qubits are needed to build one logical qubit with the given error rate?

    Milestone 6 reduces the error rate from 10^-6 to 10^-13 while increasing physical qubits by only 10x. That doesn't work out well if you need 1000x more physical qubits per logical one...
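For a rough sanity check, here is the standard textbook surface-code model. The ~2d^2 physical qubits per logical qubit, the p_L ~ 0.1 * (p/p_th)^((d+1)/2) scaling, and the p = 1e-3, p_th = 1e-2 figures are generic assumptions, not Google's actual roadmap numbers; the arithmetic is done in integer powers of ten to keep it exact:

```python
def distance_for(log10_p_logical, log10_p=-3, log10_p_th=-2):
    """Smallest odd surface-code distance d with
    0.1 * (p/p_th)^((d+1)/2) <= target logical error rate,
    with all rates given as integer powers of ten."""
    step = log10_p - log10_p_th          # log10 of p/p_th, here -1
    d = 3
    while -1 + step * ((d + 1) // 2) > log10_p_logical:
        d += 2                           # surface-code distances are odd
    return d

for log_target in (-6, -13):
    d = distance_for(log_target)
    print(f"p_L = 1e{log_target}: distance {d}, "
          f"~{2 * d * d} physical qubits per logical qubit")
```

Under these assumptions, tightening the logical error rate from 10^-6 to 10^-13 raises the distance from 9 to 23, i.e. roughly 6.5x more physical qubits per logical qubit, so a 10x total budget is not obviously inconsistent; but the answer is very sensitive to the assumed physical error rate.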

  • ladzoppelin 13 days ago
    This is an intimidating contest; are there many groups working on these problems besides the people on their website? I wonder if they put "AI" in the branding to grab the attention of people in quantum about to pivot to LLMs.
    • vtomole 13 days ago
      > I wonder if they put "AI" in the branding to grab the attention of people in quantum about to pivot to LLMs.

      The Quantum AI lab was founded in 2013, about a decade before LLMs took off. The founder of the lab is researching how quantum computing can be applied to AI: https://www.youtube.com/watch?v=3iEEvRfTTEs

  • somishere 13 days ago
    Just an aside but Google should really stop outsourcing their web builds. The quality is terrible
    • versteegen 13 days ago
      I'm quite taken aback to be viewing a Google website where 5.8MB images load at a crawl because they're completely unoptimised PNGs, equal in size to BMPs of the same resolution!

      Oh hey apparently there are some 26MB of mp4s that didn't load at first, they served a 2.4MB "Mobile" mp4 instead, so it's not all bad.

    • smokel 13 days ago
      I fail to see what is so wrong with it. It works on mobile, and it has some actual information. I suppose at least 95% of the internet is worse off. Are you sure it is not a taste issue?
      • somishere 13 days ago
        It works, that's true! It's just not designed or built to a very high standard (and I expect the arbiters of the internet to build to a very high standard) ... So I'd say it's a quality issue.

        Massive assumption that it was built by an external team. But I'd hope so :)

  • Sytten 13 days ago
    Is this the new cheat code for getting infinite money from governments?

    No seriously, the amount of cash our governments (Canadian here) pump into projects like this always annoys me.

  • Hewlberno 13 days ago
    The lack of knowledge and ignorance of quantum computing in this thread is incredible.

    Quantum computers do exist.

    They are extremely useful and do things that digital computers cannot do. Categorically cannot.

    The cynicism in this thread is crazy

    • latexr 13 days ago
      Why not list what some of those things are? Links are fine if you’re pressed for time.

      I’m not being snarky; I haven’t read much about quantum computing and I am genuinely interested in practical applications.

      As it is, your comment just reads as “you people are a bunch of idiots, you’re all wrong and crazy, and I’ll insult you repeatedly but not offer one single corrective argument”. It would be insane for anyone to change their mind based on your comment, you’ll only get people to double down.

    • brabel 13 days ago
      As someone who has heard talk about Quantum Computers for over a decade now, and never seen a single actual application, I would be very interested if you could provide examples of where I could make use of them right now for things that I couldn't do with classical computers. If they "do exist" and "are extremely useful" then I believe you have exactly what I am asking in mind?
    • constantcrying 13 days ago
      >They are extremely useful and do things that digital computers cannot do.

      But that is false. A classical computer can simulate a quantum computer. Performance is the difference, not inherent ability.
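      To make the "performance, not ability" point concrete: a classical machine can track an n-qubit state exactly, it just pays for 2**n complex amplitudes to do it. A toy statevector sketch in plain NumPy (my own illustration, nothing to do with Google's stack), preparing a Bell state:

```python
import numpy as np

# Toy statevector simulator: a classical computer *can* represent n qubits;
# the catch is the 2**n amplitudes, which is the performance wall.
def apply_gate(state, gate, target, n):
    """Apply a single-qubit gate to qubit `target` of an n-qubit statevector."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                  # control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

n = 2
state = np.zeros(2 ** n)
state[0] = 1                                    # start in |00>
state = apply_gate(state, H, 0, n)              # superpose qubit 0
state = CNOT @ state                            # entangle -> (|00> + |11>)/sqrt(2)
print(np.round(state, 3))                       # amplitudes of |00>,|01>,|10>,|11>
```

      Exact, and trivially correct, but every extra qubit doubles the memory and time, which is exactly where the classical simulation stops being practical.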

      >The cynicism in this thread is crazy

      The cynicism stems from people telling others that practical quantum computers will change the world for at least a decade. Even the URL invokes deep cynicism in me, as it randomly combines quantum computing with the current hot thing. When will OpenAI switch to quantum computing for their LLMs? Next year?

      • vtomole 13 days ago
        > The cynicism stems from people telling others that practical quantum computers will change the world for at least a decade.

        Practical quantum computers will change the world (break RSA 2048). The question is "when". The people who have a timeline of ~10 years instead of decades contribute to what we in our community call "Quantum hype" and it's very much frowned upon by most of the members in the community.

        > combines Quantum computing with the current hot thing

        The founder of this lab has an illustrious career in machine learning and is now researching how quantum computers can help with that.

        • hedora 12 days ago
          How would breaking RSA change the world?

          Perhaps via some practical (non-crypto) application of factoring large numbers?
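          For context on why factoring is the world-changing part: Shor's algorithm attacks RSA by reducing factoring N to finding the period (order) of a random base a mod N, and period finding is the one step a quantum computer speeds up exponentially. A toy all-classical sketch of that reduction (my own illustration; the brute-force period search is exactly the part that doesn't scale):

```python
from math import gcd
from random import randrange

def order(a, N):
    """Smallest r > 0 with a**r == 1 (mod N). Brute force: the slow part
    that Shor's quantum period-finding replaces."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    """Find a non-trivial factor of (odd, composite, non-prime-power) N
    via the order-finding reduction at the heart of Shor's algorithm."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                      # lucky: a already shares a factor
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:
                d = gcd(y - 1, N)
                if 1 < d < N:
                    return d              # non-trivial factor from the period

print(factor(15))                          # 3 or 5
```

          For RSA-2048 the modulus N has ~617 decimal digits, and no known classical algorithm finds the period (or the factors) in feasible time; that gap is the whole threat.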

    • saagarjha 13 days ago
      Ok, how about listing some of the stuff they can do?
      • eru 13 days ago
        There's a few things they can do better in theory.

        But as far as I know none of the existing quantum computers works well enough to achieve that 'quantum supremacy', yet.

        So I have no clue what Hewlberno is talking about with their comment.

    • bitcharmer 13 days ago
      Are all those extremely useful applications in the room with us right now?
    • knifie_spoonie 13 days ago
      I'd really like to see just one example of a useful problem that a quantum computer can solve and which classical computers can't.

      And I'll qualify that as a real quantum computer which exists in physical form today, not something theoretical in someone's research paper.

    • roomey 13 days ago
      I think the problem here is that people are assuming it is just a different type of what we have now, instead of something completely different.

      So we have a new tech like Blockchain, it takes a few years and gets really "big".

      We have an AI revolution, ok the research has been ongoing for years but it seems like there has been overnight advancement.

      I'm sure people are expecting the same with quantum computing, but it's not just a re-use of existing tech (transistors) it is development of brand new tech.

      It is better to imagine it as research into fusion power generation. It will take a long time, no one is sure just how useful it will be.

      But saying a research quantum computer can't outperform an existing computer is like saying CERN can't outperform a coal-fired power plant.

      Not only two different things, but one of them isn't even designed to generate power.

    • randohostage 13 days ago
      totally minsky 1969 vibes :)
  • ChrisArchitect 13 days ago
    Anything new here?

    Last milestone was February 2023:

    https://blog.google/inside-google/message-ceo/our-progress-t...

  • mise_en_place 13 days ago
    They claim to have 10^2 qubits, which is cool, but how much energy does it take to keep things cool? I can only think about the massive amount of energy wasted on it.
    • foota 13 days ago
      If your only problem with a quantum computer is the energy required to run it, then you have a very good problem.

      edit: it might make sense to be worried about the feasibility of cooling it, since I think the infrastructure required for keeping the quantum-y parts of the computer cold is extensive, but the actual energy itself is probably irrelevant.

  • szundi 13 days ago
    Some of the qubits on the images are higher than others
  • H8crilA 13 days ago
    Quantum AI running self driving cars in the blockchain cloud, social VR controlled. Experience our agile experience powered with renewable big data. Gamifying the digital transformation of tomorrow!
    • vtomole 13 days ago
      I'm aware I'm replying to a comment that doesn't add anything substantive to this conversation, but the quantum AI lab is a very serious research effort: https://quantumai.google/research
    • tux3 13 days ago
      Okay team, when can we supercharge this?
      • michelb 13 days ago
        Let's add some more people and do it in half the time.
      • Drakim 13 days ago
        It's all about finding a balance.
        • SomeoneFromCA 13 days ago
          And pushing the envelope, while paying close attention to identifying the pain points.
          • qiine 13 days ago
            careful with that envelope its all crinkly now !
        • Wherecombinator 13 days ago
          We’ll need to circle back
    • ssss11 13 days ago
      Don’t forget the tracking and ads!
      • These335 13 days ago
        Of course, how can you possibly raise awareness without an ad campaign!
    • polygamous_bat 13 days ago
      The synergy is off the charts!
    • jc_811 13 days ago
      Sounds great, but we need Web3 in there! Or are we already onto Web4 or 5?
    • nyoomboom 13 days ago
      What's old is new again: The Return of the Turbo-encabulator!
    • sgt101 13 days ago
      Green and diverse. You forgot green and diverse. Also youth.
      • rubberband 13 days ago
        All of these provide synergy.
        • sgt101 13 days ago
          When I hear input like this I get excited by our emerging talent, going forward we should be prepared to leverage this.
    • cpill 12 days ago
      so what learning were learnt for this lesson?
  • codegladiator 13 days ago
    > Error correction is the fundamental requirement for creating a truly useful quantum computer. It enables the scale required to perform complex calculations.

    A hover tip on "quantum computing". Scale, useful, complex, fundamental, quantum

    what did i miss ?

    oh , computer !

  • andsoitis 13 days ago
    LLMs have started to show what they will never be able to do, as discussed here https://news.ycombinator.com/item?id=40179232

    Quantum machine learning is postulated to overcome the issue of LLMs exhibiting incredible intuition but limited intelligence.