Hey guys, I learned electronics from a Nobel laureate!
Throughout my physics career including PhD, analog electronics was the most difficult but probably also the most rewarding class to me. I fondly remember staying until 2 am in Broida Hall at UCSB trying to get a filter to work, getting a few hours sleep, then being back in the lab before sunrise. Of course, this was mostly the result of procrastination, but damn were those good times.
One thing that really bothered me then was the idea of a current source. I was perfectly happy with a voltage source, perhaps naively(1). But a current source seemed magical. I asked Martinis about this and he seemed dumbfounded that I didn't understand. Of course, the answer is feedback. And, of course, good voltage sources also require feedback. But he was so familiar with feedback control that he didn't even consider saying that's what's happening, while I had never even heard of controls.
Long story short, sometime later I asked to join his lab as an undergrad researcher. He said no, and to this day I think it's because I didn't understand current sources. Or maybe I was too late, or maybe it was the A- (see the aforementioned procrastination). That led me to ask a biophysicist, and so I became a biophysicist instead of going into condensed matter/QI/QC. In hindsight, I think this was fortunate. I would've never considered biophysics, which has been one of the loves of my life since then. Who knows, maybe I would've been just as happy with quantum stuff. I'm working through Mike and Ike now and find it fascinating.
Funny enough, after my PhD, I co-founded a startup in industrial control & automation. Now I understand feedback quite well, and thus current sources, albeit many years too late.
(1) Of course, good voltage sources vary their resistance just like good current sources vary their voltage. My best guess as to why I was more bothered by current sources is that I was so familiar with voltage sources that confidently claimed constant voltages (batteries). Not a very good reason; I should've questioned it more. In practice, it's much easier to make a near-ideal voltage source (near-zero source resistance) than a near-ideal current source (very large source resistance).
There are two different ways to produce a voltage source. The first is to put a variable resistor in series with the power source, which you can adjust to control the voltage. This works, but dissipates a lot of heat. The second is to put a switch in series with the power source and a capacitor in parallel with the load, then switch the power source on and off very rapidly and use the duty cycle to control the voltage. This is how modern switching power supplies work. In actual practice, there are also inductors in the circuit which cause resonance, and allow the switching to happen when there is no current flowing through the switch. This is how modern switching power supplies can be so efficient.
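The duty-cycle idea above can be sketched numerically. Here is a toy simulation (the component values are my own illustrative choices, not from the comment) of a source switched into an RC low-pass filter; the filtered output settles near duty × V_in:

```python
# Toy simulation of duty-cycle voltage control: a supply switched into an
# RC low-pass filter. Illustrative values only.
import math

V_IN = 12.0      # supply voltage (V)
R = 10.0         # filter resistance (ohms)
C = 100e-6       # filter capacitance (farads)
F_SW = 10_000    # switching frequency (Hz)
DT = 1e-6        # simulation time step (s)

def simulate(duty, t_total=0.1):
    """Return the filtered output voltage after t_total seconds."""
    v_out = 0.0
    period = 1.0 / F_SW
    for i in range(int(t_total / DT)):
        t = i * DT
        # the switch is on for the first `duty` fraction of each period
        v_src = V_IN if (t % period) < duty * period else 0.0
        # forward-Euler RC update: dV/dt = (v_src - v_out) / (R * C)
        v_out += (v_src - v_out) * DT / (R * C)
    return v_out

for duty in (0.25, 0.5, 0.75):
    print(f"duty={duty:.2f}  ->  V_out ≈ {simulate(duty):.2f} V")
```

A real switching supply closes a feedback loop around the duty cycle and adds the inductor mentioned above for efficiency; this sketch only shows why the duty cycle works as a control knob.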
Ok, thanks. But I was calling attention to a point in the previous comment that makes it difficult to see a kind of "dualism" between idealized current sources and idealized voltage sources. Idealized current or voltage sources don't necessarily have any series resistance, and it doesn't matter how they're realized.
I'm certainly no electrical engineer, and I see how the nice duality was lost in my description.
Yes, I was thinking about the ones I'd actually worked with, where the resistance was the control knob. I haven't done much electronics since, so my recollection isn't perfect. I have recently been doing some ESP32 control projects where I just used some power supply I bought. I should look into how it works!
Edit- I just looked up switching power supplies and remembered that I did actually know about those!
This is pretty much what we do to apply small bias currents to our superconducting circuits. The signals are small (<1 uA), and the power is dissipated outside of the cryostat, so this method is very simple and effective. The voltage and resistors don’t even need to be that huge, ~10 MOhm or below, and correspondingly, <10 V.
It works. In most applications it would be wasteful of electrical power (and money). You have to generate this very large voltage that (it turns out) you don't really need.
On the other hand that circuit is very easy to understand and build and test.
Both Devoret and Martinis are also highly involved in pushing quantum engineering to new levels - Devoret at Google Quantum AI and Martinis (formerly at Google) with his company, Qolab.
Coincidentally, I have a close friend doing his PhD with Devoret and know someone working with Martinis. I am curious to see if they will ever see their respective supervisors again, given that the Nobel Prize attention will likely garner them countless invitations for talks and keynotes...
Invitations are something I think they can mostly accept or decline at their own volition, with one exception.
The prize rules stipulate that they need to give a lecture related to their winning topic at the institution that selected them within 6 months.
IIRC the 2024 physics prize lecture (on the roots of neural networks) was held in the days just before the prize ceremony and can be watched on the Swedish broadcaster's "education" channel as well as YouTube.
I spent time in UCSB’s physics department and Prof. Martinis was one of those experimental physicists who knew more about electronics and measurements than the typical electrical engineer. He used to have a wiki page containing documentation, cad files, etc of circuits that his group developed for his measurements and he also had open source software for controlling electronics. Very cool prize and happy to see UCSB getting one more Nobel prize!
It is worth noting that the research for which Martinis is being awarded the Nobel Prize was largely performed while he was at NIST (National Institute of Standards and Technology), part of the Dept. of Commerce.
For those looking for a good pop-sci introduction to these sorts of quantum effects and why demonstrating macroscopic quantum effects is a big deal for the foundations of the field, I recommend "Through Two Doors at Once" by Anil Ananthaswamy.
Great to see the University of California, Berkeley and the University of Cambridge, UK continuing to add to their already outstanding number of Nobel laureate alumni.
The Paris-Sud University was a new name to me. Apparently, this will be the 4th Nobel laureate associated with the university.
French higher education institutions work in very different ways from the standard US model; two relevant characteristics are: a) good institutions for education and for research might not be the same at all, and b) institutions work in much more networked ways, so many labs will be joint ventures between, say, 5 universities/schools and 3 national research centres, and students might get three degrees, each of them a joint program between several schools.
I went to a tier 2 state school, no real research to speak of.
Amazing professors, a great student-to-professor ratio; professors were in their offices all the time and happy to see students. The night before final labs were due, profs would be up helping students debug problems.
Only 2 courses I took there even had TAs. Work was typically hand-graded by professors once you passed the first intro course, and quite a few of the intro courses were fully taught by professors as well.
Does my school have any good research coming out of it? Not really. Not the point. It has a bunch of professors who are there because they want to teach, and it has a bunch of students who are getting to benefit from those professors.
"We know that the ball will bounce back every time it is thrown at a wall. A single particle, however, will sometimes pass straight through an equivalent barrier in its microscopic world and appear on the other side. This quantum mechanical phenomenon is called tunnelling."
Is the particle just failing to collide with the wall since objects are mostly empty space? Or is something more spooky or interesting happening?
The idea that a particle could pass through a wall by luckily avoiding collisions is a classical way of thinking. In that view, a particle is a tiny solid ball and a wall is just a collection of other tiny balls with space between them.
Quantum tunneling is based on a completely different concept. In quantum mechanics, the "wall" is not a physical object but a high energy barrier. Classically, a particle cannot be in a region if it doesn't have enough energy to overcome that barrier (this is why people often use the idea of a high wall and a ball that cannot make it over the wall). However, quantum mechanics treats particles as having wave-like properties. This wave is related to the probability of finding the particle at any given location. While the probability of finding the particle inside the high-energy barrier is very low, it is not zero. The wave's amplitude shrinks inside the barrier, but a small portion of it extends to the other side. This means there is a small but finite probability that if you measure the particle's position, you will find it on the other side. When that happens, we say the particle has "tunneled" through.
The surprising success of the experiments that led to the Nobel Prize today is that it wasn’t just a single particle (like an electron) that they measured tunneling through a barrier, it was a macroscopic group of particles. These particles were able to tunnel through the barrier because they were kept in a coherent state that allowed them to have a wave function that coherently extended through the barrier. This meant that they had a reasonable finite amplitude on the other side of the barrier so that a measurement could show that they tunneled through the barrier.
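For a sense of scale on the single-particle case described above, here is a rough WKB-style estimate of the tunnelling probability through a rectangular barrier, using the standard exponential decay of the wave function inside the barrier. The 1 eV barrier height and nanometre widths are my own illustrative choices, not numbers from the thread:

```python
# Rough estimate of tunnelling through a rectangular barrier:
# T ≈ exp(-2*kappa*L), with kappa = sqrt(2m(V0 - E)) / hbar.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def transmission(barrier_minus_energy_ev, width_m, mass=M_E):
    """Approximate tunnelling probability for a particle below the barrier."""
    kappa = math.sqrt(2 * mass * barrier_minus_energy_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron facing a barrier 1 eV above its energy:
for width_nm in (0.5, 1.0, 2.0):
    print(f"width {width_nm} nm -> T ≈ {transmission(1.0, width_nm * 1e-9):.2e}")

# The same barrier for a proton (~1836x heavier) is essentially opaque,
# which is one way to see why tunnelling is a microscopic phenomenon:
print(f"proton, 1 nm -> T ≈ {transmission(1.0, 1e-9, mass=1836 * M_E):.2e}")
```

The exponential dependence on both width and mass is the point: doubling the barrier width or swapping in a heavier particle suppresses the probability by many orders of magnitude.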
Thanks, I'm still thinking about your answer but really appreciate your explanation. Would this mean that there is some (possibly currently unknown) maximum size for a group of particles that could be forced to maintain the correct state to pass through the wall?
This is another great question. It's still a matter of debate how the wave-like behavior of quantum mechanics turns into the particle-like behavior of large objects that we observe. Some people believe there should be a maximum size, some people believe that there isn't. In fact Nature did a survey a few months ago and found that physicists disagree wildly on this topic. https://www.nature.com/articles/d41586-025-02342-y
Have you heard of Schrödinger's cat, which is hypothetically dead and alive at the same time? Schrödinger described this thought experiment to argue that quantum mechanics led to absurdities if you took it too far. Ironically, many physicists now believe that such an experiment is possible in principle, though it would be extremely difficult.
There is no obvious limit to how big you could scale up this experiment right here. In practice, these circuits are already big enough that you can quite literally see them with the naked eye (~mm in length). Nothing really stops you from making one meters in diameter, aside from the obvious impracticality of cooling such a massive structure to the required temperature. Nobody expects this to break any known physics by doing so. In fact, some of these experiments have also been used to estimate lower bounds for nonlinear versions of quantum mechanics (where collapse is a real thing, and the larger an object is, the faster it collapses).
But you should not really think of a physical wall in this case. The experiments that demonstrated macroscopic tunneling obey the exact same math, but nothing is literally tunneling through a macroscopic wall. The Cooper-pair electrons are tunneling, but that's a different Nobel Prize (Josephson, 1973).
The real limit is not based on the size or number of particles, but on the coherence of the group of particles. Using the word coherence is probably not helpful without context, so let me give a quick explanation of what that means.
As I mentioned in my answer above, particles can exhibit wave-like properties. A group of particles will each have their own wave packet. In our everyday lives, two particles, even those right next to each other, are jiggling around randomly due to temperature and experiencing slightly different environments. You can think of each of these separate random jiggles as a measurement that collapses the wave function of that particle. Then, after the particle's wave function collapses, it begins to evolve again until the next measurement. As a side note, saying a measurement collapses the wave function is quantum mechanics talk for the observed reality that when we measure where a particle is, we do not find a wave, we find a particle. So, the shorthand for this view of quantum mechanics is that a measurement collapses the wave function.
Ok, so now we have a bunch of particles, like a chair. Why won't a chair tunnel through a wall? Well, all the particles that make up the chair are not just physically separated, but they are jiggling due to their temperature and their slightly different environments. So, all of these particles that make up the chair keep having their wave function collapsed randomly. The chance of one particle tunneling through the wall is small. The chance of all 10^27 particles in the chair independently tunneling through the barrier at once is not going to happen before the universe ends.
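The "not before the universe ends" claim is easy to check with back-of-the-envelope arithmetic. The per-particle probability below is a made-up, deliberately generous number; the point is only how fast raising it to the power of 10^27 independent particles kills the joint probability:

```python
# Why a chair never tunnels: the joint probability for N independent
# particles is p**N, far too small for a float, so work in log10 space.
import math

p_single = 1e-30     # hypothetical, generous per-particle probability
n_particles = 1e27   # rough particle count for a chair

log10_joint = n_particles * math.log10(p_single)
print(f"log10(joint probability) ≈ {log10_joint:.3e}")
# The probability is ~10^(-3e28): utterly negligible on any timescale.
```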
Back to coherence. All of these particles of the chair are independently jiggling around, and each one has its own wave function collapsed very quickly. For this reason, you can treat each particle as an independent particle. We would say the wave functions of these particles are not coherent with each other.
Now, imagine that we have two particles right next to each other. At room temperature, they are constantly jiggling and having their wave functions collapsed. If we cool them down to reduce the jiggling, and they are close enough to each other, their respective wave functions can start to overlap. When the jiggling of the particles is small enough and their wave functions overlap sufficiently, they begin to behave as a single quantum entity. This is a coherent state. In a suitably constructed experiment, these coherent particles can then exhibit quantum behaviors such as tunneling together.
Back to your question: is there some maximum size for a group of particles that could be forced to maintain the correct state to pass through the wall? Since the group of particles must be in a coherent quantum state to tunnel, the real question is how big of a group can be put into such a state. You have to cool them to slow the jiggling, isolate them from anything in the environment that might collapse their wave functions, and get them close enough together for their wave functions to overlap. There is likely a theoretical limit that could be calculated, but as a practical matter, extraordinary engineering efforts are required to get even a very small group of particles into a coherent quantum state. The direct answer to your question is that while there may be a theoretical maximum possible size of a coherent state for our universe, the real limit is set by the immense practical challenges of creating and maintaining a coherent state. This is what makes the work of this year’s Nobel Prize winners so impressive.
It's more spooky and interesting. When we switch to discussing the single particle case, the "wall" is a metaphor. You have a single particle in a low energy state at position A. There is another low energy state at position C, but the path from A to C entails (ostensibly) passing through an intermediate high energy state at position B. Without lending energy into the system, you wouldn't expect the particle to be able to move from A to C, because the particle needs a lot of energy temporarily to move from A to B (which it would give up again as it moves from B to C). Yet the particle does move from A to C, in the absence of an energy source that could explain what happened. This raises the spooky question: Did it actually pass through B on its way? It seems like it did not.
The simplest version of this problem involves a "potential barrier." Another loose classical analogy is a ball rolling towards a hill. Everyone knows from experience with classical systems that with sufficient speed (i.e., kinetic energy) the ball can make it over the hill: the ball has more kinetic energy at the bottom of the hill than the potential energy it would have at the top. If it has less energy, it will not make it past the hill. The weird thing about the analogous situation in quantum mechanics is that even if the particle has less energy than the potential barrier (the hill), it has a non-zero probability of being found on the other side, because the wave function decays exponentially inside the barrier rather than dropping to zero.
In quantum mechanics, the "ball" (or in this case an ideal particle) has a "wave function" associated with it. This wave function effectively describes the probability that the particle can be at a certain location.
It so happens that when you solve this problem (a ball bouncing against a wall) in the wave-function paradigm, you end up with a non-zero probability that the ball appears on the other side of the wall.
I'm not sure if there is a deeper explanation at play here but that's how I understand it.
As far as I know, the "single particle" referred to here is not a "classical particle" like a ball. It's a "quantum object" that, depending on how you look at it, behaves like a wave or an object. Definitely spooky!
I read the NY Times article about this earlier this morning. I thought it was not very good. I came to HN to see if it had something better. It did. The linked article is also at something like a high school level, but it gave me (retired PhD Physics) a good idea of the experiment and the theory. Thanks.
I remember being introduced to this research when reading a weird paper on the unexpected efficiency of photosynthesis, but now I can't find that paper. Anyone got any hints?
Can't help you with "a weird paper on the unexpected efficiency of photosynthesis", try asking a biologist at your local university, or possibly an organic chemist.
Yeah, I looked at Google Scholar to try and find cross-references to anything to do with photosynthesis and came up empty-handed. Annoying because I've been telling people about these guys for years, but can't find the original paper that introduced me to them!
Always exciting to see what groundbreaking discoveries the Nobel Physics Prize will honor this year. Can't wait to learn about the latest advancements!
Maybe you're reading a different version of Hacker News, but I've found the amount of politics discussed to be fairly low compared to other outlets; plus, in this case it is actually relevant and an interesting point.
Also, look at the other content, this didn't come from nowhere.
I think that the majority of them will realize that salaries for academics are pretty low compared to the US, and they will either join private labs or professionalize.
€10K seems high to me. In France it would be lower than that, I think, save for maybe a handful of outliers. Google search suggests top earners in physics in academia in Germany is probably between €80K and €90K, or maxing out at €7.5K a month.
A lot of their researchers are immigrants and will just return to their country. New students will stop coming and stay in their home country. America is cooked.
No, attending a US university in a STEM subject is one of the only reliable ways to migrate to the country now that they've made it much harder to get H1b visas. The flow of immigrants seeking an American education will not stop any time soon.
That still involves H1B though? Student visas are non-immigrant, you still have to transition to an immigrant visa like H1B to actually stay past 1+2 years of STEM OPT
From that viewpoint, all of chemistry and biology is just a consequence of theoretical physics. But a lot of the "consequences" of physics are very surprising and not at all obvious from the fundamental equations. There's a great paper you might want to read:
Many great discoveries follow from new instrumentation leading to better and novel data, and less often from some conceptual leap. This is why the genius of Einstein is all the more remarkable in coming up with relativity. Interestingly, he got his Nobel for something else :)
They discovered that the theory worked in a regime where it hadn't been tested before. I'm not sure what "new physics" means in your sentence: it is a core assumption of physics that its rules are always true, that all physics has always existed.
New physics in this context means previously unknown effects or mechanisms, or even a new theory/framework for an already understood phenomenon. Using "physics" in this way is common amongst academics.
It's a highly non-trivial claim that a macroscopic system can have quantized energy levels and exhibit measurable quantum effects. You can't just solve the Schrödinger equation for 10^24 particles to show that.
I don't think so, I just think they expect that Nobel Prize level physics should feel less incremental, and everything that doesn't involve a revolution in physics (like supersymmetry) or at least an expected confirmation of an old revolution (like the Higgs boson or gravitational waves) feels incremental.
It would be pretty crazy to have enough big breakthroughs in physics to warrant a prize every year. I guess that’s how it was for a brief period in the early 1900s.
Exactly. Also, regarding the goal to "demonstrate quantum tunneling macroscopically": haven't we had tunnel diodes for quite a while? That device uses tunneling for its basic functionality.
Michel H. Devoret is Chief Scientist at Google Quantum AI; John M. Martinis resigned from Google in April 2020 after being reassigned to an advisory role. [0]
Sounds like there were some political shenanigans between them, where Martinis was moved into a useless role and took the hint at the height of the covid lockdown.
He left because the reorg stripped his hardware decision authority, and he wouldn’t stay without control to drive the technical plan.[0]
He kept working on the problem. The question now, looking back over six years, is: did Google make a mistake? I've seen this at IBM and other large research divisions, where people who did something significant 20 or 30 years ago in fields like AI become stagnant, burning tens and hundreds of millions. There was a political battle and Martinis got pushed out. The question I have is: did the people who pushed him out know the path forward in quantum computing, or did they just know how to play office politics, licking the correct behinds?
Did you mean to say "good voltage sources vary their current just like good current sources vary their voltage"?
(I know nobody really cares, and I promise I'll seek help for whatever neurological condition I seem to have.)
Or that might have just been a mistake.
For a design without feedback (and in an energetically inefficient way), maybe this can work too:
1. Determine the maximum resistance of the "current consumer" part of your circuit throughout its operation.
2. Prepare a resistor several orders of magnitude larger than that resistance.
3. Connect that resistor to a (huge) voltage source, sized so that the resulting current is the one you target for your current source.
4. Put the "current consumer" part of your circuit in series with the large resistor.
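A quick sanity check of this recipe, borrowing the numbers from the superconducting-bias comment elsewhere in the thread (10 V across 10 MOhm for a 1 µA target):

```python
# Brute-force current source: a big series resistor and a proportionally
# big voltage source. The load resistance barely affects the current as
# long as it stays much smaller than the series resistor.
V_SOURCE = 10.0                  # volts
R_SERIES = 10e6                  # 10 MOhm series resistor
I_TARGET = V_SOURCE / R_SERIES   # 1 uA nominal current

def current(r_load):
    """Current delivered into a load in series with the big resistor."""
    return V_SOURCE / (R_SERIES + r_load)

# Even as the load swings from 0 to 10 kOhm, the current barely moves:
for r_load in (0.0, 1e3, 1e4):
    i = current(r_load)
    err = 100 * (I_TARGET - i) / I_TARGET
    print(f"R_load = {r_load:8.0f} ohm -> I = {i * 1e6:.4f} uA ({err:.2f}% low)")
```

The cost is the power burned in the series resistor, which is exactly the inefficiency the comment mentions; it only works well when the power can be dissipated somewhere harmless, like outside a cryostat.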
The 2024 lecture mentioned above:
https://urplay.se/program/239905-nobelforelasningar-2024-geo...
https://www.youtube.com/watch?v=XDE9DjpcSdI
With labs and degrees shared across that many institutions, it's hard to have a good grasp of which one deserves the credit.
I'm pretty sure this is true in the US, too. Maybe we're just a little more delusional about it here.
Amazing professors, great students to prof ratio, professors were in their offices all the time and happy to see students. The night before final labs were due profs would be up helping students debug problems.
Only 2 courses I took there even had TAs. Work was typically hand graded by professors as soon as you passed the first intro to course, and quite a few of the intro courses were fully taught by professors as well.
Does my school have any good research coming out of it? Not really. Not the point. It has a bunch of professors who are there because they want to teach, and it has a bunch of students who are getting to benefit from those professors.
"We know that the ball will bounce back every time it is thrown at a wall. A single particle, however, will sometimes pass straight through an equivalent barrier in its microscopic world and appear on the other side. This quantum mechanical phenomenon is called tunnelling."
Is the particle just failing to collide with the wall since objects are mostly empty space? Or is something more spooky or interesting happening?
The idea that a particle could pass through a wall by luckily avoiding collisions is a classical way of thinking. In that view, a particle is a tiny solid ball and a wall is just a collection of other tiny balls with space between them.
Quantum tunneling is based on a completely different concept. In quantum mechanics, the "wall" is not a physical object but a high energy barrier. Classically, a particle cannot be in a region if it doesn't have enough energy to overcome that barrier (this is why people often use the idea of a high wall and a ball that cannot make it over the wall). However, quantum mechanics treats particles as having wave-like properties. This wave is related to the probability of finding the particle at any given location. While the probability of finding the particle inside the high-energy barrier is very low, it is not zero. The wave's amplitude shrinks inside the barrier, but a small portion of it extends to the other side. This means there is a small but finite probability that if you measure the particle's position, you will find it on the other side. When that happens, we say the particle has "tunneled" through.
The surprising success of the experiments that led to the Nobel Prize today is that it wasn’t just a single particle (like an electron) that they measured tunneling through a barrier, it was a macroscopic group of particles. These particles were able to tunnel through the barrier because they were kept in a coherent state that allowed them to have a wave function that coherently extended through the barrier. This meant that they had a reasonable finite amplitude on the other side of the barrier so that a measurement could show that they tunneled through the barrier.
Have you heard of Schrodinger's cat, which is hypothetically dead and alive at the same time? Schrodinger described this thought experiment to argue that quantum mechanics led to absurdities if you took it too far. Ironically, many physicists now believe that such an experiment is possible in principle, though it would be extremely difficult.
But you should not really think of a physical wall in this case. The experiments that have proven the macroscopic tunneling behave according to the same exact math, but nothing is really tunneling through macroscopic walls. The cooper pair electrons are, but that’s a different Nobel prize (Josephson, 1973).
The real limit is not based on the size or number of particles, but on the coherence of the group of particles. Using the word coherence is probably not helpful without context, so let me give a quick explanation of what that means.
As I mentioned in my answer above, particles can exhibit wave-like properties. A group of particles will each have their own wave packet. In our everyday lives, two particles, even those right next to each other, are jiggling around randomly due to temperature and experiencing slightly different environments. You can think of each of these separate random jiggles as a measurement that collapses the wave function of that particle. Then, after the particle's wave function collapses, it begins to evolve again until the next measurement. As a side note, saying a measurement collapses the wave function is quantum mechanics talk for the observed reality that when we measure where a particle is, we do not find a wave, we find a particle. So, the shorthand for this view of quantum mechanics is that a measurement collapses the wave function.
Ok, so now we have a bunch of particles, like a chair. Why won't a chair tunnel through a wall? Well, all the particles that make up the chair are not just physically separated, but they are jiggling due to their temperature and their slightly different environments. So, all of these particles that make up the chair keep having their wave function collapsed randomly. The chance of one particle tunneling through the wall is small. The chance of all 10^27 particles in the chair independently tunneling through the barrier at once is not going to happen before the universe ends.
Back to coherence. All of these particles of the chair are independently jiggling around, and each one has its own wave function collapsed very quickly. For this reason, you can treat each particle as an independent particle. We would say the wave functions of these particles are not coherent with each other.
Now, imagine that we have two particles right next to each other. At room temperature, they are constantly jiggling and having their wave functions collapsed. If we cool them down to reduce the jiggling, and they are close enough to each other, their respective wave functions can start to overlap. When the jiggling of the particles is small enough and their wave functions overlap sufficiently, they begin to behave as a single quantum entity. This is a coherent state. In a suitably constructed experiment, these coherent particles can then exhibit quantum behaviors such as tunneling together.
Back to your question: is there some maximum size for a group of particles that could be forced to maintain the correct state to pass through the wall? Since the group of particles must be in a coherent quantum state to tunnel, the real question is how big of a group can be put into such a state. You have to cool them to slow the jiggling, isolate them from anything in the environment that might collapse their wave functions, and get them close enough together for their wave functions to overlap. There is likely a theoretical limit that could be calculated, but as a practical matter, extraordinary engineering efforts are required to get even a very small group of particles into a coherent quantum state. The direct answer to your question is that while there may be a theoretical maximum possible size of a coherent state for our universe, the real limit is set by the immense practical challenges of creating and maintaining a coherent state. This is what makes the work of this year’s Nobel Prize winners so impressive.
It so happens that when you solve this problem (a ball bouncing against a wall) in the wave-function paradigm, you end up with a non-zero probability that the ball appears on the other side of the wall.
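A minimal sketch of where that non-zero-but-tiny number comes from, using the standard WKB estimate T ≈ exp(-2L·sqrt(2m(V−E))/ħ) for a rectangular barrier. The masses, barrier heights, and widths below are illustrative assumptions, not values from the comment above:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def tunneling_log10(mass_kg, barrier_height_j, barrier_width_m):
    """log10 of the WKB tunneling probability, assuming E << V."""
    kappa = math.sqrt(2 * mass_kg * barrier_height_j) / HBAR
    return (-2 * kappa * barrier_width_m) / math.log(10)

# An electron facing a 1 eV, 1 nm barrier: small but very observable.
electron = tunneling_log10(9.109e-31, 1.602e-19, 1e-9)

# A 1 kg ball facing a 1 J, 1 cm "wall": suppressed beyond all reason.
ball = tunneling_log10(1.0, 1.0, 1e-2)

print(f"electron: T ~ 10^({electron:.1f})")   # a few times 10^-5
print(f"ball:     T ~ 10^({ball:.2g})")       # ~10^(-10^32)
```

The exponent scales with sqrt(mass) times barrier width, which is why electrons tunnel routinely in devices while a ball never will.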
I'm not sure if there is a deeper explanation at play here but that's how I understand it.
https://pmc.ncbi.nlm.nih.gov/articles/PMC431568/
>Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems [2007]
https://www.nature.com/articles/nature05678
>Explaining the Efficiency of Photosynthesis: Quantum Uncertainty or Classical Vibrations? [2022]
https://pubs.acs.org/doi/10.1021/acs.jpclett.2c00538
>Reassessing the role and lifetime of Qx in the energy transfer dynamics of chlorophyll a [2025]
https://pubs.rsc.org/en/content/articlelanding/2025/sc/d4sc0...
>Full microscopic simulations uncover persistent quantum effects in primary photosynthesis [2025]
https://www.science.org/doi/10.1126/sciadv.ady6751
Can't help you with "a weird paper on the unexpected efficiency of photosynthesis", try asking a biologist at your local university, or possibly an organic chemist.
Tip: this page links to further reading on the older work. Hints galore.
(Still better than last year's award which wasn't really physics at all!)
https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
He also won a Nobel by the way.
Many great discoveries follow from new instrumentation that produces better and novel data; less often do they come from a conceptual leap. This is why Einstein's genius in coming up with relativity is all the more remarkable. Interestingly, he got his Nobel for something else :)
This is a practical implementation of something that was previously only theoretically possible or observed at a very small scale.
There's a tradition of Nobel Prizes awarded for clever experiments, even if they do not uncover new fundamental laws.
Sounds like there were some political shenanigans between them: Martinis was moved into a useless role and took the hint at the height of the COVID lockdown.
[0] https://en.wikipedia.org/wiki/Michel_Devoret
He kept working on the problem. The question now, looking back over six years, is: did Google make a mistake? I've seen this at IBM and other large research divisions, where people who did something significant 20 or 30 years ago in fields like AI become stagnant while burning through tens or hundreds of millions. There was a political battle and Martinis got pushed out. The question I have is whether the people who pushed him out knew the path forward in quantum computing, or just knew how to play office politics and lick the correct behinds.
[0] https://www.forbes.com/sites/moorinsights/2020/04/30/googles...