This particular part of "physicist's history of physics" about quantization and Planck's work, promulgated by some sources, is well-known to be a false account of history and motivations, and has been criticized in mainstream literature before, e.g. by Helge Kragh [1] (and probably by many others). The present authors apparently are not aware of this, which makes me suspicious that they did not do their homework on this topic...
It's so interesting. This isn't the only paper written about this, because I actually came across this same idea last month in a much older paper. Here's an extract from it:
> It might have been thought, by some scientists in the 1890's, that refined mathematical analysis of this kind would play a role in resolving the fundamental problems of classical physics associated with the apparent failures of the equipartition theorem. But that is not what happened.
> Although the quantum hypothesis did dispose of the paradox of specific heats of polyatomic gases, and eliminated the possibility that ether-vibrations (having an infinite number of degrees of freedom) would drain an indefinite amount of energy out of material systems at any finite temperature, these were not the anomalies that provoked the introduction of the quantum hypothesis in the first place. Max Planck was not one of the physicists who worried about the validity of the equipartition theorem before 1900, and the myth that his distribution law for blackbody radiation was concocted merely to escape from an "ultraviolet catastrophe" predicted by the Rayleigh-Jeans law has now been thoroughly demolished. It was Paul Ehrenfest who invented the ultraviolet catastrophe (eleven years after the publication of Rayleigh's and Planck's papers in 1900) in order to dramatize what would have been the consequences of the equipartition theorem if it had been valid for all classical dynamical systems (though neither Rayleigh nor Planck believed that it was).
I have this saved as a note, but can't find the exact source atm. Here's another source though, from the 60s: https://sci-hub.ru/https://doi.org/10.1007/BF00327765
I would be very interested to read this. I found one comment in the abstract a little bit off-putting:
> Planck did not consider this a quantization, but merely a mathematical trick to be able to calculate the entropy of the oscillators.
My understanding was that Planck absolutely understood that his approach would have been a mathematical trick if he took the limit h → 0, but that in stopping at a nonzero small number he was explicitly aware that he was saying something peculiar about the energies in the system, and had strayed far away from that realm of pure mathematics into something that we would today effortlessly identify as quantum, even if that word did not exist at the time.
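To make that point concrete (a textbook-style sketch in modern notation, not Planck's own 1900 presentation): the mean energy he assigned to a material oscillator of frequency ν is

$$\langle E \rangle = \frac{h\nu}{e^{h\nu/kT} - 1},$$

which collapses back to the classical equipartition value ⟨E⟩ → kT only in the limit h → 0 (equivalently hν ≪ kT). Keeping h finite is precisely what makes the result non-classical.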
Often, when inventing something new, it can be difficult to assess how novel the "new" thing is, particularly when there isn't yet a word for it. Planck may have simply believed that this hinted towards something like an "atom" of light. Atoms were after all a relatively recent discovery in 1827, so why couldn't you have an atom-like construct for light? And why would light atoms be any different from regular atoms?
While we now know a great deal about this topic, placing such speculation in a paper would be problematic. Hypothesizing that such quantizations are common for other quantities would be even more problematic.
EDIT: Removed erroneous mention of Michelson-Morley instead of the Millikan oil drop experiment.
> Atoms were after all a relatively recent discovery in 1827
Atoms were controversial well past that. Their existence was not definitively settled until Einstein's work on Brownian motion in the early 20th century (which, incidentally, is what he won the Nobel Prize for, not relativity — Update: turns out I got that wrong. See child comment.). The controversy around the atomic theory was so intense it drove Ludwig Boltzmann to suicide [1].
The question of whether light is a particle or a wave dates back to at least Newton (who thought it was a particle) and Huygens (who thought it was a wave). By the end of the 19th century, before Einstein brought up the photoelectric effect, the consensus opinion was pretty firmly on the "wave" side of the dispute, and apparently Planck was not an outlier. See another comment: https://news.ycombinator.com/item?id=39349027
Even though this is tangential, I think it's important to note that this experiment should be called the Millikan-Fletcher oil drop experiment, to acknowledge the contribution of Harvey Fletcher, the grad student who was coerced into relinquishing credit for it in order to receive his PhD.
This paper is nice but appears to stretch its result quite a bit.
First, the authors make a general claim about "most physics textbooks" without providing a single example. I think one will often encounter more nuanced statements in the better and more widely used textbooks.
And I think the paper sorely lacks evidence for the general claim in the concluding sentence: "The idea that physics progress through a series of crisis, is hard to defend." Not only do they present only a single example, but even in that case one could claim that the "crisis" started after the discovery of Planck's formula! After all, it fitted the data supremely well but required this mystery constant: h.
It took physicists a quarter century to resolve the deeper meaning of Planck's constant. If that was not a crisis in physics then I do not know what would qualify as one.
I wish it were otherwise, but there are some weird dynamics in the textbook industry that reward looking like other textbooks more than accuracy and usefulness.
A classic essay showing this is "The Case of the Creeping Fox Terrier Clone" in Gould's book "Bully for Brontosaurus", which traces the history of textbooks comparing Hyracotherium to a fox terrier. This, despite the fact that most students and authors have absolutely no real idea how big a fox terrier is. And it wouldn't help if they did know, given that Hyracotherium actually weighed over twice as much!
Another classic essay showing how textbooks repeat other textbooks without properly questioning what should be taught is https://web.williams.edu/Mathematics/lg5/Rota.pdf. If you've taken some variant of the differential equations course that he discusses, I highly recommend reading his essay. I guarantee that it is far from the only standard course with such levels of silliness.
> Another classic essay showing how textbooks repeat other textbooks without properly questioning what should be taught is https://web.williams.edu/Mathematics/lg5/Rota.pdf. If you've taken some variant of the differential equations course that he discusses, I highly recommend reading his essay. I guarantee that it is far from the only standard course with such levels of silliness.
Thank you for sharing Rota's delightful lecture! I wish more professors/instructors took a critical look at their material, which is nearly always taught in an order that was put in place for some now utterly irrelevant or forgotten reason. If science progresses at the speed of the hearse, the teaching of scientific subjects barely moves at all.
I was curious, so I grabbed three undergraduate-level physics texts I had nearby.
One explicitly recites the Ultraviolet Catastrophe prompted Planck story, complete with Rayleigh's incomplete formula.
One essentially matches the story in section 2, using the lesser version of Rayleigh's formula, but (just like the story) does not explicitly tie Planck's work to it. (That textbook notes "an act of desperation" is a quote from one of Planck's letters.)
The third one is interesting! It says that "late nineteenth century physicists tried to understand the shape of the blackbody spectrum [...] using their knowledge of thermodynamics and electromagnetic waves. Their efforts ended in failure." This third text never mentions Rayleigh by name and doesn't specifically show "Rayleigh's Lesser Formula", but it does graph that formula vs. the observed blackbody radiation (interestingly, as a function of frequency instead of wavelength).
The text then eventually says that in 1900, Planck used a photon argument "to make a theoretical prediction that is in excellent agreement with the experimental spectrum". It does not explicitly state cause and effect, but it's kinda implied from the structure of the writing.
Reading into the third text a smidge, it feels like the result of wanting to use the Rayleigh/Catastrophe story and yet knowing it wasn't quite true.
The third source is still wrong, as Planck certainly would not agree that he used a photon argument to make a prediction. He tried to explain the existing experimental data on blackbody radiation, which exhibited the spectral peak and agreed with the Rayleigh-Jeans and Wien laws in the two frequency limits. So it was not a prediction, but an explanation of an observed thing.
And he did not believe in photons; he interpreted his work in terms of classical EM radiation obeying some entropy condition, and the quanta of energy that he used were considered either a mathematical trick to make calculations with that entropy or, in his later theories, a condition on the emission process only. He never assumed or believed that EM radiation consists of quanta.
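For reference, the "two frequency limits" mentioned above can be read off directly from the modern form of Planck's law (a sketch in today's notation, not Planck's 1900 presentation). The spectral radiance

$$B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/kT} - 1}$$

reduces to the Rayleigh-Jeans form $2\nu^2 kT/c^2$ for $h\nu \ll kT$ and to Wien's form $(2h\nu^3/c^2)\,e^{-h\nu/kT}$ for $h\nu \gg kT$. Extrapolating the classical low-frequency form to all frequencies grows without bound, which is the divergence Ehrenfest later dubbed the ultraviolet catastrophe.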
It's not a stretch at all in my opinion, not only was I taught this myself (multiple times at two separate universities in the US) but there was a paper addressing this myth as far back as the 60s.
Though I do agree with your second point. I can name two crises off the top of my head: the black hole information paradox, and the galaxy rotation curves which some claim are due to dark matter. One is a major theoretical problem, and the other is a major experimental problem. And though crises they may be, it's not like people are running around with their hair on fire.
Realistically, the UV catastrophe was not a crisis. Simply because pretty much nobody understood its implications. It was like: "Quantization of light makes this formula work? OK, whatever".
At that time, the main catastrophe was caused by the null results in the search for the luminiferous aether (Michelson-Morley experiments).
At the same time, questions about the structure of matter and the composition of the atom were another crisis point. Earnshaw's theorem shows that a static arrangement of charges can't be held together by electrostatic forces alone.
I do not think calling out specific books would be a remotely good idea, on many different levels. There are so many examples that it makes no sense to enumerate any. Virtually every explanation I know starts with that myth.
It's very common in YouTube physics education videos. My takeaway from them was exactly what the article states: that the ultraviolet catastrophe was observed, and that the study of it led to quantum physics because it was noted that it could only be explained if things existed in discrete energy levels.
I'm not even entirely sure the article establishes that it was false. The wrong equations really did have a problem with an ultraviolet catastrophe. If the physicists of the time don't seem to have been running around panicking about it, it's because equations being not entirely correct while still in the process of being refined was, even by then, a relatively mundane thing, obviously part of the process.
We (hopefully) historically stand in a similar position with regard to General Relativity and Quantum Mechanics. We know they don't go together. We don't know what the correct answer is. It's an understood problem. But it's not like physicists spend their days running around and shrieking and breaking down into tears about it, and in the meantime, we get on with using GR & QM to predict things.
It may be too strong to say "Physicists observed this issue with the equations and their freakout about them directly led to quantization." But it was a real problem with the equations, and it's certainly related to what led to quantization, and if the story glosses over yet another instance of what a physicist perceived as a mathematical convenience turning out to be quite physically real, I'm not sure that's a vital detail for every high school student.
Is it possible there’s some confusion here about the understanding (then and now) of “catastrophe”?
As I understand, it wasn’t meant to mean that the theory had a catastrophic flaw, but rather that the infinite energy implied at the asymptote itself represented an (obviously unobserved, thus curious) physical catastrophe.
I agree with your characterization that an unresolved catastrophe of the latter kind does not imply a crisis of science the way an unresolved “catastrophe” of the former kind might.
Could be. My understanding of the term is the same as yours, but reading it the wrong way would fit the facts, and I have to admit in general I can't be too critical of such a reading. The physics sense of "catastrophe" in use here is pretty obscure; I'm not sure I can think of another instance of it I've come across in English.
“Catastrophe” or “crisis” have a long history of use for sudden change phenomena, especially associated with some major failure or blow up of a previous pattern of system behavior.
“Critical” may be a more familiar word used in similar contexts, homed in on a specific threshold of dramatic behavior change.
None of these words in this usage style refer to the scientific social process, but to the phenomena.
Please note that the authors are Norwegians at a Norwegian university, and the first citation is "KVANTEFYSIKKENS UTVIKLING i fysikklærebøker, vitenskapshistorien og undervisning" by Reidun Renstrøm at the University of Oslo, all proclaiming the myth to be pervasive.
I would be careful about proclaiming this to be some kind of American phenomenon.
I have basically no physics education, but I was educated in the US. In my high school, where I took an ordinary, low-quality physics course (the one for weak students that didn't require calculus), the introduction of quantum mechanics was motivated by the photoelectric effect. Now, like I said, I don't really have any physics education and I don't really understand the photoelectric effect _or_ quantum mechanics, but my basic recollection was (waves hands) you shine a light on certain materials and electrons pop out, and it looks for various reasons like this behavior is packet-y rather than smooth as one might expect.
Basically in an intro American physics class, if they've got an opportunity to get Einstein involved, they're gonna take it.
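For what it's worth, the "packet-y" behavior described above is usually summarized by Einstein's photoelectric relation (standard textbook form): the maximum kinetic energy of an ejected electron depends on the light's frequency rather than its intensity,

$$E_{\max} = h\nu - \phi,$$

where φ is the work function of the material, and no electrons are ejected at all below the threshold frequency φ/h, however bright the light.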
I've never heard of this myth either.
FWIW, the authors, Nils-Erik Bomark and Reidun Renstrøm, also appear to work at .no universities.
Same here. I studied it in 2 different countries, and it was always presented as "trying to explain black body radiation". I think the UV catastrophe was merely mentioned as a side note.
I've seen many mentions of the Ultraviolet Catastrophe, but I don't remember reading that it directly caused Planck to look for a (quantized) solution. That seems like a historical justification that a QM physics class doesn't need. I looked through both Liboff and Baym and don't see even a mention of ultraviolet (other than in problem sets) or catastrophe, but maybe I missed it. These are searchable.
I don't think the Mechanical Uselessverse (as we referred to it) would be a text for these schools, though it was produced at Caltech and it apparently does refer to the ultraviolet catastrophe. I think you're more likely to find it in videos and narrative historical materials where story is more important.
Douglas Hofstadter has a talk on this topic called "Albert Einstein on Light; Light on Albert Einstein" that I often revisit: https://www.youtube.com/watch?v=ePA1zq56J1I (watch 10:30 to 12:00 if you're pressed for time, but I recommend the whole thing)
Indeed. I recommend also checking out the 2nd volume of Einstein's collected papers. There is one with the proceedings from a conference on the subject, and Einstein is basically alone in trying to convince his peers, including Planck, of the reality of "quanta" of light, independent of the process of emission and absorption or mathematical tricks.
Agreed; my professors did use Planck's trick to introduce quantization, but made it very clear that he was just fitting the data and thought the discretization would disappear with further analysis.
Seemingly also the Wikipedia article (https://en.wikipedia.org/wiki/Ultraviolet_catastrophe) claims the pop-sci ordering: “As the theory diverged from empirical observations when these frequencies reached the ultraviolet region of the electromagnetic spectrum, there was a problem.[3] This problem was later found to be due to a property of quanta as proposed by Max Planck: There could be no fraction of a discrete energy package already carrying minimal energy.”
Pedagogically, this is an argument against teaching physics using the historical development model. You end up with post hoc arguments and simplified narratives, and I think it just makes life harder for undergraduate students. Maybe 'history of science' should be its own subject?
Some textbooks (e.g. Molecular Quantum Mechanics, Atkins & Friedman) take a more nuanced view. They present failures of classical calculations of the heat capacity of solids near absolute zero side by side with blackbody radiation:
> "Einstein recognized the similarity between this problem and black-body radiation, for if each atomic oscillator required a certain minimum energy before it would actively oscillate, then at low temperatures some would be inactive and the heat capacity would be smaller than expected."
Debye improved the theory by allowing atoms to oscillate with different frequencies. So looking back, one can say matter appears to be quantized, and this shows up at low temperatures, and radiation appears to be quantized, and this shows up at high frequencies - which is a nice symmetric argument, visible in hindsight, that probably helps students grasp the concept of the quantized harmonic oscillator (and why they need to learn about it).
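As a sketch of that heat-capacity point (Einstein's 1907 model, written here in modern notation): treating each atom as three oscillators of a single frequency ν gives a molar heat capacity

$$C_V = 3R\left(\frac{\theta_E}{T}\right)^2 \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T} - 1\right)^2}, \qquad \theta_E = \frac{h\nu}{k},$$

which approaches the classical Dulong-Petit value 3R at high temperature but falls toward zero as T → 0, qualitatively matching the low-temperature measurements; Debye's distribution of frequencies then gets the quantitative low-temperature T³ behavior right.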
One major development was Bose deriving Planck's radiation law using quantum statistical arguments (and no classical physics), with further development by Einstein c. 1924 - but this might be a difficult place to start from, teaching-wise.
The myth is also promoted in Chapter 3 of The Making of the Atomic Bomb by Richard Rhodes:
> Planck had taught at Berlin since 1889. In 1900 he had proposed a revolutionary idea to explain a persistent problem in mechanical physics, the so-called ultraviolet catastrophe
This video talks about this story and in particular acknowledges that it was called the ultraviolet catastrophe after Planck: https://youtu.be/gXeAp_lyj9s
Funny, I spent Sunday afternoon watching youtubers talk about this and they all pretty much said that the catastrophe predated Planck's corrections (or caused them). Is this wrong, or is it just pedantic?
As a physics layman, I learned this part of history from the book "Quantum" by Manjit Kumar, which as far as I can tell got the Planck bit right and covered his black body work correctly, FWIW.
Besides the myth busted in this paper, that the (actually later) work of Rayleigh could have influenced Planck, there is another incorrect myth: that Planck introduced "Planck's constant" in his publication from 1900, where he presented the derivation of the Planck formula from the supposition that the emission and absorption of electromagnetic radiation are quantized.
This frequently seen claim is also wrong. Planck introduced his constant, and also computed its value with excellent precision for that time (4% relative error), in an earlier work published in 1899:
Max Planck, "Ueber irreversible Strahlungsvorgaenge", "Sitzungsberichte der koeniglich preussischen Akademie der Wissenschaften zu Berlin. Jahrgang 1899", pp. 440-480.
There Planck presented derivations of the formulae previously established by Wien for blackbody radiation, in which he replaced Wien's empirical constants with functions of other universal constants and of the new universal constant he introduced.
Already Maxwell, a quarter of a century earlier, had shown that it is possible to determine the units for all physical quantities with a single arbitrary choice (in his example, the wavelength of the yellow light emitted by sodium vapor).
In 1899, Planck showed that the law of blackbody radiation provides an additional relationship between the units of length, time and energy, which, together with the previous relationships considered by Maxwell, can determine the units of all physical quantities without any arbitrary choice.
So at the end of this work from 1899, where Planck's constant was introduced, he also presented the system of natural units now known as the Planck units.
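For reference, the units in question are, in modern notation (Planck's 1899 paper used his original constant and his own symbols):

$$\ell_P = \sqrt{\frac{\hbar G}{c^3}}, \qquad t_P = \sqrt{\frac{\hbar G}{c^5}}, \qquad m_P = \sqrt{\frac{\hbar c}{G}},$$

so that fixing c, G and the new constant fixes the units of length, time and mass with no arbitrary human-scale choice.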
Nevertheless, the system of Planck units cannot be used as the basis of a practical system of units, because the uncertainty in measuring the Newtonian constant of gravitation is huge. This renders useless one of the equations that connect the units of length, time and energy.
Because of that, any practical system of units must contain a single arbitrary choice of a unit, which in the case of SI is the frequency of a certain hyperfine transition of the cesium-133 atom, while all the other units result from this choice by adopting conventional values for the universal constants, except for the Newtonian constant of gravitation, which must be measured experimentally (some constant determining the intensity of the electromagnetic interaction, e.g. the fine structure constant, must also be measured experimentally, but for that the uncertainty is extremely low).
BTW, another extremely frequent incorrect claim about Planck's constant is that it is a quantum of action. This is very wrong: it is a quantum of angular momentum (the ratio between energy and frequency is an angular momentum, as is the ratio between their integrals, i.e. between action and plane angle). The origin of the mistake is that many follow the suggestions of the recent SI brochures (there was a resolution adopted by vote that the unit of plane angle is not a base unit, which is equivalent to establishing by vote that 2 + 2 = 5) and omit the unit of plane angle in dimensional formulae. In that case it appears that the unit of action is the same as the unit of angular momentum, but they are not the same, as any attempt to change the unit used for plane angles (e.g. between radians and degrees or cycles) would demonstrate.
The original constant of Planck corresponds to plane angles measured in cycles, while the so-called h bar is the same constant converted to correspond with plane angles measured in radians.
At that time the use of radians was still uncommon in physics.
The frequency of periodic events was measured at that time in cycles per second. The unit "cycle per second" was renamed "hertz" much later: the renaming was first proposed in 1935, but it was adopted in SI only in 1960.
Because the use of cycles per second (now hertz) was entrenched, the promotion of the radian as the preferred unit for plane angle caused a split in the SI system of physical quantities, between the frequency of periodic phenomena measured in cycles per second and the angular velocity measured in radians per second.
This split is a big cause of inconsistency in SI, because "frequency of periodic phenomena" and "angular velocity" are just 2 names for the same physical quantity, and the "hertz"/"cycle per second" and the "radian per second" do not belong in the same consistent system of units: the former corresponds to the choice of the cycle as the unit of plane angle, while the latter corresponds to the choice of the radian.
Thus the SI system of units is a mixture of units from 2 distinct systems of consistent units. There are a number of physical quantities in SI for which 2 units are used, one derived from the cycle and one derived from the radian. In some cases distinct names are used to show the intended unit, like "frequency" and "angular velocity", while in other cases there are no distinct names. The non-SI unit of angular velocity, "rpm", i.e. "rotations per minute", is another name for "cycles per minute".
All the units of physical quantities related to rotations, including the angular momentum, depend on the unit chosen for the plane angle, so they must be multiplied or divided by conversion factors (i.e. 2 times pi for conversion between cycle and radian), whenever the unit for plane angle is changed.
The use of the radian is a cause of confusion for those who learn about the physics of rotation, because they learn, for instance, that the angular momentum and the torque are proportional to the radius to the center of rotation.
This is false and it appears to be true only when the radian is chosen as the unit of plane angle. In reality, the factor of conversion between the physical quantities that refer to linear motion and those that refer to rotations is not the radius, but the ratio between the arc length and the central angle corresponding to the arc. This ratio happens to be equal to the radius only when the central angle happens to be measured in radians.
The correct conversion factors must be used to convert the values of a physical quantity like angular momentum between systems with different units for the plane angle.
While in the system with plane angles measured in radians the angular momentum is the linear momentum multiplied by radius, in the system with plane angles measured in cycles the angular momentum is the linear momentum multiplied by perimeter. The same for torque.
When you divide a kinetic energy by a frequency expressed in cycles per second, you get an angular momentum that is equal to the product of linear momentum and perimeter, corresponding to the cycle as the unit of angle.
When you divide a kinetic energy by a frequency expressed in radians per second, you get an angular momentum that is equal to the product of linear momentum and radius, corresponding to the radian as the unit of angle.
These 2 variants correspond to the alternative "h" and "h bar" values of the Planck constant. The same quantum of angular momentum corresponds to both choices (i.e. twice the spin of the electron); it is just expressed in different units.
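A small numerical illustration of that last point (a sketch; the photon frequency is an arbitrary value picked for the example):

```python
import math

h = 6.62607015e-34   # Planck constant in J*s, with the cycle as the implicit angle unit
nu = 5.0e14          # an arbitrary photon frequency, in cycles per second (Hz)

E = h * nu                 # photon energy in joules
omega = 2 * math.pi * nu   # the same frequency expressed in radians per second

# Dividing the same energy by the two expressions of the frequency
# yields the two forms of the constant:
print(E / nu)                   # 6.626...e-34 -> h    (cycle as the angle unit)
print(E / omega)                # 1.054...e-34 -> hbar (radian as the angle unit)
print(E / omega * 2 * math.pi)  # recovers h, since h = 2*pi*hbar
```

Same physical quantity either way; the factor of 2π is just the cycle-to-radian conversion.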
The Planck institute sent some borderline animal to my city where he proceeded to criticize everything and everyone he saw. He directed some blah blah department under their purview. Completely unimpressed here.
Yet another example where it is tempting to retrofit a modern understanding onto a historical debate. We're tempted to do this because when you're embedded in the modern worldview, it is hard to remember that others were once possible. And it is tempting to believe that history was a straight arrow to modern truths. In fact it was seldom such a straight path.
Kuhn complained about this in The Structure of Scientific Revolutions. When trying to teach the history of science to scientists, you have to work to get them to stop trying to think the "correct" way, so that they can understand the actual historical debate.
One could easily adopt the idea that the history of science is "Some smart guy figured this out" over and over again.
The real history of science is: A lot of people became interested in problems and worked on theory and test apparatus and put their ideas into public discussion and eventually and sometimes suddenly we developed narratives and equations that explain observations. Along the way there was a lot of contention and conflict.
It is easy to retrofit many simple stories onto science.
In this case the "smart guy who figured it out" usually didn't understand the discovery in the same way that we do today. Something was figured out, but typically not in the modern glory that we explain it with today.
The equations don’t just congeal out of the air, no matter how many people are thinking about the problems. They are indeed the results of some smart people figuring things out.
I wonder if it might be pop-sci vs science, rather than modern vs historical.
Physicists (from the outside at least) have always seemed more like hunters than the town watch, they go looking for the problems. There isn’t some catastrophic looming threat of physics approaching that they have to deal with, haha. Unexplainable data is an opportunity and all that.
What I'm talking about is very much modern vs historical.
To take a trivial example, most of us would like to draw a straight line from Darwin writing On the Origin of Species to the current acceptance of his theory of evolution. We have no particular desire to follow how Darwin's work inspired Francis Galton to study heredity. Unfortunately Galton discovered regression to the mean when he did. Further experiments undermined Darwin's theories as they uncovered evidence for "natural types". The result was that a half-century after Darwin's great book, many scientists doubted Darwin's theories.
But then R. A. Fisher managed to explain the mess with population genetics based on Mendel's theories. "Natural types" disappeared from the literature, and Darwin was back. Today Galton is likely to be remembered as a dilettante who invented the idea of eugenics. And Fisher as a genius in statistics. We retrofit a story with heroes (Darwin and Fisher) and villains (Galton). We skip over the bad parts, and focus on the good.
In the process we forget that Darwin also took it for granted that blacks must be inferior to whites. And that Fisher was also a supporter of eugenics. And that Galton set out to confirm Darwin, then accepted the data that he encountered.
We want a story, not a mess. But history is full of messes. Arranging the right ideas in the right ways involved a whole lot of trial and error that wasn't obvious at the time. While some of us enjoy learning about the history, it actually isn't very helpful for scientists. Because there is little point in learning every wrong idea that people used to hold, only to immediately learn that you can forget it again because it was wrong.
But while that exercise does not help us learn what is currently known, maybe it can help give us more humility about what it is we think we know today?
> Because there is little point in learning every wrong idea that people used to hold, only to immediately learn that you can forget it again because it was wrong.
Of course there's a point, one you suggest yourself. "If great scientists like Darwin could be wrong about X, is there a chance that I'm wrong about Y?"
Quantum physics: where the only certainty is that even scientists are uncertain, but don't worry, nobody understands it, not even the scientists themselves!
[1] https://dept.math.lsa.umich.edu/~krasny/math156_article_plan...
[1] https://paperpile.com/blog/ludwig-boltzmann/
(I think he could have won it for the theory of Brownian motion too!)
My work with Millikan on the oil-drop experiment - https://web.archive.org/web/20160128151252/http://www.cce.uf...
I think naming 3 of them would be better than naming 0 of them.
I find it strange that many people in this thread are gainsaying the existence of this narrative. It's surely a commonplace part of physics education?
Really? When there was Mercury's perihelion mystery at the exact same period? (Which got us general relativity.)
But the actual story as described in the paper is vaguely familiar. Before reading it my mind wandered to Einstein and quantization of light.
Is this mainly a US myth perhaps?
Rather than merely that he saw an apple fall nearby, and wondered why it fell downwards.
EDIT: both at high school and across multiple different lecturers at university
https://archive.org/stream/LIBOFFIntroductoryQuantumMechanic...
https://archive.org/details/lecturesonquantu0000baym/mode/2u...
https://archive.org/details/beyondmechanical0000olen/mode/2u...
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statisti...
Most importantly, is the wikipedia page correct:
https://en.wikipedia.org/wiki/Ultraviolet_catastrophe
It was a good read.
My German is bad; is that from that publication? Are there any close ones in other languages?
Asking for a friend:
https://tauday.com/tau-manifesto
I will be more careful in these situations to also consider the "other" system of units than the one they used, and see if that illuminates things.
Thanks!
But, like, you know how Newton discovered gravity when an apple fell on his head? Totally true, pinky swear!
Physics has the opposite problem - they know it's wrong but they can't find a way to disprove it. It's too good.