Right now, in early 2026, the biggest grid operator in the United States is warning of a power shortage that its market monitor has called a "crisis stage." Electricity bills rose nearly 7% last year — more than double the inflation rate — and Goldman Sachs says they will keep rising through the decade. The cause is not a shortage of coal or gas. It is artificial intelligence. A single AI data centre consumes as much electricity as a small city, and hundreds are being built simultaneously. Bernie Sanders and Ron DeSantis — two politicians who agree on virtually nothing — have both called for moratoria on new data centre construction. The AI industry's answer? Nuclear fusion. Billions of private dollars are flowing into fusion startups. Fortune magazine is running headlines about the "holy grail of energy." And nuclear fusion has been thirty years away for the last hundred years.
In Series 01 of The Physical Universe, we established the deeper reason this keeps happening: modern science has not delivered the breakthroughs needed to free us from tearing the Earth apart for energy and resources. We have built a Mechanical Universe — a vast and extraordinarily useful system of equations — but we have mistaken the equations for the reality beneath them. Series 01 introduced force and energy as the first examples: neither is a physical entity. Both are mathematical bookkeeping for effects whose physical causes we have never fully explored.
Series 03 goes deeper into exactly that problem — not through the lens of energy policy, but through a single, elegant experiment that has sat in physics curricula for over a century, quietly teaching the wrong lesson about what science actually knows. That lesson has consequences far beyond the classroom. How we teach what science has explained shapes what science believes it still needs to explain. And there is one thing above all others it has stopped trying to explain — with costs that are now, in 2026, impossible to ignore.
The Experiment That Has Been Quietly Misleading Us
In 1914, two German physicists — James Franck and Gustav Hertz — ran an experiment so elegant it would later win them the Nobel Prize. They fired electrons at mercury vapour and watched what happened. The current dipped. It rose. It dipped again. Perfectly periodic. Every 4.9 volts, without fail.
It was, by any standard, a beautiful result. The physics community had an explanation within the decade. Bohr's atomic model said atoms absorb energy in discrete chunks — quantum leaps. The periodicity of that curve fit the story perfectly. Case closed. Textbooks written. Nobel awarded. Move on.
Except — here is the thing nobody tells you in undergraduate physics — Franck and Hertz themselves got the explanation wrong.
They thought they were measuring the ionization potential of mercury. They were not. It was Bohr himself who had to write to them and explain what their own data actually meant. The discoverers of one of quantum mechanics' most celebrated experiments misunderstood their own discovery. And then science moved on, stamped it "explained," and handed it to generations of students as settled truth.
The most dangerous words in science are not "we don't know." They are "we already know."
This is the story science doesn't like to tell about itself. Not the triumphant narrative of discovery, but the quieter, more uncomfortable one — of explanations that calcify into dogma, of "consistent with" being quietly promoted to "proven by," and the enormous, invisible cost of intellectual comfort. And underneath all of it, an even more unsettling truth: every single explanation physics has ever offered for what happens inside that mercury tube is a mathematical abstraction. A model. A map. Not the territory.
The Experiment That Explains Too Much
Walk into any physics classroom and ask what the Franck-Hertz experiment proves. You will be told, with complete confidence: it proves that atomic energy levels are quantized. It validates Bohr's model. End of story.
But look carefully at what the experiment actually does. It fires electrons through mercury vapour at a controlled density, in a near-vacuum tube, at increasing voltages, and measures whether those electrons make it to the other side. The current drops periodically because below 4.9 eV of electron energy the collisions are elastic — the atom bounces back unchanged — while above it the atom absorbs exactly 4.9 eV and the electron forfeits that kinetic energy.
What the experiment measures, in the most honest possible description, is how matter in a specific physical state responds to electron bombardment at specific densities and energies, and what electromagnetic signatures that interaction produces. Notice what is absent from that description: any claim about why energy levels are discrete. Any claim about what the electron is doing inside the atom. Any claim about orbits, wavefunctions, or probability clouds. The experiment observes a threshold. It measures a spacing. It detects photons. That is all.
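That honest description is simple enough to sketch as a toy model: an electron accelerated through V volts sheds 4.9 eV per inelastic collision, and reaches the collector only if its leftover energy beats a retarding potential. The 1.5 V retarding value below is an illustrative assumption, not a parameter of the 1914 apparatus.

```python
# Toy model of the Franck-Hertz collector current (illustrative only).
# An electron gains v_accel eV from the accelerating voltage, dumps
# 4.9 eV into a mercury atom as many times as it can afford, and must
# still climb a retarding potential to register as collector current.

E_EXCITE = 4.9   # mercury's first excitation energy, in eV
V_RETARD = 1.5   # assumed retarding potential (illustrative value)

def collector_current(v_accel: float) -> float:
    """Relative collector current at a given accelerating voltage."""
    if v_accel <= 0:
        return 0.0
    n_collisions = int(v_accel // E_EXCITE)        # full quanta shed en route
    residual = v_accel - n_collisions * E_EXCITE   # energy left at the grid
    return residual if residual > V_RETARD else 0.0

# Dips recur every 4.9 V: electrons reaching the grid with almost no
# residual energy cannot climb the retarding potential.
for v in [4.0, 5.5, 9.0, 10.5, 14.0]:
    print(f"{v:5.1f} V -> relative current {collector_current(v):.2f}")
```

The real curve is smeared by thermal spread and tube geometry, but the periodicity — the only thing the experiment strictly reports — survives any such model.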
The experiment is consistent with Bohr's model. It does not derive from it, and it certainly does not prove it. These three phrases are not synonyms — and collapsing the difference between them is where science goes to die quietly. More importantly: Bohr's model, like every model that followed, is a mathematical description of behaviour. It is not a window into what an atom is.
Bohr's model, it turns out, has catastrophic flaws the Franck-Hertz experiment never addresses. It assumes circular orbits that conflict with Heisenberg's uncertainty principle. It works reliably only for hydrogen — a single-electron atom. Mercury has 80 electrons. The model cannot derive mercury's 4.9 eV excitation from first principles. It cannot explain which transitions are allowed and which forbidden. It cannot explain why the atom emits a photon at all when it de-excites.
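For contrast, here is the one place the model does work. Bohr's formula E_n = -13.6 eV / n² reproduces hydrogen's spectrum, including the red Balmer line near 656 nm, yet contains no handle that could ever be turned to produce mercury's 4.9 eV threshold. A minimal check:

```python
# Bohr's hydrogen energy levels: E_n = -13.6 eV / n**2. This one-variable
# formula nails hydrogen and says nothing about an 80-electron atom.

RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, in eV

def bohr_level(n: int) -> float:
    """Bohr energy of hydrogen level n, in eV (negative: bound state)."""
    return -RYDBERG_EV / n**2

# Balmer-alpha: the n=3 to n=2 transition, the familiar red hydrogen line
delta_e = bohr_level(3) - bohr_level(2)
print(f"n=3 -> n=2 transition energy: {delta_e:.3f} eV")  # -> 1.890 eV
```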
And yet "the Franck-Hertz experiment proves Bohr's model" remains standard curriculum in many institutions today. The explanation felt complete. So nobody looked too hard at the seams.
Three Scientists Walk Into the Same Data
Consider the extraordinary sequence of events surrounding this one experiment. Franck and Hertz collect their data and conclude they have measured ionization. Bohr reads the same data and sees excitation. A generation of physicists uses both men's work to build quantum mechanics. Then Schrödinger arrives, dissolves Bohr's neat circular orbits into probability clouds, and suddenly the "explanation" of the Franck-Hertz experiment has to be rewritten entirely.
The data never changed. The 4.9 volt spacing was always 4.9 volts. The UV photons at 254 nanometres were always there. What changed — repeatedly, dramatically — was the story we told about why.
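Those two constants — the 4.9 volt spacing and the 254 nanometre line — are one fact expressed in two units, related by lambda = hc / E. A short check, using CODATA values for the constants:

```python
# One observation, two units: the excitation energy absorbed from the
# electron equals the energy of the UV photon emitted on de-excitation,
# so lambda = h * c / E.

H = 6.62607015e-34    # Planck constant, J*s (CODATA)
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength in nanometres for a transition of energy_ev."""
    return H * C / (energy_ev * EV) * 1e9

print(f"{wavelength_nm(4.9):.0f} nm")  # -> 253 nm; the measured line is 253.7 nm
```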
And here is where it gets philosophically vital: each successive story was more complete, more predictive, and more mechanistically satisfying than the last — but only because someone refused to accept the previous story as final. Schrödinger didn't look at Bohr's model and say "good enough." He looked at it and saw the gaps. Why circular orbits? Why quantized angular momentum specifically? Where do the selection rules come from that govern which transitions are even allowed?
Every "complete" explanation in science is really just a temporarily comfortable campsite on an infinite trail.
Bohr's model cannot tell you why the dominant transition in mercury is 6s to 6p and not 6s to 6d, even though both levels are energetically reachable. Schrödinger's wavefunction model can — through quantum mechanical selection rules that fall directly out of the mathematics. That is not a minor clarification. That is a fundamentally richer understanding of what is happening inside that vacuum tube.
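The rule doing that work is the electric dipole selection rule, Delta-l = +/-1, which emerges from the wavefunction mathematics and has no counterpart in Bohr's picture. A sketch, keyed only on the orbital letter of a term label:

```python
# Electric dipole selection rule: a transition is allowed only when the
# orbital angular momentum quantum number l changes by exactly 1.

L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}  # orbital letter -> quantum number l

def dipole_allowed(initial: str, final: str) -> bool:
    """True when the Delta-l = +/-1 rule permits the transition."""
    return abs(L_OF[final[-1]] - L_OF[initial[-1]]) == 1

print("6s -> 6p:", dipole_allowed("6s", "6p"))  # -> True  (the dominant line)
print("6s -> 6d:", dipole_allowed("6s", "6d"))  # -> False (dipole-forbidden)
```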
And then even Schrödinger runs out of road. His equation describes the states but doesn't fully explain spontaneous emission — why the atom emits a photon at all when it drops back down. For that you need Quantum Electrodynamics. For heavy atoms like mercury, where relativistic effects reshape the electron structure, you need the Dirac equation. The rabbit hole doesn't end.
The Hierarchy Nobody Shows You
Here is what the textbook should say — but almost never does — about the relationship between models and the Franck-Hertz experiment. Bohr's model (1913) says the energy levels are discrete and the 4.9 eV spacing is a jump between two of them; it cannot say which jumps are allowed, and it cannot handle any atom heavier than hydrogen. Schrödinger's wavefunctions (1926) make the levels, and the selection rules, fall out of the mathematics; they cannot explain why the excited atom emits a photon at all. Quantum electrodynamics, with the Dirac equation for heavy atoms like mercury, supplies the spontaneous emission and the relativistic structure; it is the deepest account we have, and it is still a model.

The same experimental result. Three completely different levels of understanding. The third was only reachable because the people who built it were dissatisfied with the second — which was only reachable because someone was dissatisfied with the first.
Intellectual dissatisfaction is not a bug in science. It is the entire engine.
The Map Is Not the Atom
Here is the confession that sits beneath all of this — the one that physicists know intimately but rarely say aloud in public, and almost never say in a first-year lecture hall: every explanation we have ever given for the Franck-Hertz experiment is a mathematical abstraction. Every single one. Not a description of what is actually happening. A model. A map. A human-made tool for predicting outcomes.
Bohr drew circular orbits. Beautiful, crisp, geometric circles that an electron was supposed to occupy without radiating energy, in defiance of classical electromagnetism, for reasons he could not explain. Those circles were never real. No one ever saw a circular orbit. No instrument ever detected one. The circle was a mathematical convenience that happened to predict the right energy levels for hydrogen — and nothing more.
Then came Schrödinger. He replaced the circles with wavefunctions — ψ, a complex-valued mathematical function spread over all of space. More powerful, more predictive, more elegant. But here is the question that Schrödinger himself could not answer, and that remains unsettled today: what is ψ, physically? Is it a real wave, something that genuinely ripples through space? Is it just a probability ledger — a bookkeeping device for predicting measurement outcomes? Does it represent something that exists, or only our knowledge of something that exists?
There is no wavefunction surrounding a mercury atom. There is a mercury atom. The wavefunction is what we write on paper to predict what happens when we disturb it.
This is not a fringe concern. It is the central unresolved problem of quantum foundations. The Copenhagen interpretation — the dominant teaching framework — essentially says: don't ask. The wavefunction is a calculational tool. Asking what it "really is" between measurements is a category error. Shut up and calculate.
That instruction — shut up and calculate — is the most mathematically successful and philosophically evasive stance in the history of human thought. It works. The predictions are extraordinarily accurate. And it completely sidesteps the question of what is actually going on inside that vacuum tube when an electron collides with a mercury atom.
Consider what quantum electrodynamics actually tells you about the Franck-Hertz experiment at its most fundamental level. An electron — itself a quantum field excitation, not a tiny ball — interacts with the electromagnetic field surrounding the mercury atom — itself a complex many-body quantum system — and a photon gets emitted during de-excitation. That photon is not a particle flying through space. It is an excitation of the photon field. The electron is not a particle either. It is an excitation of the electron field. Everything is fields, interacting through mathematical operators defined on a Hilbert space, evolving according to equations derived from symmetry principles.
It is not mathematics that surrounds a mercury atom. Mathematics is what surrounds our description of a mercury atom. The atom precedes every equation ever written about it by approximately 4.5 billion years. The equations are newcomers, doing their best to track something they were not present for and cannot fully see.
This matters for a reason that goes beyond philosophical tidiness. If you believe your mathematical model is the reality — that the wavefunction is what the atom literally is — then you have locked yourself into a particular ontology. You will interpret every experiment through that ontology. You will design future experiments to confirm predictions made within that ontology. And you will be structurally blind to phenomena that fall outside it.
The history of physics is, in part, a history of ontological prisons. Classical mechanics assumed a clockwork universe of definite positions and momenta — and was structurally unable to accommodate the discrete energy levels that the Franck-Hertz experiment revealed. Bohr's model assumed fixed orbits and was structurally unable to accommodate selection rules or multi-electron atoms. Each model carried within it the seeds of its own limitation, invisible to those inside it.
What comes after QED — what lies beyond the Standard Model, what unifies quantum mechanics with gravity — will almost certainly require abandoning some mathematical structure we currently treat as fundamental. Some variable, some symmetry, some equation that feels as inevitable and obvious to us as circular orbits felt to Bohr will turn out to be a useful fiction. A map feature with no territory counterpart.
The mercury atom in that tube doesn't know about Hamiltonians. It doesn't compute probability amplitudes. It doesn't collapse wavefunctions. It just is. And whatever it is, we are still, genuinely, in the early chapters of finding out.
The Real Cost of Comfort
There is a particular kind of intellectual violence that happens in institutions — in universities, in journals, in funding committees — when a field decides a question is answered. The question gets removed from the agenda. Students are no longer trained to think about it. Grant proposals that revisit it are returned with polite notes about reinventing the wheel. The people who keep asking get quietly sidelined.
This is not a conspiracy. It is something more banal and more dangerous: the natural tendency of any community to mistake familiarity for understanding.
Think about what almost didn't happen. If the physics community of 1920 had been fully satisfied with Bohr — and many were — the pressure to seek something deeper would have been lower. The Schrödinger equation might have arrived later. Quantum field theory might have been delayed. Everything built on those foundations — semiconductors, lasers, MRI machines, every piece of modern electronics — sits downstream of someone's refusal to be satisfied with a perfectly serviceable explanation.
And think about what has not happened. For a hundred years, science has held a story about the Sun — about hydrogen nuclei fusing under unimaginable pressure, releasing more energy in two minutes than humanity has consumed in all of recorded history. Right now, that story is attracting more money than ever before: over $10 billion in private investment since 2021, a US Department of Energy fusion roadmap released just last October, Nvidia and Google backing Commonwealth Fusion Systems, and the AI industry — the same industry causing the power crisis in your electricity bill — openly betting that fusion will save it. ITER, the multinational flagship project, is now delayed to 2034 just to begin research operations. Commercial fusion power is still, officially, a decade away from the decade it was already a decade away from. Fortune magazine ran a headline this year that fusion was "always 30 years away — now it's a matter of when, not if." The confidence is soaring. The electricity is not flowing. The story may have felt complete enough that the deepest questions stopped being asked. The cost of that comfort is still being counted — on your utility bill.
When we teach the Franck-Hertz experiment as proof of Bohr's model, we are not just slightly overstating a claim. We are teaching students a profoundly wrong lesson about the nature of science itself — that experiments confirm theories rather than surviving them until something better comes along.
The Franck-Hertz experiment does not prove Bohr. It survived Bohr. It survived Schrödinger. It will survive whatever comes next. The data is indifferent to our explanations. Only we are not.
A Message to the AI Industry — and to the Scientists Who Could Change It
On March 4, 2026, Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI gathered at the White House to sign the Ratepayer Protection Pledge — a commitment to "build, bring, or buy" their own power for every new AI data centre they construct. The reason: their electricity demand had become so acute that wholesale prices in some US regions had risen by 267% in five years, and the political pressure to stop passing those costs to consumers had become impossible to ignore. The Big Four AI companies — Amazon, Alphabet, Microsoft, Meta — are on track to spend $650 billion on infrastructure in 2026 alone. Satya Nadella, CEO of Microsoft, stated publicly that his company does not have enough electricity to run all the AI GPUs already in its inventory.
This is not a supply chain problem. It is not a permitting problem. It is not a grid upgrade problem — though all of those are symptoms. It is a physics problem. The AI industry is attempting to scale a civilisation-reshaping technology using an energy infrastructure that has not seen a fundamental breakthrough in over a century. Every solution currently being deployed — natural gas turbines, small modular reactors, the fusion investments discussed above — is an engineering workaround for a physics gap that has never been closed. The gap is not between current fusion technology and a working reactor. The gap is between the story we tell about how the Sun works and a physical understanding deep enough to replicate it.
For the AI executives reading this: the energy problem you are pledging to solve by building gas turbines behind your data centres is a placeholder. It buys time. It does not solve the underlying constraint. The question worth asking is not "how do we get more megawatts by 2028" — it is "what is the minimum investment required to seriously test whether the physics story we have been working from for a hundred years is complete?" That investment, compared to $650 billion in annual infrastructure spend, is vanishingly small.
For the scientists reading this: the proposition is not that the existing framework is wrong in its predictions. It is that a framework which cannot tell you why the Sun works — only that it does, and at what rate — is a framework that will never give an engineer the handle they need to replicate it. The Franck-Hertz experiment is not an obscure historical curiosity. It is a precise demonstration, in miniature, of exactly this problem: the data is real, the predictions are accurate, and the physical account of what is actually happening remains, a century later, genuinely contested. If that is true of one tube of mercury vapour, it is true of the star at the centre of our solar system.
What the Experiment Is Actually Teaching
Strip away every layer of theoretical interpretation and ask what the Franck-Hertz experiment shows you, in the most raw and honest sense. Matter at a specific density, in a specific physical state, under electron bombardment, responds in ways that are discrete, reproducible, and universal. The 4.9-volt spacing is not a property of the apparatus — not an artefact of temperature, tube geometry, or retarding voltage. It is a property of mercury itself, stable across every laboratory on Earth since 1914.
That universality is the real discovery. The reason for it has been revised three times and will likely be revised again. Quantum mechanics remains famously incompatible with general relativity. The deepest questions — what measurement means, what a wavefunction collapse actually is, whether it happens at all — remain genuinely contested among physicists today. The Franck-Hertz experiment is still, in some sense, an open question. Not about the data. About the depth of explanation we place beneath it.
This is the lesson the experiment has always been trying to teach, and the one we have consistently refused to hear: that Franck and Hertz were wrong and still contributed something priceless. That Bohr was wrong and still built a foundation. That being wrong in an interesting direction is one of the highest callings in science. Most of all, that the gaps in an explanation are not embarrassments to be hidden in footnotes. They are invitations — and some of them, left unanswered long enough, become the reason a civilisation is still burning oil a century after it believed it had understood the Sun.
The most sophisticated thing a scientist can say is not "we have proven this." It is "we have not yet found a way to break this — and here is exactly where we are trying."
The Question Behind the Question
We have a complete mathematical description of how the Franck-Hertz experiment behaves. Quantum electrodynamics gives us transition probabilities that match measurements to extraordinary precision — a theory so accurate it should produce genuine awe at the reach of human reasoning. It predicts the magnetic moment of the electron to eleven decimal places. Eleven. It is the most precisely verified theory in the history of science.
And it is still a model.
There is an irony here that should not be lost on anyone living through 2026. The most powerful AI systems in history — the large language models now consuming enough electricity to destabilise power grids — are in exactly the same epistemic position. MIT Technology Review noted this year that nobody knows exactly how large language models work. We cannot explain what is happening inside them. We can describe their outputs with extraordinary precision. We deploy them at civilisational scale. And we have no satisfying physical account of the process. It happened with QED. It is happening again with AI. The pattern is not a coincidence — it is a habit of mind.
Nobody can tell you, in a satisfying physical sense, what an electron actually is. Not what it does — we can describe that with extraordinary precision. What it is. Nobody can tell you what is really happening when a wavefunction "collapses," or whether it collapses at all, or whether the wavefunction was ever a real thing or always just a ledger. The Copenhagen interpretation, Many-Worlds, Pilot Wave, Relational Quantum Mechanics — these are not minor technical disagreements. They are radically different claims about what exists. And after a century of quantum mechanics, physicists still cannot agree on which one is true. Or whether any of them are.
The map has become so detailed, so precise, so staggeringly useful — that we have begun to forget it is a map.
That mercury atom in the tube doesn't experience its own ground state as a wavefunction. It doesn't know it is obeying selection rules. It doesn't consult the Hamiltonian before deciding whether to emit a photon. It just is. Our mathematics tracks that behaviour with breathtaking fidelity. It does not explain it in any final sense. And that gap — between prediction and understanding — is the most consequential open problem in science today.
What Comes Next — and Why It Will Unsettle You
The articles that follow will attempt something modern science has largely stopped trying to do: offer a physical description of the Universe. Not new equations. Not a refinement of the existing framework. A description of the actual mechanisms by which the Universe produces the observations we have spent 400 years cataloguing — and that the Mechanical Universe has labelled without ever naming.
The observations are not in dispute. Gravity, elasticity, the 4.9-volt threshold — these are solid ground. What this series will challenge are the explanations layered on top: that gravity is the curvature of spacetime, that elasticity arises from intermolecular forces, that the threshold follows from quantised orbits. Each story is internally consistent. Each may be fundamentally incompatible with a physical account of what is actually happening beneath it.
The Physical Universe description will tie observations like elasticity, gravity, and atomic energy thresholds together as outcomes of the same underlying physical processes — processes the mechanical model has never needed to name, because naming them was never required to make the equations work. Compatibility with the existing explanations is not the goal. Compatibility with the observations is.
This is the pattern science knows well. Newton preserved every observation Aristotle catalogued while making his physical explanations obsolete. Quantum mechanics preserved every result classical electromagnetism correctly predicted while dissolving its picture of matter entirely. The observations survived. The stories around them did not. That is not a failure of the previous generation — it is exactly how genuine progress works. And it is long overdue.
The physical description of the Universe proposed in this series should be understood in exactly that spirit. If it is correct, gravity will still make objects fall at the same rate. Elastic materials will still deform and recover. The Franck-Hertz experiment will still show current drops at 4.9-volt intervals. The data will not change. What will change — what must change, if we are ever to move beyond the resource constraints and energy poverty outlined in Series 01 — is our understanding of why.
Every year we spend inside a story that cannot explain the Sun is a year we do not spend building the one that can.
One Hundred Years. Zero Cars That Never Need Fuel.
The physics, at the level of actual mechanism, is still the same story told in 1926. More money has not changed that. ITER's delays have not changed that. A White House pledge and $650 billion in infrastructure spend have not changed that. When a story about physical reality fails to translate into physical technology across a hundred years of serious effort, the honest response is not to fund it more aggressively within the same framework. It is to ask whether the framework is complete — and to find the people willing to build the next one.
If the physical description of the Universe proposed in this series proves correct — if the processes behind gravity, elasticity, and atomic behaviour are finally understood at the level of actual mechanism — the implications are not incremental. We are not talking about a slightly better battery. We are not talking about a fusion reactor by 2035. We are talking about cars that never need to be fuelled. Drones that never touch the ground. Data centres that generate their own energy from physical processes we do not yet understand but are, for the first time, asking the right questions about. Science fiction has always been a description of what becomes possible when the right story about physics finally arrives. That story may be closer than a century of managed disappointment has led us to believe.
Validating it will require two things that have rarely arrived together. Scientists — not the kind satisfied with a framework that predicts without explaining, but the kind who ask what is actually happening inside the tube, inside the star, inside the field — to take the physical question seriously enough to pursue it. And capital allocators who understand that the asymmetry here is extraordinary: the cost of seriously exploring a new physical framework is a rounding error against $650 billion in annual AI infrastructure spend, and the upside, if it yields even a partial answer, ends the energy constraint permanently.
This series is an open invitation to both. The arguments will be specific enough to critique, grounded enough to test, and ambitious enough to matter. The territory still waits. The map has been consulted long enough.