Over the past few decades, a number of important advances have helped revolutionize our picture of the Universe. The astrophysical evidence for dark matter is overwhelming, teaching us that the majority of mass in our Universe doesn’t arise from any of the particles we know. The Universe’s expansion is accelerating, revealing the existence of a new type of energy — dark energy — that seems inherent to empty space. We’ve invented high-temperature superconductors, discovered every fundamental particle in the Standard Model (including the elusive Higgs boson), revealed the massive nature of the neutrino, and made atomic clocks so precise that they can measure the difference in the rate at which time passes between two heights separated by as little as one foot (30 cm).
And yet, in many ways, our picture of what makes up the Universe hasn’t advanced significantly in roughly 40 years. No particles outside of the Standard Model have shown up at any of our colliders — at high or low energies — and our largest data sets of all time have revealed no robust, repeatable surprises for fundamental physics. Importantly, many of our greatest ideas, including supersymmetry, extra dimensions, leptoquarks, technicolor, and string theory, have made no predictions that have been borne out by experiment. Still, many are excited about a possible hint of new physics at the Large Hadron Collider (LHC). Even if you’re optimistic, it’s important to be skeptical. Here’s why.
Most of us, when we think of the Standard Model, think of the indivisible particles that exist in our Universe. There are the quarks and gluons: the fundamental constituents of protons, neutrons, and all of their heavier and lighter cousins. There are the leptons, including the electron, muon, and tau, plus all of the neutrinos. There are the antiparticles: the antimatter counterparts of the quarks and leptons. And also, there are the weak bosons — the W+, W–, and Z0 — as well as the photon, mediator of the electromagnetic force, and the Higgs boson.
But the Standard Model is also a whole lot more than a framework for the fundamental particles that exist (and can exist) within our Universe. It also provides a complete description for all the quantum fields that exist between these particles, which encapsulates how every particle that exists interacts with every other particle that exists. The proton’s mass depends on quark-gluon and gluon-gluon couplings that include even massive particles like the top quark; if we were to change any of the parameters of the Standard Model, including rest masses or couplings, there would be many consequences that would experimentally reveal themselves to us.
Over many decades, theorists have proposed extension after extension to the Standard Model. Perhaps there are extra fields that arise as a consequence of Grand Unification. Perhaps there are extra particles that arise from additional symmetries. Perhaps there are new decays or couplings that could show themselves at high energies or with the production of large numbers of rare, unstable particles. We know there are many puzzles that are not resolvable with physics as we know it, from dark matter to why there’s more matter than antimatter to why particles have the mass values they do, among others. Yet the Standard Model, no matter how we tweak it, offers no viable solutions on its own.
The original hope of many was that the Large Hadron Collider (LHC) at CERN — the most powerful particle accelerator in human history — would reveal not only the Higgs boson, but some clues about many of these unsolved mysteries. The way it searches is brilliant: by producing enormous numbers of high-energy collisions, it creates exotic, unstable particles in great abundance. The world’s largest particle detectors then track and record those events, identifying the energy, momentum, electric charge, and many other properties of everything that comes out.
If the Standard Model — all of its particles and interactions — were legitimately all that were out there, we could calculate precisely what we’d see. There would be new particles created with particular probabilities that corresponded to the particular parameters of each collision. The new particles that came into existence would then decay in a particular set of ways:
- with particular lifetimes,
- into sets of particles that are permitted,
- with particular ratios,
- and not into other groups of particles which are forbidden,
all according to the Standard Model’s rules.
What we’re basically doing is testing the Standard Model to incredible precision, and looking for any possible deviations. Most of the ideas we initially examined didn’t pan out at the LHC: the Higgs isn’t a composite particle, there are no low-energy supersymmetric particles, the evidence for large or warped extra dimensions isn’t there, and there appears to be just one Higgs particle instead of many. But that doesn’t mean everything we’ve seen is in perfect agreement with the Standard Model’s predictions.
Anytime you collide large numbers of particles at high energies, you’re going to create heavy, rare, unstable particles so long as they’re allowed by Einstein’s most famous equation: E = mc². Those particles will live for a short while and then decay. If you can create enough of them, you can actually test the Standard Model with some level of mathematical rigor. Because there are explicit predictions for how often any particle you create should decay in a particular fashion, measuring the frequency of these decays precisely, by creating enormous numbers of these particles, puts the Standard Model to the test.
And there are many, many ways that we genuinely believe physics must, somehow, go beyond the Standard Model. For example, gravity is not treated as a quantum interaction, but rather as a classical, unchanging background by the Standard Model. Neutrinos are predicted to be massless by the Standard Model, and it contains no candidate for either dark matter or dark energy. The Standard Model doesn’t explain everything we see about our Universe, and we fully anticipate that, at some level, there may be additional fields, particles, interactions, dimensions, or even physics from beyond our observable Universe that could be affecting us.
Of course, the grave danger — and we’ve done this many times in the past — is that we might see something unexpected and leap to an incorrect conclusion. We know how the probabilities ought to break down and what to expect, but observing anything different doesn’t necessarily mean there’s new physics showing up here. Sometimes, there’s just an unlikely statistical fluctuation.
In this particular instance, we see B-mesons, which are particles that contain bottom quarks (the second heaviest quark, behind the top), decaying to either an electron/positron pair or a muon/anti-muon pair. In theory, these two decays should occur at the same rate; in practice, we see a slightly lower-than-expected fraction of decays into muons and antimuons compared to electrons and positrons.
But in terms of statistical significance — where we ask, “how confident are we that this isn’t just an unlikely but perfectly normal outcome?” — the answer is not very good: we’re only about 99.8% sure this is out of the ordinary.
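To get a feel for how such a ratio test works, here is a minimal sketch with made-up event counts (these are illustrative numbers, not actual LHCb data): count each type of decay, form the ratio, and ask how many standard deviations it sits from the Standard Model expectation of equal rates.

```python
import math

# Hypothetical, made-up event counts -- NOT actual LHCb data --
# chosen only to illustrate the shape of the test.
n_electron_decays = 1000   # decays seen with an e+ e- pair (assumed)
n_muon_decays = 900        # decays seen with a mu+ mu- pair (assumed)

# Measured ratio; lepton universality predicts this should be ~1.
ratio = n_muon_decays / n_electron_decays

# Poisson (counting) uncertainty on the ratio, propagated from each count.
sigma_ratio = ratio * math.sqrt(1 / n_electron_decays + 1 / n_muon_decays)

# How many standard deviations is the measurement from the prediction?
z_score = (ratio - 1.0) / sigma_ratio

print(f"ratio = {ratio:.3f} +/- {sigma_ratio:.3f}, deviation = {z_score:.1f} sigma")
```

The point of the sketch is that the significance depends on the raw counts: the same ratio measured with ten times as many decays would sit roughly three times as many sigma from the prediction.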
You might be incredulous: if we’re 99.8% sure, statistically, that something’s out of the ordinary, why would we consider that “not very good?” I like to think about it in terms of coin flips. If you flipped a coin ten times in a row and got identical results all ten times — either 10 heads or 10 tails, consecutively — you would declare that to be extremely unlikely. In fact, the odds of that happening are just 1 in 512, or about 0.2%: about the same odds as getting the outcome that the LHC saw with these decaying B-mesons.
But think about what would happen if, instead of ten flips, you flipped the coin 1000 times. Now, what are the odds that somewhere in that succession of 1000 coin tosses, you’d get a string where you saw either 10 heads or 10 tails consecutively? Perhaps surprisingly, you’d see such a string about 62% of the time; only about 38% of the time would you never see 10 identical outcomes in a row. On average, you’d expect to get the same result 10 times in a row about once in 1000 tosses: sometimes more, sometimes less.
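You can check the coin-flip arithmetic yourself with a quick Monte Carlo sketch (the trial count and random seed here are arbitrary choices, made only for reproducibility):

```python
import random

random.seed(42)  # fixed seed so the estimate is reproducible

def longest_run(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

# Simulate many sequences of 1000 fair coin flips and count how often
# at least one run of 10 identical results appears somewhere inside.
trials = 10_000
hits = sum(
    longest_run([random.randint(0, 1) for _ in range(1000)]) >= 10
    for _ in range(trials)
)

print(f"fraction of 1000-flip sequences with a 10-run: {hits / trials:.3f}")
```

The estimated fraction lands near 0.6, in line with the analytic expectation: a 1000-flip sequence contains roughly 500 runs, each of which reaches length 10 with probability 1/512.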
At the LHC, we have many different classes of “unlikely outcomes” that we’re searching for. As it stands, the LHC has discovered more than 50 new composite particles, and has created hundreds of different types of particles that were already known to exist. Each one has, typically, one or two handfuls of ways it can decay, some of which are extremely rare and others of which are far more likely. It’s no stretch to say that there are literally thousands of ways that new physics could potentially show up at the LHC, and we’re looking for every single one of them that we know how to look for.
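This is the “look-elsewhere” effect, and a rough sketch shows why it matters. Assume, purely for illustration, 1000 independent search channels (the real channels are neither exactly that numerous nor statistically independent in any simple way):

```python
import math

# Two-sided p-value of a single 3-sigma fluctuation (~0.27%).
p_three_sigma = math.erfc(3 / math.sqrt(2))

# Assumed number of independent places a fluctuation could appear;
# an illustrative stand-in for the "thousands of ways" in the text.
n_channels = 1000

# Chance that at least ONE channel shows a 3-sigma deviation by luck alone.
p_at_least_one = 1 - (1 - p_three_sigma) ** n_channels

print(f"p(single channel at 3 sigma) = {p_three_sigma:.4f}")
print(f"p(any of {n_channels} channels) = {p_at_least_one:.3f}")
```

Under these assumptions, a “1-in-370” fluctuation somewhere in the data isn’t surprising at all; it’s close to a certainty.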
That’s why, when we look at data that doesn’t line up with the Standard Model’s predictions, we want to make sure that it’s crossed an unambiguous threshold of confidence. We want to be so certain that it isn’t an unlikely statistical fluctuation we’re seeing that we aren’t impressed by 95% confidence (a two-sigma result), by 99.7% confidence (a three-sigma result, which is what this latest announcement is), or even by 99.99% confidence (a four-sigma result). Instead, in particle physics — to avoid fooling ourselves in exactly this fashion, like we’ve done many times throughout history — we demand a five-sigma result: just a 1-in-3.5 million chance that the discovery is a fluke. Only when we cross that threshold of significance can we declare that we’ve made a robust discovery.
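These sigma-to-probability conversions come straight from the tails of a Gaussian distribution. A short sketch (note the mixed conventions: confidence levels are usually quoted two-sided, while the five-sigma discovery threshold conventionally quotes the one-sided tail):

```python
import math

def two_sided_p(n_sigma):
    """Chance of a Gaussian fluctuation at least n_sigma away, in either direction."""
    return math.erfc(n_sigma / math.sqrt(2))

for n in (2, 3, 4):
    print(f"{n} sigma: {100 * (1 - two_sided_p(n)):.2f}% confidence")

# The five-sigma discovery threshold quotes the one-sided tail,
# which works out to about a 1-in-3.5-million chance of a fluke.
p_discovery = 0.5 * math.erfc(5 / math.sqrt(2))
print(f"5 sigma: about 1 in {1 / p_discovery:,.0f}")
```

Running this reproduces the familiar ladder: roughly 95.4% at two sigma, 99.7% at three, 99.99% at four, and a fluke probability of about 1 in 3.5 million at five.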
What’s frustrating about the current situation is that many commentators are passing judgment on whether this result is likely to hold up or not, when that’s not something we have the necessary information to conclude. It could be evidence for a novel particle, like a leptoquark or a Z’ (pronounced zee-prime) particle. It could signal a novel coupling in the lepton sector. It could even help explain the matter-antimatter asymmetry in the Universe, or be indicative of a sterile neutrino.
But it could also just be a statistical fluctuation. And without more data — and it’s coming, as the LHC has so far only collected about 2% of the data it will collect over its lifetime — we have no way of telling these scenarios apart. Over its history, the LHC has seen many somewhat unexpected decays involving bottom-quark-containing particles; just recently the LHCb collaboration (where the “b” indicates its focus on bottom-quark-containing particles) announced a completely different decay that could challenge the Standard Model’s expectations. What we’ll have to do, as we gather more data, is look at all of these various anomalies together. Only when their combined significance crosses that five-sigma “gold standard” will we get an announcement of discovery that’s as confident as we were with the Higgs.
Right now, the LHC is undergoing a high-luminosity upgrade, which should significantly increase the rate of collisions that appear in our detectors. We should keep in mind that many unexpected bumps in the data have appeared — a diboson excess, a diphoton bump, unexpected ratios of Higgs decays — and disappeared as we subsequently collected more data. We cannot know how this experiment will turn out, and that’s why we have to perform it.
Many physicists are excited about the possibilities, while others are more pessimistic. However, the most important aspect of this is that everyone is appropriately cautious, practicing responsible science instead of prematurely declaring a new discovery. There are many hints of new physics out there, but we cannot be sure which ones will hold up and which ones will turn out to be mere statistical flukes. The only way forward is to take as much data as we can and to examine the full, synthesized suite of all of it. The only way we’ll ever reveal the secrets of nature is to put the question to the Universe itself, and listen to whatever it says. With every new collision we create in our detectors, we get closer to that inevitable but critical moment that physicists all over the world are awaiting.
Source: www.forbes.com