ReportWire

Tag: Quanta Magazine

  • Quanta Magazine Unravels Space and Time in Ambitious New Series

    For over a decade, Quanta Magazine has challenged and delighted fans with stories about the most fundamental questions in science. Today, in its most ambitious project to date, the magazine brings its audience into what might be the deepest mystery of all: the nature of reality itself.

    Many physicists now believe that space and time are not fundamental features of the universe, but rather properties that emerge as the result of something else going on underneath. Why do physicists believe this? How can we know if it is true? And if space-time isn’t the fabric of reality, what is?

    The new series, “The Unraveling of Space-Time,” tangles with those questions. It includes nine new pieces of writing and media, brought together in a rich, interactive experience designed by Quanta and HLabs, an award-winning digital design agency. Senior Editor Natalie Wolchover, who, along with Quanta, was awarded the 2022 Pulitzer Prize in Explanatory Reporting, oversaw the project’s development.

    “Physicists have made genuine progress recently in studying the underpinnings of space-time,” Wolchover said. “We wanted to be as ambitious in our coverage as the subject deserves, by laying out the history, motivations and context behind the developments with the help of alluring art, animation and infographics, and a beautifully designed hub that brings it all together.”

    The series includes:

    • Two deep-dive features — one about new progress in understanding space-time as a hologram and another about the geometric underpinnings of quantum physics and space-time
    • Two explainers, on the phenomenon of duality and the thermodynamics of black holes
    • A dynamic exploration of thought experiments that expose problems with space-time
    • A historical essay about physicist John Wheeler by science writer Amanda Gefter
    • A video documentary by Senior Producer Emily Buder
    • Interviews with physicist Latham Boyle and philosopher of science Karen Crowther 
    • 30 original visuals from five artists, under the guidance of Art Director Samuel Velasco 

    “These articles resulted from more than 60 hours of interviews with more than 30 quantum gravity researchers,” said Staff Writer Charlie Wood, author of five of the pieces in the series. “It’s next to impossible to talk about anything without leaning on the concepts of space and time. And yet many physicists suspect that our current picture is holding us back.”

    Conceived by Wood, Wolchover, Executive Editor Michael Moyer, and Samir Patel, Quanta’s new editor-in-chief, “The Unraveling of Space-Time” is the first of several planned editorial projects that will engage some of the biggest questions in basic science and math today.

    “Quanta has never shied away from exploring the frontiers of knowledge — however challenging, abstract, or esoteric — with ambitious storytelling and visual panache,” Patel said. “‘The Unraveling of Space-Time’ is an evolutionary step for us, and we can’t wait to do it again.”

    “I hope that our audience will enjoy exploring the many facets of this series,” Wolchover added, “and will come away with a far deeper understanding of physicists’ ultimate quest.”

    Those who enjoy the series will have an opportunity to put their questions to members of Quanta’s staff. From 1:30–4:30 p.m. ET on Friday, September 27, Wolchover and Wood will answer questions about the series in a Reddit “Ask Me Anything” discussion on r/IAmA, a forum for community-driven Q&A discussions with subject experts.

    Quanta Magazine is an award-winning, editorially independent online publication of the Simons Foundation.

    Patel, Wolchover and Wood are available for media interviews about the series and its contents.

    Source: Quanta Magazine

    Source link

  • New Evidence Shows Heat Destroys Quantum Entanglement

    But not all questions about quantum systems are easier to answer using quantum algorithms. Some are equally easy for classical algorithms, which run on ordinary computers, while others are hard for both classical and quantum ones.

    To understand where quantum algorithms and the computers that can run them might offer an advantage, researchers often analyze mathematical models called spin systems, which capture the basic behavior of arrays of interacting atoms. They then might ask: What will a spin system do when you leave it alone at a given temperature? The state it settles into, called its thermal equilibrium state, determines many of its other properties, so researchers have long sought to develop algorithms for finding equilibrium states.

    Whether those algorithms really benefit from being quantum in nature depends on the temperature of the spin system in question. At very high temperatures, known classical algorithms can do the job easily. The problem gets harder as temperature decreases and quantum phenomena grow stronger; in some systems it gets too hard for even quantum computers to solve in any reasonable amount of time. But the details of all this remain murky.
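    The “thermal equilibrium state” has a concrete classical analogue: at temperature T, a system settles into the Gibbs distribution, which weights each configuration by exp(−E/T). A minimal sketch for a toy two-spin system (the energies are illustrative, not taken from the paper; the quantum version replaces this probability list with a density matrix):

    ```python
    import itertools
    import math

    def gibbs_state(energies, temperature):
        """Boltzmann weights p_i proportional to exp(-E_i / T), with k_B = 1."""
        weights = [math.exp(-e / temperature) for e in energies]
        z = sum(weights)  # partition function
        return [w / z for w in weights]

    # Toy system: two coupled spins with energy E = -s1 * s2, s in {-1, +1};
    # aligned pairs (up-up, down-down) have the lower energy.
    energies = [-s1 * s2 for s1, s2 in itertools.product((-1, 1), repeat=2)]

    hot = gibbs_state(energies, temperature=100.0)   # nearly uniform
    cold = gibbs_state(energies, temperature=0.1)    # concentrated on aligned pairs
    print("hot: ", [round(p, 3) for p in hot])
    print("cold:", [round(p, 3) for p in cold])
    ```

    At high temperature the state is nearly featureless and easy to describe classically; as the temperature drops, the distribution concentrates on low-energy configurations, which is where the hard quantum effects the article describes come into play.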

    “When do you go to the space where you need quantum, and when do you go to the space where quantum doesn’t even help you?” said Ewin Tang, a researcher at the University of California, Berkeley, and one of the authors of the new result. “Not that much is known.”

    In February, Tang and Moitra began thinking about the thermal equilibrium problem together with two other MIT computer scientists: a postdoctoral researcher named Ainesh Bakshi and Moitra’s graduate student Allen Liu. In 2023, they’d all collaborated on a groundbreaking quantum algorithm for a different task involving spin systems, and they were looking for a new challenge.

    “When we work together, things just flow,” Bakshi said. “It’s been awesome.”

    Before that 2023 breakthrough, the three MIT researchers had never worked on quantum algorithms. Their background was in learning theory, a subfield of computer science that focuses on algorithms for statistical analysis. But like ambitious upstarts everywhere, they viewed their relative naïveté as an advantage, a way to see a problem with fresh eyes. “One of our strengths is that we don’t know much quantum,” Moitra said. “The only quantum we know is the quantum that Ewin taught us.”

    The team decided to focus on relatively high temperatures, where researchers suspected that fast quantum algorithms would exist, even though nobody had been able to prove it. Soon enough, they found a way to adapt an old technique from learning theory into a new fast algorithm. But as they were writing up their paper, another team came out with a similar result: a proof that a promising algorithm developed the previous year would work well at high temperatures. They’d been scooped.

    Sudden Death Reborn

    A bit bummed that they’d come in second, Tang and her collaborators began corresponding with Álvaro Alhambra, a physicist at the Institute for Theoretical Physics in Madrid and one of the authors of the rival paper. They wanted to work out the differences between the results they’d achieved independently. But when Alhambra read through a preliminary draft of the four researchers’ proof, he was surprised to discover that they’d proved something else in an intermediate step: In any spin system in thermal equilibrium, entanglement vanishes completely above a certain temperature. “I told them, ‘Oh, this is very, very important,’” Alhambra said.

    From left: Allen Liu, Ainesh Bakshi, and Ankur Moitra collaborated with Tang, drawing on their background in a different branch of computer science. “One of our strengths is that we don’t know much quantum,” Moitra said.

    Photographs: From left: Courtesy of Allen Liu; Amartya Shankha Biswas; Gretchen Ertl

    Ben Brubaker

    Source link

  • Stephen Hawking Was Wrong—Extremal Black Holes Are Possible

    Now two mathematicians have proved Hawking and his colleagues wrong. The new work—contained in a pair of recent papers by Christoph Kehle of the Massachusetts Institute of Technology and Ryan Unger of Stanford University and the University of California, Berkeley—demonstrates that there is nothing in our known laws of physics to prevent the formation of an extremal black hole.

    Their mathematical proof is “beautiful, technically innovative, and physically surprising,” said Mihalis Dafermos, a mathematician at Princeton University (and Kehle’s and Unger’s doctoral adviser). It hints at a potentially richer and more varied universe in which “extremal black holes could be out there astrophysically,” he added.

    That doesn’t mean they are. “Just because a mathematical solution exists that has nice properties doesn’t necessarily mean that nature will make use of it,” Khanna said. “But if we somehow find one, that would really [make] us think about what we are missing.” Such a discovery, he noted, has the potential to raise “some pretty radical kinds of questions.”

    The Law of Impossibility

    Before Kehle and Unger’s proof, there was good reason to believe that extremal black holes couldn’t exist.

    In 1973, Bardeen, Carter, and Hawking introduced four laws about the behavior of black holes. They resembled the four long-established laws of thermodynamics—a set of sacrosanct principles that state, for instance, that the universe becomes more disordered over time, and that energy cannot be created or destroyed.

    Christoph Kehle, a mathematician at the Massachusetts Institute of Technology, recently disproved a 1973 conjecture about extremal black holes.

    Image: Dan Komoda/Institute for Advanced Study

    In their paper, the physicists proved their first three laws of black hole thermodynamics: the zeroth, first, and second. By analogy, they assumed that the third law (like its standard thermodynamics counterpart) would also be true, even though they were not yet able to prove it.

    That law stated that the surface gravity of a black hole cannot decrease to zero in a finite amount of time—in other words, that there is no way to create an extremal black hole. To support their claim, the trio argued that any process that would allow a black hole’s charge or spin to reach the extremal limit could also potentially result in its event horizon disappearing altogether. It is widely believed that black holes without an event horizon, called naked singularities, cannot exist. Moreover, because a black hole’s temperature is known to be proportional to its surface gravity, a black hole with no surface gravity would also have no temperature. Such a black hole would not emit thermal radiation—something that Hawking later proposed black holes had to do.
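    The connection between extremality and vanishing surface gravity can be seen in the textbook charged (Reissner–Nordström) black hole. In geometric units (G = c = 1), the horizons sit at r± = M ± √(M² − Q²) and the surface gravity is κ = (r₊ − r₋)/(2r₊²), which goes to zero exactly as the charge Q approaches the mass M. A sketch:

    ```python
    import math

    def surface_gravity(mass, charge):
        """Surface gravity of a Reissner-Nordstrom black hole in geometric
        units (G = c = 1); valid for |charge| <= mass."""
        root = math.sqrt(mass**2 - charge**2)
        r_plus = mass + root   # outer event horizon
        r_minus = mass - root  # inner (Cauchy) horizon
        return (r_plus - r_minus) / (2 * r_plus**2)

    # As charge approaches mass, the surface gravity -- and hence the
    # Hawking temperature -- drops to zero: the extremal limit.
    M = 1.0
    for Q in (0.0, 0.9, 0.99, 0.999, 1.0):
        print(f"Q/M = {Q:.3f}  kappa = {surface_gravity(M, Q):.6f}")
    ```

    The third law asserted that no physical process could drive κ all the way to zero in finite time; Kehle and Unger's counterexample shows that, for charged black holes, one can.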

    In 1986, a physicist named Werner Israel seemed to put the issue to rest when he published a proof of the third law. Say you want to create an extremal black hole from a regular one. You might try to do so by making it spin faster or by adding more charged particles. Israel’s proof seemed to demonstrate that doing so could not force a black hole’s surface gravity to drop to zero in a finite amount of time.

    As Kehle and Unger would ultimately discover, Israel’s argument concealed a flaw.

    Death of the Third Law

    Kehle and Unger did not set out to find extremal black holes. They stumbled on them entirely by accident.

    They were studying the formation of electrically charged black holes. “We realized that we could do it”—make a black hole—“for all charge-to-mass ratios,” Kehle said. That included the case where the charge is as high as possible, a hallmark of an extremal black hole.

    After proving that highly charged extremal black holes are mathematically possible, Ryan Unger of Stanford University is now trying to show that fast-spinning ones are, too. But it’s a much harder problem.

    Photograph: Dimitris Fetsios

    Dafermos recognized that his former students had uncovered a counterexample to Bardeen, Carter, and Hawking’s third law: They’d shown that they could indeed change a typical black hole into an extremal one within a finite stretch of time.

    Kehle and Unger started with a black hole that doesn’t rotate and has no charge, and modeled what might happen if it was placed in a simplified environment called a scalar field, which assumes a background of uniformly charged particles. They then buffeted the black hole with pulses from the field to add charge to it.

    Steve Nadis

    Source link

  • The Biggest Controversy in Cosmology Just Got Bigger

    A long-awaited study of the cosmic expansion rate suggests that when it comes to the Hubble tension, cosmologists are still missing something.

    Liz Kruesi

    Source link

  • Students Find New Evidence of the Impossibility of Complete Disorder

    A new mathematical proof marks the first progress in decades on a problem about how order emerges.

    Leila Sloman

    Source link

  • The Quantum Mechanics of the Greenhouse Effect

    A key question was the origin of the logarithmic scaling of the greenhouse effect—the 2-to-5-degree temperature rise that models predict will happen for every doubling of CO2. One theory held that the scaling comes from how quickly the temperature drops with altitude. But in 2022, a team of researchers used a simple model to prove that the logarithmic scaling comes from the shape of carbon dioxide’s absorption “spectrum”—how its ability to absorb light varies with the light’s wavelength.

    This goes back to those wavelengths that are slightly longer or shorter than 15 microns. A critical detail is that carbon dioxide is worse—but not too much worse—at absorbing light with those wavelengths. The absorption falls off on either side of the peak at just the right rate to give rise to the logarithmic scaling.

    “The shape of that spectrum is essential,” said David Romps, a climate physicist at the University of California, Berkeley, who coauthored the 2022 paper. “If you change it, you don’t get the logarithmic scaling.”

    The carbon spectrum’s shape is unusual—most gases absorb a much narrower range of wavelengths. “The question I had at the back of my mind was: Why does it have this shape?” Romps said. “But I couldn’t put my finger on it.”
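    Logarithmic scaling means each doubling of CO2 adds roughly the same forcing, and hence roughly the same warming. A widely used empirical fit (Myhre et al., 1998) captures this; it is a standard approximation, not a formula from the paper discussed here, and the 0.8 K per W/m² sensitivity below is an assumed illustrative value:

    ```python
    import math

    # Standard empirical approximation for CO2 radiative forcing
    # (Myhre et al., 1998): F = 5.35 * ln(C / C0) W/m^2.
    def forcing(c, c0=280.0):
        return 5.35 * math.log(c / c0)

    sensitivity = 0.8  # illustrative warming in K per (W/m^2), an assumed value
    for c in (280, 560, 1120):
        f = forcing(c)
        print(f"CO2 = {c:4d} ppm  forcing = {f:5.2f} W/m^2  warming ~ {sensitivity * f:4.2f} K")
    ```

    Because the forcing depends on the logarithm of concentration, going from 280 to 560 ppm contributes the same increment as going from 560 to 1120 ppm; the question the researchers tackled is why CO2's absorption spectrum produces this logarithm in the first place.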

    Consequential Wiggles

    Wordsworth and his coauthors Jacob Seeley and Keith Shine turned to quantum mechanics to find the answer.

    Light is made of packets of energy called photons. Molecules like CO2 can absorb them only when the packets have exactly the right amount of energy to bump the molecule up to a different quantum mechanical state.

    Carbon dioxide usually sits in its “ground state,” where its three atoms form a line with the carbon atom in the center, equidistant from the others. The molecule has “excited” states as well, in which its atoms undulate or swing about.

    A photon of 15-micron light contains the exact energy required to set the carbon atom swirling about the center point in a sort of hula-hoop motion. Climate scientists have long blamed this hula-hoop state for the greenhouse effect, but—as Ångström anticipated—the effect requires too precise an amount of energy, Wordsworth and his team found. The hula-hoop state can’t explain the relatively slow decline in the absorption rate for photons further from 15 microns, so it can’t explain climate change by itself.

    The key, they found, is another type of motion, where the two oxygen atoms repeatedly bob toward and away from the carbon center, as if stretching and compressing a spring connecting them. This motion takes too much energy to be induced by Earth’s infrared photons on their own.

    But the authors found that the energy of the stretching motion is so close to double that of the hula-hoop motion that the two states of motion mix with one another. Special combinations of the two motions exist, requiring slightly more or less than the exact energy of the hula-hoop motion.

    This unique phenomenon is called Fermi resonance after the famous physicist Enrico Fermi, who derived it in a 1931 paper. But its connection to Earth’s climate was only made for the first time in a paper last year by Shine and his student, and the paper this spring is the first to fully lay it bare.
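    The mixing of two near-degenerate states described above can be sketched as a two-level problem: with unperturbed energies E1 ≈ E2 and a coupling V, the eigenstates are combinations whose energies sit slightly below and slightly above the originals. The numbers here are illustrative, not CO2's actual vibrational constants:

    ```python
    import math

    def mixed_levels(e1, e2, coupling):
        """Eigenvalues of the two-level Hamiltonian [[e1, V], [V, e2]]:
        the energies of the two mixed states in a Fermi resonance."""
        mean = (e1 + e2) / 2
        split = math.sqrt(((e1 - e2) / 2) ** 2 + coupling**2)
        return mean - split, mean + split

    # Two nearly degenerate vibrational levels (arbitrary units)
    # pushed apart by a coupling between the two motions.
    e_low, e_high = mixed_levels(1.00, 1.02, coupling=0.05)
    print(f"unmixed: 1.00 and 1.02  ->  mixed: {e_low:.3f} and {e_high:.3f}")
    ```

    The coupling pushes the pair apart, so the molecule absorbs photons carrying slightly more or less energy than the bare hula-hoop motion requires, broadening the absorption around 15 microns.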

    Joseph Howlett

    Source link

  • The Vacuum of Space Will Decay Sooner Than Expected

    The original version of this story appeared in Quanta Magazine.

    Vacuum decay, a process that could end the universe as we know it, may happen 10,000 times sooner than expected. Fortunately, it still won’t happen for a very, very long time.

    When physicists speak of “the vacuum,” the term sounds as though it refers to empty space, and in a sense it does. More specifically, it refers to a set of defaults, like settings on a control board. When the quantum fields that permeate space sit at these default values, you consider space to be empty. Small tweaks to the settings create particles—turn the electromagnetic field up a bit, and you get a photon. Big tweaks, on the other hand, are best thought of as new defaults altogether. They create a different definition of empty space, with different traits.

    One quantum field is special because its default value can change. Called the Higgs field, it controls the mass of many fundamental particles, like electrons and quarks. Unlike every other quantum field physicists have discovered, the Higgs field has a default value above zero. Dialing the Higgs field value up or down would increase or decrease the mass of electrons and other particles. If the setting of the Higgs field were zero, those particles would be massless.

    We could stay at the nonzero default for eternity, were it not for quantum mechanics. A quantum field can “tunnel,” jumping to a new, lower-energy value even if it doesn’t have enough energy to pass through the higher-energy intermediate settings, an effect akin to tunneling through a solid wall.

    For this to happen, you need to have a lower-energy state to tunnel to. And before building the Large Hadron Collider, physicists thought that the current state of the Higgs field could be the lowest. That belief has now changed.

    The curve that represents the energy required for different settings of the Higgs field was always known to resemble a sombrero with an upturned brim. The current setting of the Higgs field can be pictured as a ball resting at the bottom of the brim.

    Illustration: Mark Belan for Quanta Magazine
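    A cross-section of that sombrero curve is commonly modeled by the quartic form V(φ) = λ(φ² − v²)², whose lowest-energy settings sit away from zero — the nonzero default described earlier. The values of λ and v below are arbitrary illustrative choices, not the Higgs field's measured parameters:

    ```python
    def potential(phi, lam=1.0, v=1.0):
        """Quartic 'sombrero cross-section': V = lam * (phi^2 - v^2)^2.
        Minima sit at phi = +/- v, a nonzero default field value."""
        return lam * (phi**2 - v**2) ** 2

    # Scan the curve: the center (phi = 0) is a hump, not a minimum.
    samples = [i / 1000 for i in range(-2000, 2001)]
    phi_min = min(samples, key=potential)
    print("energy at phi = 0:", potential(0.0))
    print("a minimum found at phi =", phi_min)
    ```

    Whether a deeper minimum exists elsewhere on the curve is exactly what determines if the current vacuum can tunnel away — the decay the article describes.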

    Matt von Hippel

    Source link

  • The Physics of Cold Water May Have Jump-Started Complex Life

    After 30 days, the algae in the middle were still unicellular. As the scientists put algae from thicker and thicker rings under the microscope, however, they found larger clumps of cells. The very largest were wads of hundreds. But what interested Simpson the most were mobile clusters of four to 16 cells, arranged so that their flagella were all on the outside. These clusters moved around by coordinating the movement of their flagella, the ones at the back of the cluster holding still, the ones at the front wriggling.

    Comparing the speed of these clusters to the single cells in the middle revealed something interesting. “They all swim at the same speed,” Simpson said. By working together as a collective, the algae could preserve their mobility. “I was really pleased,” he said. “With the coarse mathematical framework, there were a few predictions I could make. To actually see it empirically means there’s something to this idea.”

    Intriguingly, when the scientists took these little clusters from the high-viscosity gel and put them back at low viscosity, the cells stuck together. They remained this way, in fact, for as long as the scientists continued to watch them, about 100 more generations. Clearly, whatever changes they underwent to survive at high viscosity were hard to reverse, Simpson said—perhaps a move toward evolution rather than a short-term shift.

    In gel as viscous as ancient oceans, algal cells began working together. They clumped up and coordinated the movements of their tail-like flagella to swim more quickly. When placed back in normal viscosity, they remained together.

    Illustration: Andrea Halling

    Modern-day algae are not early animals. But the fact that these physical pressures forced a unicellular creature into an alternate way of life that was hard to reverse feels quite powerful, Simpson said. He suspects that if scientists explore the idea that when organisms are very small, viscosity dominates their existence, we could learn something about conditions that might have led to the explosion of large forms of life.

    A Cell’s Perspective

    As large creatures, we don’t think much about the thickness of the fluids around us. It’s not a part of our daily lived experience, and we are so big that viscosity doesn’t impinge on us very much. The ability to move easily—relatively speaking—is something we take for granted. From the time Simpson first realized that such limits on movement could be a monumental obstacle to microscopic life, he hasn’t been able to stop thinking about it. Viscosity may have mattered quite a lot in the origins of complex life, whenever that was.

    “[This perspective] allows us to think about the deep-time history of this transition,” Simpson said, “and what was going on in Earth’s history when all the obligately complicated multicellular groups evolved, which is relatively close to each other, we think.”

    Other researchers find Simpson’s ideas quite novel. Before Simpson, no one seems to have thought very much about organisms’ physical experience of being in the ocean during Snowball Earth, said Nick Butterfield of the University of Cambridge, who studies the evolution of early life. He cheerfully noted, however, that “Carl’s idea is fringe.” That’s because the vast majority of theories about Snowball Earth’s influence on the evolution of multicellular animals, plants, and algae focus on how levels of oxygen, inferred from isotope levels in rocks, could have tipped the scales in one way or another, he said.

    Veronique Greenwood

    Source link

  • ‘Gem’ of a Proof Breaks 80-Year-Old Record, Offers New Insights Into Prime Numbers

    The original version of this story appeared in Quanta Magazine.

    Sometimes mathematicians try to tackle a problem head on, and sometimes they come at it sideways. That’s especially true when the mathematical stakes are high, as with the Riemann hypothesis, whose solution comes with a $1 million reward from the Clay Mathematics Institute. Its proof would give mathematicians much deeper certainty about how prime numbers are distributed, while also implying a host of other consequences—making it arguably the most important open question in math.

    Mathematicians have no idea how to prove the Riemann hypothesis. But they can still get useful results just by showing that the number of possible exceptions to it is limited. “In many cases, that can be as good as the Riemann hypothesis itself,” said James Maynard of the University of Oxford. “We can get similar results about prime numbers from this.”

    In a breakthrough result posted online in May, Maynard and Larry Guth of the Massachusetts Institute of Technology established a new cap on the number of exceptions of a particular type, finally beating a record that had been set more than 80 years earlier. “It’s a sensational result,” said Henryk Iwaniec of Rutgers University. “It’s very, very, very hard. But it’s a gem.”

    The new proof automatically leads to better approximations of how many primes exist in short intervals on the number line, and stands to offer many other insights into how primes behave.

    A Careful Sidestep

    The Riemann hypothesis is a statement about a central formula in number theory called the Riemann zeta function. The zeta (ζ) function is a generalization of a straightforward sum:

    1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯.

    This series will become arbitrarily large as more and more terms are added to it—mathematicians say that it diverges. But if instead you were to sum up

    1 + 1/2² + 1/3² + 1/4² + 1/5² + ⋯ = 1 + 1/4 + 1/9 + 1/16 + 1/25 + ⋯

    you would get π²/6, or about 1.64. Riemann’s surprisingly powerful idea was to turn a series like this into a function, like so:

    ζ(s) = 1 + 1/2ˢ + 1/3ˢ + 1/4ˢ + 1/5ˢ + ⋯.

    So ζ(1) is infinite, but ζ(2) = π²/6.

    Things get really interesting when you let s be a complex number, which has two parts: a “real” part, which is an everyday number, and an “imaginary” part, which is an everyday number multiplied by the square root of −1 (or i, as mathematicians write it). Complex numbers can be plotted on a plane, with the real part on the x-axis and the imaginary part on the y-axis. Here, for example, is 3 + 4i.

    Graph: Mark Belan for Quanta Magazine
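    Both claims — that the squared series approaches π²/6 and that the same formula accepts a complex s — can be checked numerically with partial sums. The truncation counts below are arbitrary; a sketch:

    ```python
    import math

    def zeta_partial(s, terms=100_000):
        """Partial sum 1 + 1/2^s + ... + 1/terms^s of the zeta series.
        Converges when the real part of s exceeds 1; s may be complex."""
        return sum(1 / n**s for n in range(1, terms + 1))

    print(zeta_partial(2), "vs", math.pi**2 / 6)   # partial sum approaches pi^2/6
    print(zeta_partial(3 + 4j, terms=10_000))      # the series also accepts complex s
    ```

    Python's built-in complex type handles the complex exponent directly, so the same one-line sum covers both the everyday and the complex inputs Riemann considered.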

    Jordana Cepelewicz

    Source link

  • Uncovering Magnetism’s Mysterious Role in the Galaxy

    When were you first drawn to learning about it?

    I don’t think that was from some deep-seated, lifelong need to study magnetism, but it grabbed me in grad school as an area of astrophysics that is not well understood and is avoided for its complexity.

    For astrophysics in general, I did a National Science Foundation research experience for undergraduates at Arecibo in Puerto Rico the summer before my senior year, and it was incredible. That’s when I realized I wanted to work on the ISM, when I really appreciated what the ISM was. It was my first experience with full-time research, and it was at this incredible facility—both because the telescope is incredible and because you live there on-site in these little cabins. The cabin that Jodie Foster was in, in the movie Contact, that’s where my bunk bed was.

    Was there an earlier moment when you realized you wanted to be a scientist?

    The honest truth is that I did not always want to be a scientist. At the point of entering college, I was like, maybe I will double major in biology and English. I loved biology in particular, and I’ve always loved writing, so I thought maybe I’d be a writer.

    I have always been very interested in everything. It’s a common refrain for astronomers to say, “Oh, ever since I was a little kid, I absolutely loved space, and I knew that’s exactly what I wanted to do when I grew up.” And I definitely loved space as a little kid, but I also loved rocks, and dinosaurs, and lizards. Salamanders in particular. If anything, it all started with looking under rocks for salamanders with my sisters in the backyard in Virginia. It’s just a curiosity about nature and a love of learning, and that’s what you get to do as a scientist.


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

    Jay Bennett

    Source link

  • How the Brain Decides What to Remember

    “There has to be some kind of triage to remember what is relevant and forget the rest,” Zugaro said. “Understanding how specific memories were selected for storage was still lacking … Now we have a good clue.”

    Last December, a research team led by Bendor at University College London published related results in Nature Communications that anticipated those of Yang and Buzsáki. They too found that sharp wave ripples that fired when rats were awake and asleep seemed to tag experiences for memory. However, their analysis averaged a number of different trials together—an approach less precise than what Yang and Buzsáki accomplished.

    The NYU team’s key innovation was to bring the element of time, which distinguishes similar memories from one another, into their analysis. The mice were running around in the same maze patterns, and yet these researchers could distinguish between blocks of trials at the neuronal level—a resolution never reached before.

    The brain patterns are marking “something a little bit closer to an event, and a little bit less like a general knowledge,” said Loren Frank, a neuroscientist at UC San Francisco who was not involved in the research. “That strikes me as a really interesting finding.”

    “They’re showing that the brain is maybe creating some kind of temporal code to distinguish between different memories occurring in the same place,” said Freyja Ólafsdóttir, a neuroscientist at Radboud University who was not involved with the work.

    Shantanu Jadhav, a neuroscientist at Brandeis University, praised the study. “This is a good start,” he said. However, he hopes to see a follow-up experiment that includes a behavioral test. Demonstrating that an animal forgot or remembered particular trial blocks would be “the real proof that this is a tagging mechanism.”

    The research leaves a burning question unanswered: Why is one experience chosen over another? The new work suggests how the brain tags a certain experience to remember. But it can’t tell us how the brain decides what’s worth remembering.

    Sometimes the things we remember seem random or irrelevant, and surely different from what we’d select if given the choice. “There is a sense that the brain prioritizes based on ‘importance,’” Frank said. Because studies have suggested that emotional or novel experiences tend to be remembered better, it’s possible that internal fluctuations in arousal or the levels of neuromodulators such as dopamine or adrenaline and other chemicals that affect neurons end up selecting experiences, he suggested.

    Jadhav echoed that thought, saying, “The internal state of the organism can bias experiences to be encoded and stored more effectively.” But it’s not known what makes one experience more prone to being stored than others, he added. And in the case of Yang and Buzsáki’s study, it’s not clear why a mouse would remember one trial better than another.

    Buzsáki remains committed to exploring the roles that sharp wave ripples play in the hippocampus, although he and his team are also interested in potential applications that might arise from these observations. It’s possible, for example, that scientists could disrupt the ripples as part of a treatment for conditions like post-traumatic stress disorder, in which people remember certain experiences too vividly, he said. “The low-hanging fruit here is to erase sharp waves and forget what you experienced.”

    But for the time being, Buzsáki will continue to tune in to these powerful brain waves to uncover more about why we remember what we do.


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

    Yasemin Saplakoglu

    Source link

  • Cryptographers Are Discovering New Rules for Quantum Encryption


    The original version of this story appeared in Quanta Magazine.

    Say you want to send a private message, cast a secret vote, or sign a document securely. If you do any of these tasks on a computer, you’re relying on encryption to keep your data safe. That encryption needs to withstand attacks from code breakers with their own computers, so modern encryption methods rely on assumptions about what mathematical problems are hard for computers to solve.

    But as cryptographers laid the mathematical foundations for this approach to information security in the 1980s, a few researchers discovered that computational hardness wasn’t the only way to safeguard secrets. Quantum theory, originally developed to understand the physics of atoms, turned out to have deep connections to information and cryptography. Researchers found ways to base the security of a few specific cryptographic tasks directly on the laws of physics. But these tasks were strange outliers—for all others, there seemed to be no alternative to the classical computational approach.

    By the end of the millennium, quantum cryptography researchers thought that was the end of the story. But in just the past few years, the field has undergone another seismic shift.

    “There’s been this rearrangement of what we believe is possible with quantum cryptography,” said Henry Yuen, a quantum information theorist at Columbia University.

    In a string of recent papers, researchers have shown that most cryptographic tasks could still be accomplished securely even in hypothetical worlds where practically all computation is easy. All that matters is the difficulty of a special computational problem about quantum theory itself.

    “The assumptions you need can be way, way, way weaker,” said Fermi Ma, a quantum cryptographer at the Simons Institute for the Theory of Computing in Berkeley, California. “This is giving us new insights into computational hardness itself.”

    This Message Will Self-Destruct

    The story begins in the late 1960s, when a physics graduate student named Stephen Wiesner started thinking about the destructive nature of measurement in quantum theory. Measure any system governed by the rules of quantum physics, and you’ll alter the quantum state that mathematically describes its configuration. This quantum measurement disturbance was a hindrance for most physicists. Wiesner, who took an unorthodox information-centric view of quantum theory, wondered whether it could be made useful. Perhaps it could serve as a form of built-in tamper protection for sensitive data.

    But Wiesner’s ideas were too far ahead of their time, and he left academia after graduate school. Fortunately, he’d discussed his ideas with his friend and fellow physicist Charles Bennett, who unsuccessfully tried to interest others in the subject for a decade. Finally, in 1979, Bennett met the computer scientist Gilles Brassard while swimming off the coast of Puerto Rico during a conference. Together, they wrote a groundbreaking paper describing a new approach to an important cryptographic task. Their protocol was based on quantum measurement disturbance, and needed no assumptions about the difficulty of any computational problems.
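    Their protocol, now known as BB84, turns measurement disturbance into a tripwire: an eavesdropper who measures passing qubits in a randomly chosen basis unavoidably alters some of them, leaving detectable errors. The toy Python simulation below illustrates only that statistical signature; the two-outcome "measure" rule is a deliberate simplification of real quantum mechanics, and all names are illustrative.

    ```python
    import random

    def measure(bit, prep_basis, meas_basis):
        """Toy qubit measurement: the same basis returns the prepared bit;
        a mismatched basis gives a 50/50 outcome (the state is disturbed)."""
        if prep_basis == meas_basis:
            return bit
        return random.randint(0, 1)

    def bb84_error_rate(n=100_000, eavesdrop=False):
        """Error rate Alice and Bob observe in their matched-basis rounds."""
        errors = matched = 0
        for _ in range(n):
            bit, basis = random.randint(0, 1), random.choice("xz")
            cur_bit, cur_basis = bit, basis
            if eavesdrop:  # Eve measures in a random basis, collapsing the state
                eve_basis = random.choice("xz")
                cur_bit = measure(cur_bit, cur_basis, eve_basis)
                cur_basis = eve_basis
            bob_basis = random.choice("xz")
            bob_bit = measure(cur_bit, cur_basis, bob_basis)
            if bob_basis == basis:  # only matched-basis rounds are kept
                matched += 1
                errors += bob_bit != bit
        return errors / matched

    print(bb84_error_rate(eavesdrop=False))  # 0.0: undisturbed qubits always agree
    print(bb84_error_rate(eavesdrop=True))   # ~0.25: eavesdropping leaves errors
    ```

    The roughly 25 percent error rate is the point: physics, not computational hardness, exposes the intruder.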

    Ben Brubaker

    Source link

  • How Game Theory Can Make AI More Reliable


    Posing a far greater challenge for AI researchers was the game of Diplomacy—a favorite of politicians like John F. Kennedy and Henry Kissinger. Instead of just two opponents, the game features seven players whose motives can be hard to read. To win, a player must negotiate, forging cooperative arrangements that anyone could breach at any time. Diplomacy is so complex that a group from Meta was pleased when, in 2022, its AI program Cicero developed “human-level play” over the course of 40 games. While it did not vanquish the world champion, Cicero did well enough to place in the top 10 percent against human participants.

    During the project, Jacob—a member of the Meta team—was struck by the fact that Cicero relied on a language model to generate its dialog with other players. He sensed untapped potential. The team’s goal, he said, “was to build the best language model we could for the purposes of playing this game.” But what if instead they focused on building the best game they could to improve the performance of large language models?

    Consensual Interactions

    In 2023, Jacob began to pursue that question at MIT, working with Yikang Shen, Gabriele Farina, and his adviser, Jacob Andreas, on what would become the consensus game. The core idea came from imagining a conversation between two people as a cooperative game, where success occurs when a listener understands what a speaker is trying to convey. In particular, the consensus game is designed to align the language model’s two systems—the generator, which handles generative questions, and the discriminator, which handles discriminative ones.

    After a few months of stops and starts, the team built this principle up into a full game. First, the generator receives a question. It can come from a human or from a preexisting list. For example, “Where was Barack Obama born?” The generator then gets some candidate responses, let’s say Honolulu, Chicago, and Nairobi. Again, these options can come from a human, a list, or a search carried out by the language model itself.

    But before answering, the generator is also told whether it should answer the question correctly or incorrectly, depending on the results of a fair coin toss.

    If it’s heads, then the machine attempts to answer correctly. The generator sends the original question, along with its chosen response, to the discriminator. If the discriminator determines that the generator intentionally sent the correct response, they each get one point, as a kind of incentive.

    If the coin lands on tails, the generator sends what it thinks is the wrong answer. If the discriminator decides it was deliberately given the wrong response, they both get a point again. The idea here is to incentivize agreement. “It’s like teaching a dog a trick,” Jacob explained. “You give them a treat when they do the right thing.”

    The generator and discriminator also each start with some initial “beliefs.” These take the form of a probability distribution related to the different choices. For example, the generator may believe, based on the information it has gleaned from the internet, that there’s an 80 percent chance Obama was born in Honolulu, a 10 percent chance he was born in Chicago, a 5 percent chance of Nairobi, and a 5 percent chance of other places. The discriminator may start off with a different distribution. While the two “players” are still rewarded for reaching agreement, they also get docked points for deviating too far from their original convictions. That arrangement encourages the players to incorporate their knowledge of the world—again drawn from the internet—into their responses, which should make the model more accurate. Without something like this, they might agree on a totally wrong answer like Delhi, but still rack up points.
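    That scoring rule can be made concrete with a hypothetical Python sketch: both players earn a point when the discriminator's verdict matches the generator's coin-assigned intent, minus a penalty (here a KL divergence) for drifting from their initial beliefs. The penalty weight `lam`, the belief distributions, and the exact functional form are assumptions for illustration, not the paper's actual formulation.

    ```python
    import math

    def kl(p, q):
        """Kullback-Leibler divergence between belief distributions p and q."""
        return sum(p[k] * math.log(p[k] / q[k]) for k in p if p[k] > 0)

    def round_score(coin_heads, disc_says_correct, gen_policy, gen_prior,
                    disc_policy, disc_prior, lam=0.1):
        """Shared score for one round: a point when the discriminator's verdict
        matches the generator's coin-assigned intent, minus a penalty for each
        player's drift from its initial beliefs."""
        agreement = 1.0 if disc_says_correct == coin_heads else 0.0
        drift = kl(gen_policy, gen_prior) + kl(disc_policy, disc_prior)
        return agreement - lam * drift

    # Initial beliefs about "Where was Barack Obama born?" (from the example above).
    prior = {"Honolulu": 0.80, "Chicago": 0.10, "Nairobi": 0.05, "other": 0.05}
    # A policy that sharpens the prior slightly vs. one that abandons it.
    faithful   = {"Honolulu": 0.90, "Chicago": 0.05, "Nairobi": 0.025, "other": 0.025}
    contrarian = {"Honolulu": 0.05, "Chicago": 0.05, "Nairobi": 0.85,  "other": 0.05}

    # Both rounds reach agreement, but deserting the prior costs points.
    print(round_score(True, True, faithful, prior, faithful, prior))      # ~0.99
    print(round_score(True, True, contrarian, prior, contrarian, prior))  # ~0.55
    ```

    The drift penalty is what blocks the Delhi failure mode: two players who agree on a confidently wrong answer still bleed points for abandoning what they believed.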

    Steve Nadis

    Source link

  • The Hunt for Ultralight Dark Matter


    If or when SLAC’s planned project, the Light Dark Matter Experiment (LDMX), receives funding—a decision from the Department of Energy is expected in the next year or so—it will scan for light dark matter. The experiment is designed to accelerate electrons toward a target made of tungsten in End Station A. In the vast majority of collisions between a speeding electron and a tungsten nucleus, nothing interesting will happen. But rarely—on the order of once every 10,000 trillion hits, if light dark matter exists—the electron will instead interact with the nucleus via the unknown dark force to produce light dark matter, significantly draining the electron’s energy.

    That 10,000 trillion is actually the worst-case scenario for light dark matter. It’s the lowest rate at which you can produce dark matter to match thermal-relic measurements. But Schuster says light dark matter might arise in upward of one in every 100 billion impacts. If so, then with the planned collision rate of the experiment, “that’s an inordinate amount of dark matter that you can produce.”
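    The stakes of those odds are easy to work out. Here is a back-of-envelope sketch; the number of electrons delivered to the target is assumed purely for illustration, since the article does not state LDMX's planned collision count.

    ```python
    # Back-of-envelope: expected dark matter production events.
    # The electrons-on-target figure below is an assumption for illustration.
    electrons_on_target = 1e16

    worst_case_prob = 1 / 1e16   # "once every 10,000 trillion hits"
    optimistic_prob = 1 / 1e11   # "one in every 100 billion impacts"

    print(electrons_on_target * worst_case_prob)   # ~1 event over the full run
    print(electrons_on_target * optimistic_prob)   # ~100,000 events
    ```

    The five-orders-of-magnitude gap between those scenarios is why the run time quoted below matters: the experiment must accumulate enough collisions to be sensitive even in the worst case.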

    LDMX will need to run for three to five years, Nelson said, to definitively detect or rule out thermal relic light dark matter.

    Ultralight Dark Matter

    Other dark matter hunters have their experiments tuned for a different candidate. Ultralight dark matter is axionlike but no longer obliged to solve the strong CP problem. Because of this, it can be much more lightweight than ordinary axions, as light as 10 billionths of a trillionth of the electron’s mass. That tiny mass corresponds to a wave with a vast wavelength, as long as a small galaxy. In fact, the mass can’t be any smaller because if it were, the even longer wavelengths would mean that dark matter could not be concentrated around galaxies, as astronomers observe.
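    The link between mass and wavelength in this argument is the de Broglie relation, wavelength = h/(mv): the lighter the particle, the longer its wave. A sketch of that scaling, using an assumed halo velocity of about 220 kilometers per second and the commonly cited "fuzzy dark matter" benchmark mass of 1e-22 eV (both numbers are illustrative, not taken from the article):

    ```python
    H = 6.626e-34         # Planck constant, J*s
    EV_TO_KG = 1.783e-36  # mass of 1 eV/c^2 in kilograms
    V_HALO = 2.2e5        # assumed dark matter velocity in the galaxy, m/s

    def de_broglie_wavelength(mass_ev):
        """Wavelength in meters of a dark matter wave with the given mass (eV)."""
        return H / (mass_ev * EV_TO_KG * V_HALO)

    # Lighter particles make longer waves; at ~1e-22 eV the wave spans
    # roughly 550 parsecs, comparable to a small galaxy's core.
    print(de_broglie_wavelength(1e-22))  # ~1.7e19 meters
    ```

    Because the wavelength grows as the mass shrinks, a still-smaller mass would smear the wave over scales too large to clump around galaxies, which is the lower bound described above.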

    Ultralight dark matter is so incredibly minuscule that the dark-force particle needed to mediate its interactions is thought to be massive. “There’s no name given to these mediators,” Schuster said, “because it’s outside of any possible experiment. It has to be there [in the theory] for consistency, but we don’t worry about them.”

    The origin story for ultralight dark matter particles depends on the particular theoretical model, but Toro says they would have arisen after the Big Bang, so the thermal-relic argument is irrelevant. There’s a different motivation for thinking about them. The particles naturally follow from string theory, a candidate for the fundamental theory of physics. These feeble particles arise from the ways that six tiny dimensions might be curled up or “compactified” at each point in our 4D universe, according to string theory. “The existence of light axionlike particles is strongly motivated by many kinds of string compactifications,” said Jessie Shelton, a physicist at the University of Illinois, “and it’s something that we should take seriously.”

    Rather than trying to create dark matter using an accelerator, experiments looking for axions and ultralight dark matter listen for the dark matter that supposedly surrounds us. Based on its gravitational effects, dark matter seems to be distributed most densely near the Milky Way’s center, but one estimate suggests that even out here on Earth, we can expect dark matter to have a density of almost half a proton’s mass per cubic centimeter. Experiments try to detect this ever-present dark matter using powerful magnetic fields. In theory, the ethereal dark matter will occasionally absorb a photon from the strong magnetic field and convert it into a microwave photon, which an experiment can detect.

    Lyndie Chiou

    Source link

  • Does String Theory Actually Describe the World? AI May Be Able to Tell


    A group led by string theory veterans Burt Ovrut of the University of Pennsylvania and Andre Lukas of Oxford went further. They too started with Ruehle’s metric-calculating software, which Lukas had helped develop. Building on that foundation, they added an array of 11 neural networks to handle the different types of sprinkles. These networks allowed them to calculate an assortment of fields that could take on a richer variety of shapes, creating a more realistic setting that can’t be studied with any other techniques. This army of machines learned the metric and the arrangement of the fields, calculated the Yukawa couplings, and spit out the masses of three types of quarks. It did all this for six differently shaped Calabi-Yau manifolds. “This is the first time anybody has been able to calculate them to that degree of accuracy,” Anderson said.

    None of those Calabi-Yaus underlies our universe, because two of the quarks have identical masses, while the six varieties in our world come in three tiers of masses. Rather, the results represent a proof of principle that machine-learning algorithms can take physicists from a Calabi-Yau manifold all the way to specific particle masses.

    “Until now, any such calculations would have been unthinkable,” said Constantin, a member of the group based at Oxford.

    Numbers Game

    The neural networks choke on doughnuts with more than a handful of holes, and researchers would eventually like to study manifolds with hundreds. And so far, the researchers have considered only rather simple quantum fields. To go all the way to the standard model, Ashmore said, “you might need a more sophisticated neural network.”

    Bigger challenges loom on the horizon. Attempting to find our particle physics in the solutions of string theory—if it’s in there at all—is a numbers game. The more sprinkle-laden doughnuts you can check, the more likely you are to find a match. After decades of effort, string theorists can finally check doughnuts and compare them with reality: the masses and couplings of the elementary particles we observe. But even the most optimistic theorists recognize that the odds of finding a match by blind luck are cosmically low. The number of Calabi-Yau doughnuts alone may be infinite. “You need to learn how to game the system,” Ruehle said.

    One approach is to check thousands of Calabi-Yau manifolds and try to suss out any patterns that could steer the search. By stretching and squeezing the manifolds in different ways, for instance, physicists might develop an intuitive sense of what shapes lead to what particles. “What you really hope is that you have some strong reasoning after looking at particular models,” Ashmore said, “and you stumble into the right model for our world.”

    Lukas and colleagues at Oxford plan to start that exploration, prodding their most promising doughnuts and fiddling more with the sprinkles as they try to find a manifold that produces a realistic population of quarks. Constantin believes that they will find a manifold reproducing the masses of the rest of the known particles in a matter of years.

    Other string theorists, however, think it’s premature to start scrutinizing individual manifolds. Thomas Van Riet of KU Leuven is a string theorist pursuing the “swampland” research program, which seeks to identify features shared by all mathematically consistent string theory solutions—such as the extreme weakness of gravity relative to the other forces. He and his colleagues aspire to rule out broad swaths of string solutions—that is, possible universes—before they even start to think about specific doughnuts and sprinkles.

    “It’s good that people do this machine-learning business, because I’m sure we will need it at some point,” Van Riet said. But first “we need to think about the underlying principles, the patterns. What they’re asking about is the details.”

    Charlie Wood

    Source link

  • The Complex Social Lives of Viruses


    The original version of this story appeared in Quanta Magazine.

    Ever since viruses came to light in the late 1800s, scientists have set them apart from the rest of life. Viruses were far smaller than cells, and inside their protein shells they carried little more than genes. They could not grow, copy their own genes, or do much of anything. Researchers assumed that each virus was a solitary particle drifting alone through the world, able to replicate only if it happened to bump into the right cell that could take it in.

    This simplicity was what attracted many scientists to viruses in the first place, said Marco Vignuzzi, a virologist at the Singapore Agency for Science, Technology and Research Infectious Diseases Labs. “We were trying to be reductionist.”

    That reductionism paid off. Studies on viruses were crucial to the birth of modern biology. Lacking the complexity of cells, they revealed fundamental rules about how genes work. But viral reductionism came at a cost, Vignuzzi said: By assuming viruses are simple, you blind yourself to the possibility that they might be complicated in ways you don’t know about yet.

    For example, if you think of viruses as isolated packages of genes, it would be absurd to imagine them having a social life. But Vignuzzi and a new school of like-minded virologists don’t think it’s absurd at all. In recent decades, they have discovered some strange features of viruses that don’t make sense if viruses are lonely particles. They instead are uncovering a marvelously complex social world of viruses. These sociovirologists, as the researchers sometimes call themselves, believe that viruses make sense only as members of a community.

    Granted, the social lives of viruses aren’t quite like those of other species. Viruses don’t post selfies to social media, volunteer at food banks, or commit identity theft like humans do. They don’t fight with allies to dominate a troop like baboons; they don’t collect nectar to feed their queen like honeybees; they don’t even congeal into slimy mats for their common defense like some bacteria do. Nevertheless, sociovirologists believe that viruses do cheat, cooperate, and interact in other ways with their fellow viruses.

    The field of sociovirology is still young and small. The first conference dedicated to the social life of viruses took place in 2022, and the second will take place this June. A grand total of 50 people will be in attendance. Still, sociovirologists argue that the implications of their new field could be profound. Diseases like influenza don’t make sense if we think of viruses in isolation from one another. And if we can decipher the social life of viruses, we might be able to exploit it to fight back against the diseases some of them create.

    Under Our Noses

    Some of the most important evidence for the social life of viruses has been sitting in plain view for nearly a century. After the discovery of the influenza virus in the early 1930s, scientists figured out how to grow stocks of the virus by injecting it into a chicken egg and letting it multiply inside. The researchers could then use the new viruses to infect lab animals for research or inject them into new eggs to keep growing new viruses.

    In the late 1940s, the Danish virologist Preben von Magnus was growing viruses when he noticed something odd. Many of the viruses produced in one egg could not replicate when he injected them into another. By the third cycle of transmission, only one in 10,000 viruses could still replicate. But in the cycles that followed, the defective viruses became rarer and the replicating ones bounced back. Von Magnus suspected that the viruses that couldn’t replicate had not finished developing, and so he called them “incomplete.”

    Carl Zimmer

    Source link

  • NASA’s Quest to Touch the Sun


    The original version of this story appeared in Quanta Magazine.

    Our sun is the best-observed star in the entire universe.

    We see its light every day. For centuries, scientists have tracked the dark spots dappling its radiant face, while in recent decades, telescopes in space and on Earth have scrutinized sunbeams in wavelengths spanning the electromagnetic spectrum. Experiments have also sniffed the sun’s atmosphere, captured puffs of the solar wind, collected solar neutrinos and high-energy particles, and mapped our star’s magnetic field—or tried to, since we have yet to really observe the polar regions that are key to learning about the sun’s inner magnetic structure.

    For all that scrutiny, however, one crucial question remained embarrassingly unsolved. At its surface, the sun is a toasty 6,000 degrees Celsius. But the outer layers of its atmosphere, called the corona, can be a blistering—and perplexing—1 million degrees hotter.

    You can see that searing sheath of gas during a total solar eclipse, as happened on April 8 above a swath of North America. If you were in the path of totality, you could see the corona as a glowing halo around the moon-shadowed sun.

    This year, that halo looked different than the one that appeared during the last North American eclipse, in 2017. Not only is the sun more active now, but you were looking at a structure that we—the scientists who study our home star—have finally come to understand. Observing the sun from afar wasn’t good enough for us to grasp what heats the corona. To solve this and other mysteries, we needed a sun-grazing space probe.

    That spacecraft—NASA’s Parker Solar Probe—launched in 2018. As it loops around the sun, dipping in and out of the solar corona, it has collected data that shows us how small-scale magnetic activity within the solar atmosphere makes the solar corona almost inconceivably hot.

    From Surface to Sheath

    To begin to understand that roasting corona, we need to consider magnetic fields.

    The sun’s magnetic engine, called the solar dynamo, lies about 200,000 kilometers beneath the sun’s surface. As it churns, that engine drives solar activity, which waxes and wanes over periods of roughly 11 years. When the sun is more active, solar flares, sunspots, and outbursts increase in intensity and frequency (as is happening now, near solar maximum).

    At the sun’s surface, magnetic fields accumulate at the boundaries of churning convective cells, known as supergranules, which look like bubbles in a pan of boiling oil on the stove. The constantly boiling solar surface concentrates and strengthens those magnetic fields at the cells’ edges. Those amplified fields then launch transient jets and nanoflares as they interact with solar plasma.


    CAPTION: These churning convective cells on the sun’s surface, each approximately the size of the state of Texas, are closely connected to the magnetic activity that heats the sun’s corona.
    CREDIT: NSO/NSF/AURA

    Magnetic fields can also erupt through the sun’s surface and produce larger-scale phenomena. In regions where the field is strong, you see dark sunspots and giant magnetic loops. In most places, especially in the lower solar corona and near sunspots, these magnetic arcs are “closed,” with both ends attached to the sun. These closed loops come in various sizes—from minuscule ones to the dramatic, blazing arcs seen during eclipses.

    Thomas Zurbuchen

    Source link

  • An Old Abstract Field of Math Is Unlocking the Deep Complexity of Spacecraft Orbits


    The original version of this story appeared in Quanta Magazine.

    In October, a Falcon Heavy rocket is scheduled to launch from Cape Canaveral in Florida, carrying NASA’s Europa Clipper mission. The $5 billion mission is designed to find out if Europa, Jupiter’s fourth-largest moon, can support life. But because Europa is constantly bombarded by intense radiation created by Jupiter’s magnetic field, the Clipper spacecraft can’t orbit the moon itself. Instead, it will slide into an eccentric orbit around Jupiter and gather data by repeatedly swinging by Europa—53 times in total—before retreating from the worst of the radiation. Every time the spacecraft rounds Jupiter, its path will be slightly different, ensuring that it can take pictures and gather data from Europa’s poles to its equator.

    To plan convoluted tours like this one, trajectory planners use computer models that meticulously calculate the trajectory one step at a time. The planning takes hundreds of mission requirements into account, and it’s bolstered by decades of mathematical research into orbits and how to join them into complicated tours. Mathematicians are now developing tools that they hope will yield a more systematic understanding of how orbits relate to one another.

    “What we have is the previous computations that we’ve done, that guide us as we do the current computations. But it’s not a complete picture of all the options that we have,” said Daniel Scheeres, an aerospace engineer at the University of Colorado, Boulder.

    “I think that was my biggest frustration when I was a student,” said Dayung Koh, an engineer at NASA’s Jet Propulsion Laboratory. “I know these orbits are there, but I don’t know why.” Given the expense and complexity of missions to the moons of Jupiter and Saturn, not knowing why orbits are where they are is a problem. What if there is a completely different orbit that could get the job done with fewer resources? As Koh said: “Did I find them all? Are there more? I can’t tell that.”

    After getting her doctorate from the University of Southern California in 2016, Koh grew interested in how orbits can be cataloged into families. Jovian orbits that are far from Europa form such a family; so do orbits close to Europa. But other families are less obvious. For instance, for any two bodies, like Jupiter and Europa, there are intermediate points where the two bodies’ gravitational effects balance, creating equilibrium points. Spacecraft can orbit such a point, even though there is nothing at the center of the orbit. These orbits form a family called Lyapunov orbits. Add a little energy to such an orbit by firing a spacecraft engine, and at first you’ll stay in the same family. But add enough, and you’ll cross over into another family—say, one that includes Jupiter inside its orbits. Some orbit families might require less fuel than others, remain in sunlight at all times, or have other useful features.
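    The balance point between two bodies can be located numerically. The sketch below uses approximate Jupiter and Europa values and, for simplicity, balances only the two static gravitational pulls, ignoring the orbital motion that defines the true Lagrange point, so the result is illustrative rather than mission-grade.

    ```python
    def balance_point(m1, m2, a):
        """Distance from body 1 at which the two gravitational pulls cancel.
        Static balance only; the true L1 point also accounts for orbital
        motion, so this is a simplified illustration."""
        f = lambda r: m1 / r**2 - m2 / (a - r)**2   # net pull (G cancels out)
        lo, hi = 1e-6 * a, (1 - 1e-6) * a
        for _ in range(200):                        # bisection on the sign change
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    M_JUPITER = 1.898e27   # kg
    M_EUROPA = 4.8e22      # kg
    SEPARATION = 6.71e8    # Jupiter-Europa distance, meters

    r = balance_point(M_JUPITER, M_EUROPA, SEPARATION)
    print(SEPARATION - r)  # ~3.4e6 m: the balance point sits ~3,400 km from Europa
    ```

    Because Jupiter is so much heavier, the balance point hugs Europa; orbits around such points are the seeds of the Lyapunov family described above.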

    Dayung Koh, an engineer at NASA’s Jet Propulsion Laboratory, is trying to come to a systematic understanding of how orbits in a planetary system relate to one another.

    PHOTO: Courtesy of Dayung Koh

    Leila Sloman

    Source link

  • Here’s a Clever Way to Uncover America’s Voting Deserts


    The original version of this story appeared in Quanta Magazine.

    In Georgia’s 2020 primary election, some voters in Atlanta waited over 10 hours to cast a ballot. One reason for the long lines was that almost 10 percent of Georgia’s polling sites had closed over the preceding seven years, despite an influx of about 2 million voters. These closures were disproportionately concentrated in predominantly Black areas that tended to vote Democratic.

    But pinpointing the locations of “voting deserts” isn’t as straightforward as it might seem. Sometimes a lack of capacity is reflected in long waits at the polls, but other times the problem is the distance to the nearest polling place. Combining these factors in a systematic way is tricky.

    In a paper due to be published this summer in the journal SIAM Review, Mason Porter, a mathematician at the University of California, Los Angeles, and his students used tools from topology to do just that. Abigail Hickok, one of the paper’s coauthors, conceived the idea after seeing images of long lines in Atlanta. “Voting was on my mind a lot, partly because it was an especially anxiety-inducing election,” she said.

    Topologists study the underlying properties and spatial relations of geometric shapes under transformation. Two shapes are considered topologically equivalent if one can deform into the other via continuous movements without tearing, gluing, or introducing new holes.

    At first glance, topology would seem to be a poor fit for the problem of polling site placement. Topology concerns itself with continuous shapes, and polling sites are at discrete locations. But in recent years, topologists have adapted their tools to work on discrete data by creating graphs of points connected by lines and then analyzing the properties of those graphs. Hickok said these techniques are useful not only for understanding the distribution of polling places but also for studying who has better access to hospitals, grocery stores, and parks.

    That’s where the topology begins.

    Imagine creating tiny circles around each point on the graph. The circles start with a radius of zero, but they grow with time. Specifically, when the time exceeds the wait time at a given polling place, the circle will begin to expand. As a consequence, locations with shorter wait times will have bigger circles—they start growing first—and locations with longer wait times will have smaller ones.

    Some circles will eventually touch each other. When this happens, draw a line between the points at their centers. If multiple circles overlap, connect all those points into “simplices,” which is just a general term for shapes such as triangles (a 2-simplex) and tetrahedrons (a 3-simplex).
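    The circle-growing construction can be sketched directly. In this toy Python example, each site's circle starts growing at unit speed once the clock passes its wait time, and an edge between two sites appears at the first moment their circles touch; the sites, coordinates, and wait times are invented for illustration (a real analysis would feed these appearance times into a persistent homology library).

    ```python
    from itertools import combinations

    def edge_time(d, w1, w2):
        """First time the circles around two sites touch: each circle starts
        growing at unit speed once the clock passes that site's wait time."""
        if d >= abs(w1 - w2):
            return (d + w1 + w2) / 2   # both circles are growing when they meet
        return d + min(w1, w2)         # the earlier circle reaches the other site first

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    # Hypothetical polling sites: (x, y) position in travel-time units, wait time.
    sites = {
        "A": ((0.0, 0.0), 5.0),
        "B": ((6.0, 0.0), 1.0),
        "C": ((0.0, 8.0), 30.0),
    }

    for (a, (pa, wa)), (b, (pb, wb)) in combinations(sites.items(), 2):
        print(a, b, edge_time(dist(pa, pb), wa, wb))  # A-B 6.0, A-C 13.0, B-C 11.0
    ```

    In this toy data, both edges touching site C appear late, the signature of a region that remains poorly covered as the circles grow: a candidate voting desert.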


    Lyndie Chiou

    Source link

  • The Quest to Map the Inside of the Proton


    “How are matter and energy distributed?” asked Peter Schweitzer, a theoretical physicist at the University of Connecticut. “We don’t know.”

    Schweitzer has spent most of his career thinking about the gravitational side of the proton. Specifically, he’s interested in a matrix of properties of the proton called the energy-momentum tensor. “The energy-momentum tensor knows everything there is to be known about the particle,” he said.

    In Albert Einstein’s theory of general relativity, which casts gravitational attraction as objects following curves in space-time, the energy-momentum tensor tells space-time how to bend. It describes, for instance, the arrangement of energy (or, equivalently, mass)—the source of the lion’s share of space-time twisting. It also tracks information about how momentum is distributed, as well as where there will be compression or expansion, which can also lightly curve space-time.

    If we could learn the shape of space-time surrounding a proton, Russian and American physicists independently worked out in the 1960s, we could infer all the properties indexed in its energy-momentum tensor. Those include the proton’s mass and spin, which are already known, along with the arrangement of the proton’s pressures and forces, a collective property physicists refer to as the “Druck term,” after the word for pressure in German. This term is “as important as mass and spin, and nobody knows what it is,” Schweitzer said—though that’s starting to change.

    In the ’60s, it seemed as if measuring the energy-momentum tensor and calculating the Druck term would require a gravitational version of the usual scattering experiment: You fire a massive particle at a proton and let the two exchange a graviton—the hypothetical particle that makes up gravitational waves—rather than a photon. But due to the extreme weakness of gravity, physicists expect graviton scattering to occur 39 orders of magnitude more rarely than photon scattering. Experiments can’t possibly detect such a weak effect.

    “I remember reading about this when I was a student,” said Volker Burkert, a member of the Jefferson Lab team. The takeaway was that “we probably will never be able to learn anything about mechanical properties of particles.”

    Gravity Without Gravity

    Gravitational experiments are still unimaginable today. But research in the late 1990s and early 2000s by the physicists Xiangdong Ji and, working separately, the late Maxim Polyakov revealed a workaround.

    The general scheme is the following. When you fire an electron lightly at a proton, it usually delivers a photon to one of the quarks and glances off. But in fewer than one in a billion events, something special happens. The incoming electron sends in a photon. A quark absorbs it and then emits another photon a heartbeat later. The key difference is that this rare event involves two photons instead of one—both incoming and outgoing photons. Ji’s and Polyakov’s calculations showed that if experimentalists could collect the resulting electron, proton and photon, they could infer from the energies and momentums of these particles what happened with the two photons. And that two-photon experiment would be essentially as informative as the impossible graviton-scattering experiment.

    Charlie Wood

    Source link