ReportWire

Tag: Quanta Magazine

  • Game Theory Explains How Algorithms Can Drive Up Prices

    The original version of this story appeared in Quanta Magazine.

    Imagine a town with two widget merchants. Customers prefer cheaper widgets, so the merchants must compete to set the lowest price. Unhappy with their meager profits, they meet one night in a smoke-filled tavern to discuss a secret plan: If they raise prices together instead of competing, they can both make more money. But that kind of intentional price-fixing, called collusion, has long been illegal. The widget merchants decide not to risk it, and everyone else gets to enjoy cheap widgets.

    For well over a century, US law has followed this basic template: Ban those backroom deals, and fair prices should be maintained. These days, it’s not so simple. Across broad swaths of the economy, sellers increasingly rely on computer programs called learning algorithms, which repeatedly adjust prices in response to new data about the state of the market. These are often much simpler than the “deep learning” algorithms that power modern artificial intelligence, but they can still be prone to unexpected behavior.

    So how can regulators ensure that algorithms set fair prices? Their traditional approach won’t work, as it relies on finding explicit collusion. “The algorithms definitely are not having drinks with each other,” said Aaron Roth, a computer scientist at the University of Pennsylvania.

    Yet a widely cited 2019 paper showed that algorithms could learn to collude tacitly, even when they weren’t programmed to do so. A team of researchers pitted two copies of a simple learning algorithm against each other in a simulated market, then let them explore different strategies for increasing their profits. Over time, each algorithm learned through trial and error to retaliate when the other cut prices—dropping its own price by some huge, disproportionate amount. The end result was high prices, backed up by mutual threat of a price war.
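    The experiment described above can be sketched with a toy simulation. This is a hedged, minimal illustration of that kind of setup, not the 2019 paper's actual model: the price grid, zero cost, and winner-takes-the-market demand rule are all simplifying assumptions. Two tabular Q-learners repeatedly set prices, each conditioning only on the rival's last price:

```python
import random

PRICES = [1, 2, 3, 4]          # illustrative price grid; unit cost is 0
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def profit(p_own, p_rival):
    """Simplified Bertrand demand: the cheaper seller takes the market."""
    if p_own < p_rival:
        return p_own           # undercut the rival: win all demand
    if p_own == p_rival:
        return p_own / 2       # tie: split demand
    return 0                   # undercut by the rival: sell nothing

def train(steps=50_000, seed=0):
    rng = random.Random(seed)
    # Q[i][(rival_last_price, my_price)] -> estimated long-run profit
    q = [{(s, a): 0.0 for s in PRICES for a in PRICES} for _ in range(2)]
    state = [rng.choice(PRICES), rng.choice(PRICES)]  # last prices set
    for _ in range(steps):
        acts = []
        for i in range(2):
            s = state[1 - i]   # each seller observes the rival's last price
            if rng.random() < EPS:
                acts.append(rng.choice(PRICES))        # explore
            else:
                acts.append(max(PRICES, key=lambda a: q[i][(s, a)]))
        for i in range(2):     # standard Q-learning update for each seller
            s, a = state[1 - i], acts[i]
            r = profit(acts[i], acts[1 - i])
            s2 = acts[1 - i]   # rival's new price is the next state
            best_next = max(q[i][(s2, b)] for b in PRICES)
            q[i][(s, a)] += ALPHA * (r + GAMMA * best_next - q[i][(s, a)])
        state = acts
    return q, state
```

    Depending on the run, policies learned this way can settle above the competitive price, sustained by the implicit threat of undercutting; the sketch makes no guarantee of that outcome and only shows the mechanics of the experiment.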

    Aaron Roth suspects that the pitfalls of algorithmic pricing may not have a simple solution. “The message of our paper is it’s hard to figure out what to rule out,” he said.

    Photograph: Courtesy of Aaron Roth

    Implicit threats like this also underpin many cases of human collusion. So if you want to guarantee fair prices, why not just require sellers to use algorithms that are inherently incapable of expressing threats?

    In a recent paper, Roth and four other computer scientists showed why this may not be enough. They proved that even seemingly benign algorithms that optimize for their own profit can sometimes yield bad outcomes for buyers. “You can still get high prices in ways that kind of look reasonable from the outside,” said Natalie Collina, a graduate student working with Roth who co-authored the new study.

    Researchers don’t all agree on the implications of the finding—a lot hinges on how you define “reasonable.” But it reveals how subtle the questions around algorithmic pricing can get, and how hard it may be to regulate.

    Ben Brubaker

    Source link

  • How Genes Have Harnessed Physics to Grow Living Things

    The original version of this story appeared in Quanta Magazine.

    Sip a glass of wine, and you will notice liquid continuously weeping down the wetted side of the glass. In 1855, James Thomson, brother of Lord Kelvin, explained in the Philosophical Magazine that these wine “tears” or “legs” result from the difference in surface tension between alcohol and water. “This fact affords an explanation of several very curious motions,” Thomson wrote. Little did he realize that the same effect, later named the Marangoni effect, might also shape how embryos develop.

    In March, a group of biophysicists in France reported that the Marangoni effect is responsible for the pivotal moment when a homogeneous blob of cells elongates and develops a head-and-tail axis — the first defining features of the organism it will become.

    The finding is part of a trend that defies the norm in biology. Typically, biologists try to characterize growth, development, and other biological processes as the result of chemical cues triggered by genetic instructions. But that picture has often seemed incomplete. Researchers now increasingly appreciate the role of mechanical forces in biology: forces that push and pull tissues in response to their material properties, steering growth and development in ways that genes cannot.

    Modern imaging and measurement techniques have opened scientists’ eyes to these forces by flooding the field with data that invites mechanical interpretations. “What has changed over the past decades is really the possibility to watch what happens live, and to see the mechanics in terms of cell movement, cell rearrangement, tissue growth,” said Pierre-François Lenne of Aix Marseille University, one of the researchers behind the recent study.

    The shift toward mechanical explanations has revived interest in pre-genetic models of biology. For example, in 1917 the Scottish biologist, mathematician, and classics scholar D’Arcy Thompson published On Growth and Form, which highlighted similarities between the shapes found among living organisms and those that emerge in nonliving matter. Thompson wrote the book as an antidote to what he thought was an excessive tendency to explain everything in terms of Darwinian natural selection. His thesis—that physics, too, shapes us—is coming back into vogue.

    Time-lapse movie of a gastruloid developing a head-to-tail axis.

    Video: Sham Tlili/CNRS

    “The hypothesis is that physics and mechanics can help us understand the biology at the tissue scale,” said Alexandre Kabla, a physicist and engineer at the University of Cambridge.

    The task now is to understand the interplay of causes, where genes and physics somehow act hand in hand to sculpt organisms.

    Grow With the Flow

    Mechanical models of embryo and tissue growth are not new, but biologists long lacked ways of testing these ideas. Just seeing embryos is difficult; they are small and scatter light in all directions, like frosted glass. But new microscopy and image analysis techniques have opened a clearer window on development.

    Lenne and his coworkers applied some of the new techniques to observe the motion of cells inside mouse gastruloids: bundles of stem cells that, as they grow, mimic the early stages of embryo growth.

    Anna Demming

    Source link

  • The Hidden Math of Ocean Waves

    In 2011, Deconinck and Oliveras simulated different disturbances with higher and higher frequencies and watched what happened to the Stokes waves. As they expected, for disturbances above a certain frequency, the waves persevered.

    But as the pair continued to dial up the frequency, they suddenly began to see destruction again. At first, Oliveras worried that there was a bug in the computer program. “Part of me was like, this can’t be right,” she said. “But the more I dug, the more it persisted.”

    In fact, as the frequency of the disturbance increased, an alternating pattern emerged. First there was an interval of frequencies where the waves became unstable. This was followed by an interval of stability, which was followed by yet another interval of instability, and so on.

    Deconinck and Oliveras published their finding as a counterintuitive conjecture: that this archipelago of instabilities stretches off to infinity. They called all the unstable intervals “isole”—the Italian word for “islands.”

    It was strange. The pair had no explanation for why instabilities would appear again, let alone infinitely many times. They at least wanted a proof that their startling observation was correct.

    Bernard Deconinck and Katie Oliveras uncovered a strange pattern in computational studies of wave stability.

    Photograph: Courtesy of Bernard Deconinck

    The Hidden Math of Ocean Waves

    Photograph: Courtesy of Katie Oliveras

    For years, no one could make any progress. Then, at the 2019 workshop, Deconinck approached Maspero and his team. He knew they had a lot of experience studying the math of wavelike phenomena in quantum physics. Perhaps they could figure out a way to prove that these striking patterns arise from the Euler equations.

    The Italian group got to work immediately. They started with the lowest set of frequencies that seemed to cause waves to die. First, they applied techniques from physics to represent each of these low-frequency instabilities as arrays, or matrices, of 16 numbers. These numbers encoded how the instability would grow and distort the Stokes waves over time. The mathematicians realized that if one of the numbers in the matrix was always zero, the instability would not grow, and the waves would live on. If the number was positive, the instability would grow and eventually destroy the waves.

    To show that this number was positive for the first batch of instabilities, the mathematicians had to compute a gigantic sum. It took 45 pages and nearly a year of work to solve it. Once they’d done so, they turned their attention to the infinitely many intervals of higher-frequency wave-killing disturbances—the isole.

    First, they figured out a general formula—another complicated sum—that would give them the number they needed for each isola. Then they used a computer program to solve the formula for the first 21 isole. (After that, the calculations got too complicated for the computer to handle.) The numbers were all positive, as expected—and they also seemed to follow a simple pattern that implied they would be positive for all the other isole as well.

    Joseph Howlett

    Source link

  • Unpicking How to Measure the Complexity of Knots

    The duo kept their program running in the background for over a decade. During that time, a couple of computers from their ragtag collection succumbed to overheating and even flames. “There was one that actually sent out sparks,” Brittenham said. “That was kind of fun.” (Those machines, he added, were “honorably retired.”)

    Then, in the fall of 2024, a paper about a failed attempt to use machine learning to disprove the additivity conjecture caught Brittenham and Hermiller’s attention. Perhaps, they thought, machine learning wasn’t the best approach for this particular problem: If a counterexample to the additivity conjecture was out there, it would be “a needle in a haystack,” Hermiller said. “That’s not quite what things like machine learning are about. They’re about trying to find patterns in things.”

    But it reinforced a suspicion the pair already had—that maybe their more carefully honed sneakernet could find the needle.

    The Tie That Binds

    Brittenham and Hermiller realized they could make use of the unknotting sequences they’d uncovered to look for potential counterexamples to the additivity conjecture.

    Imagine again that you have two knots whose unknotting numbers are 2 and 3, and you’re trying to unknot their connect sum. After one crossing change, you get a new knot. If the additivity conjecture is to be believed, then the original knot’s unknotting number should be 5, and this new knot’s should be 4.

    But what if this new knot’s unknotting number is already known to be 3? That implies that the original knot can be untied in just four steps, breaking the conjecture.
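    The arithmetic behind this test fits in a few lines. The function below is an illustrative restatement of the logic, not the authors' actual program; the name and framing are assumptions:

```python
def breaks_additivity(u1, u2, u_mid):
    """u1, u2: known unknotting numbers of the two summand knots.
    u_mid: the known unknotting number of the knot reached after one
    crossing change on their connect sum. Additivity conjectures that
    the connect sum needs exactly u1 + u2 changes; if one change plus
    u_mid more beats that total, the conjecture fails."""
    return 1 + u_mid < u1 + u2

print(breaks_additivity(2, 3, 4))  # expected middle knot -> False
print(breaks_additivity(2, 3, 3))  # the surprise case described -> True
```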

    “We get these middle knots,” Brittenham said. “What can we learn from them?”

    He and Hermiller already had the perfect tool for the occasion humming away on their suite of laptops: the database they’d spent the previous decade developing, with its upper bounds on the unknotting numbers of thousands of knots.

    The mathematicians started to add pairs of knots and work through the unknotting sequences of their connect sums. They focused on connect sums whose unknotting numbers had only been approximated in the loosest sense, with a big gap between their highest and lowest possible values. But that still left them with a massive list of knots to work through—“definitely in the tens of millions, and probably in the hundreds of millions,” Brittenham said.

    For months, their computer program applied crossing changes to these knots and compared the resulting knots to those in their database. One day in late spring, Brittenham checked the program’s output files, as he did most days, to see if anything interesting had turned up. To his great surprise, there was a line of text: “CONNECT SUM BROKEN.” It was a message he and Hermiller had coded into the program—but they’d never expected to actually see it.

    Leila Sloman

    Source link

  • Physicists Create a Thermometer for Measuring ‘Quantumness’

    The original version of this story appeared in Quanta Magazine.

    If there’s one law of physics that seems easy to grasp, it’s the second law of thermodynamics: Heat flows spontaneously from hotter bodies to colder ones. But now, gently and almost casually, Alexssandre de Oliveira Jr. has just shown me I didn’t truly understand it at all.

    Take this hot cup of coffee and this cold jug of milk, the Brazilian physicist said as we sat in a café in Copenhagen. Bring them into contact and, sure enough, heat will flow from the hot object to the cold one, just as the German scientist Rudolf Clausius first stated formally in 1850. However, in some cases, de Oliveira explained, physicists have learned that the laws of quantum mechanics can drive heat flow the opposite way: from cold to hot.

    This doesn’t really mean that the second law fails, he added as his coffee reassuringly cooled. It’s just that Clausius’ expression is the “classical limit” of a more complete formulation demanded by quantum physics.

    Physicists began to appreciate the subtlety of this situation more than two decades ago and have been exploring the quantum mechanical version of the second law ever since. Now, de Oliveira, a postdoctoral researcher at the Technical University of Denmark, and colleagues have shown that the kind of “anomalous heat flow” that’s enabled at the quantum scale could have a convenient and ingenious use.

    It can serve, they say, as an easy method for detecting “quantumness”—sensing, for instance, that an object is in a quantum “superposition” of multiple possible observable states, or that two such objects are entangled, with states that are interdependent—without destroying those delicate quantum phenomena. Such a diagnostic tool could be used to ensure that a quantum computer is truly using quantum resources to perform calculations. It might even help to sense quantum aspects of the force of gravity, one of the stretch goals of modern physics.

    All that’s needed, the researchers say, is to connect a quantum system to a second system that can store information about it, and to a heat sink: a body that’s able to absorb a lot of energy. With this setup, you can boost the transfer of heat to the heat sink, exceeding what would be permitted classically. Simply by measuring how hot the sink is, you could then detect the presence of superposition or entanglement in the quantum system.

    Philip Ball

    Source link

  • The ‘10 Martini’ Proof Connects Quantum Mechanics With Infinitely Intricate Mathematical Structures

    But in some ways, the proof was a bit unsatisfying. Jitomirskaya and Avila had used a method that applied only to certain irrational values of alpha. By combining it with an intermediate proof that had come before, they could say the problem was solved. But this combined proof wasn’t elegant. It was a patchwork quilt, each square stitched out of distinct arguments.

    Moreover, the proofs only settled the conjecture as it was originally stated, which involved making simplifying assumptions about the electron’s environment. More realistic situations are messier: Atoms in a solid are arranged in more complicated patterns, and magnetic fields aren’t quite constant. “You’ve verified it for this one model, but what does that have to do with reality?” said Simon Becker, a mathematician at the Swiss Federal Institute of Technology Zurich.

    These more realistic situations require you to tweak the part of the Schrödinger equation where alpha appears. And when you do, the 10-martini proof stops working. “This was always disturbing to me,” Jitomirskaya said.

    The breakdown of the proof in these broader contexts also implied that the beautiful fractal patterns that had emerged—the Cantor sets, the Hofstadter butterfly—were nothing more than a mathematical curiosity, something that would disappear once the equation was made more realistic.

    Avila and Jitomirskaya moved on to other problems. Even Hofstadter had doubts. If an experiment ever saw his butterfly, he’d written in Gödel, Escher, Bach, “I would be the most surprised person in the world.”

    But in 2013, a group of physicists at Columbia University captured his butterfly in a lab. They placed two thin layers of graphene in a magnetic field, then measured the energy levels of the graphene’s electrons. The quantum fractal emerged in all its glory. “Suddenly it went from a figment of the mathematician’s imagination to something practical,” Jitomirskaya said. “It became very unsettling.”

    Lyndie Chiou, Joseph Howlett

    Source link

  • A New Algorithm Makes It Faster to Find the Shortest Paths

    The original version of this story appeared in Quanta Magazine.

    If you want to solve a tricky problem, it often helps to get organized. You might, for example, break the problem into pieces and tackle the easiest pieces first. But this kind of sorting has a cost. You may end up spending too much time putting the pieces in order.

    This dilemma is especially relevant to one of the most iconic problems in computer science: finding the shortest path from a specific starting point in a network to every other point. It’s like a souped-up version of a problem you need to solve each time you move: learning the best route from your new home to work, the gym, and the supermarket.

    “Shortest paths is a beautiful problem that anyone in the world can relate to,” said Mikkel Thorup, a computer scientist at the University of Copenhagen.

    Intuitively, it should be easiest to find the shortest path to nearby destinations. So if you want to design the fastest possible algorithm for the shortest-paths problem, it seems reasonable to start by finding the closest point, then the next-closest, and so on. But to do that, you need to repeatedly figure out which point is closest. You’ll sort the points by distance as you go. There’s a fundamental speed limit for any algorithm that follows this approach: You can’t go any faster than the time it takes to sort.

    Forty years ago, researchers designing shortest-paths algorithms ran up against this “sorting barrier.” Now, a team of researchers has devised a new algorithm that breaks it. It doesn’t sort, and it runs faster than any algorithm that does.

    “The authors were audacious in thinking they could break this barrier,” said Robert Tarjan, a computer scientist at Princeton University. “It’s an amazing result.”

    The Frontier of Knowledge

    To analyze the shortest-paths problem mathematically, researchers use the language of graphs—networks of points, or nodes, connected by lines. Each link between nodes is labeled with a number called its weight, which can represent the length of that segment or the time needed to traverse it. There are usually many routes between any two nodes, and the shortest is the one whose weights add up to the smallest number. Given a graph and a specific “source” node, an algorithm’s goal is to find the shortest path to every other node.

    The most famous shortest-paths algorithm, devised by the pioneering computer scientist Edsger Dijkstra in 1956, starts at the source and works outward step by step. It’s an effective approach, because knowing the shortest path to nearby nodes can help you find the shortest paths to more distant ones. But because the end result is a sorted list of shortest paths, the sorting barrier sets a fundamental limit on how fast the algorithm can run.
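    Dijkstra's approach can be sketched as follows. This is a standard textbook rendering using a binary min-heap, not the new barrier-breaking algorithm; note how the heap effectively sorts nodes by distance as they are settled, which is exactly where the sorting barrier bites:

```python
import heapq

def dijkstra(graph, source):
    """Settle nodes in order of distance from the source.
    graph: {node: [(neighbor, weight), ...]} with nonnegative weights.
    Returns {node: shortest distance from source}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # closest unsettled node
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry; skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}
print(dijkstra(g, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}
```

    Because each node is finalized in increasing order of distance, the output doubles as a sorted list of shortest paths, illustrating why any algorithm of this shape cannot beat the cost of sorting.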

    Ben Brubaker

    Source link

  • The Mystery of How Quasicrystals Form

    The original version of this story appeared in Quanta Magazine.

    Since their discovery in 1982, exotic materials known as quasicrystals have bedeviled physicists and chemists. Their atoms arrange themselves into chains of pentagons, decagons, and other shapes to form patterns that never quite repeat. These patterns seem to defy physical laws and intuition. How can atoms possibly “know” how to form elaborate nonrepeating arrangements without an advanced understanding of mathematics?

    “Quasicrystals are one of those things that as a materials scientist, when you first learn about them, you’re like, ‘That’s crazy,’” said Wenhao Sun, a materials scientist at the University of Michigan.

    Recently, though, a spate of results has peeled back some of their secrets. In one study, Sun and collaborators adapted a method for studying crystals to determine that at least some quasicrystals are thermodynamically stable—their atoms won’t settle into a lower-energy arrangement. This finding helps explain how and why quasicrystals form. A second study has yielded a new way to engineer quasicrystals and observe them in the process of forming. And a third research group has logged previously unknown properties of these unusual materials.

    Historically, quasicrystals have been challenging to create and characterize.

    “There’s no doubt that they have interesting properties,” said Sharon Glotzer, a computational physicist who is also based at the University of Michigan but was not involved with this work. “But being able to make them in bulk, to scale them up, at an industrial level—[that] hasn’t felt possible, but I think that this will start to show us how to do it reproducibly.”

    Vikram Gavini, Sambit Das, Woohyeon Baek, Wenhao Sun, and Shibo Tan hold examples of geometric shapes that appear in quasicrystals. The University of Michigan researchers have shown that at least some quasicrystals are thermodynamically stable.

    Photograph: Marcin Szczepanski/Michigan Engineering

    ‘Forbidden’ Symmetries

    Nearly a decade before the Israeli physicist Dan Shechtman discovered the first examples of quasicrystals in the lab, the British mathematical physicist Roger Penrose thought up the “quasiperiodic”—almost but not quite repeating—patterns that would manifest in these materials.

    Penrose developed sets of tiles that could cover an infinite plane with no gaps or overlaps, in patterns that do not, and cannot, repeat. Unlike tessellations made of triangles, rectangles, and hexagons—shapes that are symmetric across two, three, four, or six axes, and which tile space in periodic patterns—Penrose tilings have “forbidden” fivefold symmetry. The tiles form pentagonal arrangements, yet pentagons can’t fit snugly side by side to tile the plane. So, whereas the tiles align along five axes and tessellate endlessly, different sections of the pattern only look similar; exact repetition is impossible. Penrose’s quasiperiodic tilings made the cover of Scientific American in 1977, five years before they made the jump from pure mathematics to the real world.

    Patchen Barss

    Source link

  • What Is Thirst?

    “There are only a couple of things that are so important for your body that there’s a completely innate drive to get it if you fall into deficiency,” Knight said. “Oxygen, food, water, and sodium.”

    However, animals like us do not experience the desire for salt as a powerful, controlling drive the way we do oxygen, food, and water. Sensors signal salt levels to the brain; in addition to the OVLT and SFO, sensors in the heart detect the stretching of atria and ventricles. But there is no analogous salt pang when we need it, the way a stomach churns for food or a scratchy throat cries out for water. Instead, the need to consume salt is mediated by taste and the brain’s reward pathways. “The taste of salt is bimodal,” Knight said. “It tastes good at low doses; at high doses it tastes disgusting, like drinking seawater.”

    Imagine the urge to eat a big bag of potato chips. If the body needs salt, those chips will cause a surge of pleasurable dopamine to flood the brain. If the body doesn’t need salt, that dopamine drip disappears. “It’s pretty much reinforcement learning,” said Yuki Oka, a neurobiologist at the California Institute of Technology who studies how the body maintains homeostasis. “More dopamine means a repeated behavior.”

    Everyone Thirsts Differently

    Scientists monitoring a river collect data and then have a choice about whether to act on their findings. Similarly, just because the brain measures the blood’s sodium levels doesn’t mean it has to act on that information.

    Take Elena Gracheva’s thirteen-lined ground squirrels. Gracheva, a neurophysiologist at the Yale School of Medicine, studies these rodents, native to North American grasslands, to understand how specific brain regions control thirst. The thirteen-lined ground squirrel is an ideal model for this, she said, because it hibernates for more than half the year, without eating or drinking. “They’re like monks,” Gracheva said. “They don’t go outside for eight months. They don’t have water in their underground burrow.” How do they not get thirsty?

    Elena Gracheva (left) has traced how the brains of thirteen-lined ground squirrels (right) suppress their thirst response during many months of hibernation.

    Courtesy of Gracheva Lab


    It isn’t that the squirrels don’t need water. They do. Their bodies cry out for it. But according to Gracheva’s research, during hibernation their brain ignores the body’s signals.

    In mammals, a drop in blood water levels (which means a simultaneous rise in salt concentration, all things being equal) triggers two coupled processes. The hypothalamus pumps out the hormone vasopressin, which tells the kidneys to retain water rather than let it out as urine, and the SFO kicks off the thirst drive to direct the animal to drink. However, while ground squirrels are hibernating, their vasopressin levels jump, but the animal still doesn’t drink. “The circuit for vasopressin was normal, but thirst neurons were downregulated,” Gracheva said. “These two pathways are uncoupled.” The body is trying to retain the water it has but does not act to consume more.

    The logic of the disrupted circuitry is extremely powerful. “Even if you wake them up in the middle of hibernation, they’re not going to drink,” Gracheva said.

    The underlying network that Gracheva studies in squirrels is universal in mammals, up to and including humans. But that same neurological logic doesn’t lead to the same behaviors. Humans drink a glass of water when they’re thirsty. Cats and rabbits mostly get water from the food they eat. Camels can metabolize their fat stores for water (fat oxidation produces carbon dioxide and water), but they also consume gallons of it and store it in their stomachs for when they need it later. Sea otters can drink ocean water and excrete urine that is saltier than the water they swim in; they are the only marine mammals to actively do this.

    How each animal manages water and salt is specialized to its ecosystem, lifestyle, and selective pressures. The question “What does it mean to be thirsty?” has no one answer. We each thirst in our own way.


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

    Dan Samorodnitsky

    Source link

  • Distillation Can Make AI Models Smaller and Cheaper

    The original version of this story appeared in Quanta Magazine.

    The Chinese AI company DeepSeek released a chatbot earlier this year called R1, which drew a huge amount of attention. Most of it focused on the fact that a relatively small and unknown company said it had built a chatbot that rivaled the performance of those from the world’s most famous AI companies, but using a fraction of the computer power and cost. As a result, the stocks of many Western tech companies plummeted; Nvidia, which sells the chips that run leading AI models, lost more stock value in a single day than any company in history.

    Some of that attention involved an element of accusation. Sources alleged that DeepSeek had obtained, without permission, knowledge from OpenAI’s proprietary o1 model by using a technique known as distillation. Much of the news coverage framed this possibility as a shock to the AI industry, implying that DeepSeek had discovered a new, more efficient way to build AI.

    But distillation, also called knowledge distillation, is a widely used tool in AI, a subject of computer science research going back a decade and a tool that big tech companies use on their own models. “Distillation is one of the most important tools that companies have today to make models more efficient,” said Enric Boix-Adsera, a researcher who studies distillation at the University of Pennsylvania’s Wharton School.

    Dark Knowledge

    The idea for distillation began with a 2015 paper by three researchers at Google, including Geoffrey Hinton, the so-called godfather of AI and a 2024 Nobel laureate. At the time, researchers often ran ensembles of models—“many models glued together,” said Oriol Vinyals, a principal scientist at Google DeepMind and one of the paper’s authors—to improve their performance. “But it was incredibly cumbersome and expensive to run all the models in parallel,” Vinyals said. “We were intrigued with the idea of distilling that onto a single model.”

    The researchers thought they might make progress by addressing a notable weak point in machine-learning algorithms: Wrong answers were all considered equally bad, regardless of how wrong they might be. In an image-classification model, for instance, “confusing a dog with a fox was penalized the same way as confusing a dog with a pizza,” Vinyals said. The researchers suspected that the ensemble models did contain information about which wrong answers were less bad than others. Perhaps a smaller “student” model could use the information from the large “teacher” model to more quickly grasp the categories it was supposed to sort pictures into. Hinton called this “dark knowledge,” invoking an analogy with cosmological dark matter.

    After discussing this possibility with Hinton, Vinyals developed a way to get the large teacher model to pass more information about the image categories to a smaller student model. The key was homing in on “soft targets” in the teacher model—where it assigns probabilities to each possibility, rather than firm this-or-that answers. One model, for example, calculated that there was a 30 percent chance that an image showed a dog, 20 percent that it showed a cat, 5 percent that it showed a cow, and 0.5 percent that it showed a car. By using these probabilities, the teacher model effectively revealed to the student that dogs are quite similar to cats, not so different from cows, and quite distinct from cars. The researchers found that this information would help the student learn how to identify images of dogs, cats, cows, and cars more efficiently. A big, complicated model could be reduced to a leaner one with barely any loss of accuracy.
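    The soft-target idea can be illustrated with a short sketch. This is a simplified assumption, not the code from the 2015 paper: the teacher's logits are softened with a temperature T greater than 1, and the student is trained to match the resulting distribution, so near-misses (dog versus cat) carry more signal than gross errors (dog versus car):

```python
import math

def soften(logits, T):
    """Temperature-scaled softmax: higher T spreads probability mass
    onto wrong-but-plausible classes, exposing 'dark knowledge'."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T):
    """Cross-entropy of the student's softened distribution against
    the teacher's softened distribution, for one training example."""
    p = soften(teacher_logits, T)   # teacher's soft targets
    q = soften(student_logits, T)   # student's current guesses
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

    The loss shrinks as the student's logits approach the teacher's, and it penalizes confusing a dog with a car more than confusing a dog with a cat, which is the extra information hard labels throw away.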

    Explosive Growth

    The idea was not an immediate hit. The paper was rejected from a conference, and Vinyals, discouraged, turned to other topics. But distillation arrived at an important moment. Around this time, engineers were discovering that the more training data they fed into neural networks, the more effective those networks became. The size of models soon exploded, as did their capabilities, but the costs of running them climbed in step with their size.

    Many researchers turned to distillation as a way to make smaller models. In 2018, for instance, Google researchers unveiled a powerful language model called BERT, which the company soon began using to help parse billions of web searches. But BERT was big and costly to run, so the next year, other developers distilled a smaller version sensibly named DistilBERT, which became widely used in business and research. Distillation gradually became ubiquitous, and it’s now offered as a service by companies such as Google, OpenAI, and Amazon. The original distillation paper, still published only on the arxiv.org preprint server, has now been cited more than 25,000 times.

    Because distillation requires access to the innards of the teacher model, it’s not possible for a third party to sneakily distill data from a closed-source model like OpenAI’s o1, as DeepSeek was thought to have done. That said, a student model could still learn quite a bit from a teacher model just by prompting it with certain questions and using the answers as training data—an almost Socratic approach to distillation.

    Meanwhile, other researchers continue to find new applications. In January, the NovaSky lab at UC Berkeley showed that distillation works well for training chain-of-thought reasoning models, which use multistep “thinking” to better answer complicated questions. The lab says its fully open source Sky-T1 model cost less than $450 to train, and it achieved similar results to a much larger open source model. “We were genuinely surprised by how well distillation worked in this setting,” said Dacheng Li, a Berkeley doctoral student and co-student lead of the NovaSky team. “Distillation is a fundamental technique in AI.”


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

    Amos Zeeberg

    Source link

  • The Quest to Find the Longest-Running Simple Computer Program

    But just how much harder? In 1962, the mathematician Tibor Radó invented a new way to explore this question through what he called the busy beaver game. To play, start by choosing a specific number of rules—call that number n. Your goal is to find the n-rule Turing machine that runs the longest before eventually halting. This machine is called the busy beaver, and the corresponding busy beaver number, BB(n), is the number of steps that it takes.

    In principle, if you want to find the busy beaver for any given n, you just need to do a few things. First, list out all the possible n-rule Turing machines. Next, use a computer program to simulate running each machine. Look for telltale signs that machines will never halt—for example, many machines will fall into infinite repeating loops. Discard all these non-halting machines. Finally, record how many steps every other machine took before halting. The one with the longest runtime is your busy beaver.

    In practice, this gets tricky. For starters, the number of possible machines grows rapidly with each new rule. Analyzing them all individually would be hopeless, so you’ll need to write a custom computer program to classify and discard machines. Some machines are easy to classify: They either halt quickly or fall into easily identifiable infinite loops. But others run for a long time without displaying any obvious pattern. For these machines, the halting problem deserves its fearsome reputation.

    The more rules you add, the more computing power you need. But brute force isn’t enough. Some machines run for so long before halting that simulating them step by step is impossible. You need clever mathematical tricks to measure their runtimes.

    “Technology improvements definitely help,” said Shawn Ligocki, a software engineer and longtime busy beaver hunter. “But they only help so far.”

    End of an Era

    Busy beaver hunters started chipping away at the BB(6) problem in earnest in the 1990s and 2000s, during an impasse in the BB(5) hunt. Among them were Shawn Ligocki and his father, Terry, an applied mathematician who ran their search program in the off hours on powerful computers at Lawrence Berkeley National Laboratory. In 2007, they found a six-rule Turing machine that broke the record for the longest runtime: The number of steps it took before halting had nearly 3,000 digits. That’s a colossal number by any ordinary measure. But it’s not too big to write down. In 12-point font, those 3,000 digits will just about cover a single sheet of paper.

    In 2022, Shawn Ligocki discovered a six-rule Turing machine whose runtime has more digits than the number of atoms in the universe.

    Photograph: Kira Treibergs

    Three years later, a Slovakian undergraduate computer science student named Pavel Kropitz decided to tackle the BB(6) hunt as a senior thesis project. He wrote his own search program and set it up to run in the background on a network of 30 computers in a university lab. After a month he found a machine that ran far longer than the one discovered by the Ligockis—a new “champion,” in the lingo of busy beaver hunters.

    “I was lucky, because people in the lab were already complaining about my CPU usage and I had to scale back a bit,” Kropitz wrote in a direct message exchange on the Busy Beaver Challenge Discord server. After another month of searching, he broke his own record with a machine whose runtime had over 30,000 digits—enough to fill about 10 pages.

    Ben Brubaker

    Source link

  • The New Math of Quantum Cryptography

    The original version of this story appeared in Quanta Magazine.

    Hard problems are usually not a welcome sight. But cryptographers love them. That’s because certain hard math problems underpin the security of modern encryption. Any clever trick for solving them will doom most forms of cryptography.

    Several years ago, researchers found a radically new approach to encryption that lacks this potential weak spot. The approach exploits the peculiar features of quantum physics. But unlike earlier quantum encryption schemes, which only work for a few special tasks, the new approach can accomplish a much wider range of tasks. And it could work even if all the problems at the heart of ordinary “classical” cryptography turn out to be easily solvable.

    But this striking discovery relied on unrealistic assumptions. The result was “more of a proof of concept,” said Fermi Ma, a cryptography researcher at the Simons Institute for the Theory of Computing in Berkeley, California. “It is not a statement about the real world.”

    Now, a new paper by two cryptographers has laid out a path to quantum cryptography without those outlandish assumptions. “This paper is saying that if certain other conjectures are true, then quantum cryptography must exist,” Ma said.

    Castle in the Sky

    You can think of modern cryptography as a tower with three essential parts. The first part is the bedrock deep beneath the tower, which is made of hard mathematical problems. The tower itself is the second part—there you can find specific cryptographic protocols that let you send private messages, sign digital documents, cast secret ballots, and more.

    In between, securing those day-to-day applications to mathematical bedrock, is a foundation made of building blocks called one-way functions. They’re responsible for the asymmetry inherent in any encryption scheme. “It’s one-way because you can encrypt messages, but you can’t decrypt them,” said Mark Zhandry, a cryptographer at NTT Research.

    In the 1980s, researchers proved that cryptography built atop one-way functions would ensure security for many different tasks. But decades later, they still aren’t certain that the bedrock is strong enough to support it. The trouble is that the bedrock is made of special hard problems—technically known as NP problems—whose defining feature is that it’s easy to check whether any candidate solution is correct. (For example, breaking a number into its prime factors is an NP problem: hard to do for large numbers, but easy to check.)

    Many of these problems seem intrinsically difficult, but computer scientists haven’t been able to prove it. If someone discovers an ingenious algorithm for rapidly solving the hardest NP problems, the bedrock will crumble, and the whole tower will collapse.

    Unfortunately, you can’t simply move your tower elsewhere. The tower’s foundation—one-way functions—can only sit on a bedrock of NP problems.

    To build a tower on harder problems, cryptographers would need a new foundation that isn’t made of one-way functions. That seemed impossible until just a few years ago, when researchers realized that quantum physics could help.

    Ben Brubaker

    Source link

  • These Newly Discovered Cells Breathe in Two Ways

    The team members went through a process of incrementally determining what elements and molecules the bacterial strain could grow on. They already knew it could use oxygen, so they tested other combinations in the lab. When oxygen was absent, RSW1 could process hydrogen gas and elemental sulfur—chemicals it would find spewing from a volcanic vent—and create hydrogen sulfide as a product. Yet while the cells were technically alive in this state, they didn’t grow or replicate. They were making a small amount of energy—just enough to stay alive, nothing more. “The cell was just sitting there spinning its wheels without getting any real metabolic or biomass gain out of it,” Boyd said.

    Then the team added oxygen back into the mix. As expected, the bacteria grew faster. But, to the researchers’ surprise, RSW1 also still produced hydrogen sulfide gas, as if it were anaerobically respiring. In fact, the bacteria seemed to be breathing both aerobically and anaerobically at once, and benefiting from the energy of both processes. This double respiration went further than the earlier reports: The cell wasn’t just producing sulfide in the presence of oxygen but was also performing both conflicting processes at the same time. Bacteria simply shouldn’t be able to do that.

    “That set us down this path of ‘OK, what the heck’s really going on here?’” Boyd said.

    Breathing Two Ways

    RSW1 appears to have a hybrid metabolism, running an anaerobic sulfur-based mode at the same time it runs an aerobic one using oxygen.

    “For an organism to be able to bridge both those metabolisms is very unique,” said Ranjani Murali, an environmental microbiologist at the University of Nevada, Las Vegas, who was not involved in the research. Normally when anaerobic organisms are exposed to oxygen, damaging molecules known as reactive oxygen compounds create stress, she said. “For that not to happen is really interesting.”

    In the thermal spring Roadside West (left) in Yellowstone National Park, researchers isolated an unusual microbe from the gray-colored biofilm (right).

    Photograph: Eric Boyd; Quanta Magazine

    Boyd’s team observed that the bacteria grew best when running both metabolisms simultaneously. It may be an advantage in its unique environment: Oxygen isn’t evenly distributed in hot springs like those where RSW1 lives. In constantly changing conditions, where you could be bathed in oxygen one moment only for it to disappear, hedging one’s metabolic bets might be a highly adaptive trait.

    Other microbes have been observed breathing two ways at once: anaerobically with nitrate and aerobically with oxygen. But those processes use entirely different chemical pathways, and when paired together, they tend to present an energetic cost to the microbes. In contrast, RSW1’s hybrid sulfur/oxygen metabolism bolsters the cells instead of dragging them down.

    This kind of dual respiration may have evaded detection until now because it was considered impossible. “You have really no reason to look” for something like this, Boyd said. Additionally, oxygen and sulfide react with each other quickly; unless you were watching for sulfide as a byproduct, you might miss it entirely, he added.

    It’s possible, in fact, that microbes with dual metabolisms are widespread, Murali said. She pointed to the many habitats and organisms that exist at tenuous gradients between oxygen-rich and oxygen-free areas. One example is in submerged sediments, which can harbor cable bacteria. These elongated microbes orient themselves in such a way that one end of their bodies can use aerobic respiration in oxygenated water while the other end is buried deep in anoxic sediment and uses anaerobic respiration. Cable bacteria thrive in their precarious partition by physically separating their aerobic and anaerobic processes. But RSW1 appears to multitask while tumbling around in the roiling spring.

    It’s still unknown how RSW1 bacteria manage to protect their anaerobic machinery from oxygen. Murali speculated that the cells might create chemical supercomplexes within themselves that can surround, isolate and “scavenge” oxygen, she said—using it up quickly once they encounter it so there is no chance for the gas to interfere with the sulfur-based breathing.

    RSW1 and any other microbes that have dual metabolism make intriguing models for how microbial life may have evolved during the Great Oxygenation Event, Boyd said. “That must have been a quite chaotic time for microbes on the planet,” he said. As a slow drip of oxygen filtered into the atmosphere and sea, any life-form that could handle an occasional brush with the new, poisonous gas—or even use it to its energetic benefit—may have been at an advantage. In that time of transition, two metabolisms may have been better than one.


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

    Jake Buehler

    Source link

  • The Hidden Ingredients Behind AI’s Creativity

    The original version of this story appeared in Quanta Magazine.

    We were once promised self-driving cars and robot maids. Instead, we’ve seen the rise of artificial intelligence systems that can beat us in chess, analyze huge reams of text, and compose sonnets. This has been one of the great surprises of the modern era: physical tasks that are easy for humans turn out to be very difficult for robots, while algorithms are increasingly able to mimic our intellect.

    Another surprise that has long perplexed researchers is those algorithms’ knack for their own, strange kind of creativity.

    Diffusion models, the backbone of image-generating tools such as DALL·E, Imagen, and Stable Diffusion, are designed to generate carbon copies of the images on which they’ve been trained. In practice, however, they seem to improvise, blending elements within images to create something new—not just nonsensical blobs of color, but coherent images with semantic meaning. This is the “paradox” behind diffusion models, said Giulio Biroli, an AI researcher and physicist at the École Normale Supérieure in Paris: “If they worked perfectly, they should just memorize,” he said. “But they don’t—they’re actually able to produce new samples.”

    To generate images, diffusion models use a process known as denoising. They convert an image into digital noise (an incoherent collection of pixels), then reassemble it. It’s like repeatedly putting a painting through a shredder until all you have left is a pile of fine dust, then patching the pieces back together. For years, researchers have wondered: If the models are just reassembling, then how does novelty come into the picture? It’s like reassembling your shredded painting into a completely new work of art.
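The noise-adding half of that process can be sketched in a few lines of NumPy. The variance schedule below is an illustrative choice, and the learned reverse (denoising) model is omitted; the point is only that the forward process blends the image with ever more Gaussian noise until almost no signal remains:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "image": any flat vector of pixel values.
x0 = rng.normal(size=256)

# A simple linear variance schedule (the beta values are illustrative).
T = 100
betas = np.linspace(1e-4, 0.2, T)
alpha_bars = np.cumprod(1.0 - betas)   # cumulative signal-retention factor

def noised(x0, t):
    """Forward process at step t: scale the image down and mix in fresh
    Gaussian noise. By step T the signal coefficient is near zero and
    x_t is essentially pure noise."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

x_mid, x_end = noised(x0, T // 2), noised(x0, T - 1)
```

A trained diffusion model learns to run this process in reverse, step by step; the new paper's claim is that the creativity emerges from the imperfections in that learned reversal.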

    Now two physicists have made a startling claim: It’s the technical imperfections in the denoising process itself that lead to the creativity of diffusion models. In a paper presented at the International Conference on Machine Learning 2025, the duo developed a mathematical model of trained diffusion models to show that their so-called creativity is in fact a deterministic process—a direct, inevitable consequence of their architecture.

    By illuminating the black box of diffusion models, the new research could have big implications for future AI research—and perhaps even for our understanding of human creativity. “The real strength of the paper is that it makes very accurate predictions of something very nontrivial,” said Luca Ambrogioni, a computer scientist at Radboud University in the Netherlands.

    Bottoms Up

    Mason Kamb, a graduate student studying applied physics at Stanford University and the lead author of the new paper, has long been fascinated by morphogenesis: the processes by which living systems self-assemble.

    One way to understand the development of embryos in humans and other animals is through what’s known as a Turing pattern, named after the 20th-century mathematician Alan Turing. Turing patterns explain how groups of cells can organize themselves into distinct organs and limbs. Crucially, this coordination all takes place at a local level. There’s no CEO overseeing the trillions of cells to make sure they all conform to a final body plan. Individual cells, in other words, don’t have some finished blueprint of a body on which to base their work. They’re just taking action and making corrections in response to signals from their neighbors. This bottom-up system usually runs smoothly, but every now and then it goes awry—producing hands with extra fingers, for example.

    Webb Wright

    Source link

  • The Quantum Geometry That Exists Outside of Space and Time

    “It provides a natural framework, or a bookkeeping mechanism, to assemble very large numbers of Feynman diagrams,” said Marcus Spradlin, a physicist at Brown University who has been picking up the new tools of surfaceology. “There’s an exponential compactification in information.”

    Carolina Figueiredo, a graduate student at Princeton University, noticed a striking coincidence where three species of seemingly unrelated quantum particles act identically.

    Photograph: Andrea Kane/Institute for Advanced Study

    Unlike the amplituhedron, which required exotic particles to provide a balance known as supersymmetry, surfaceology applies to more realistic, nonsupersymmetric particles. “It’s completely agnostic. It couldn’t care less about supersymmetry,” Spradlin said. “For some people, me included, I think that’s really been quite a surprise.”

    The question now is whether this new, more primitive geometric approach to particle physics will allow theoretical physicists to slip the confines of space and time altogether.

    “We needed to find some magic, and maybe this is it,” said Jacob Bourjaily, a physicist at Pennsylvania State University. “Whether it’s going to get rid of space-time, I don’t know. But it’s the first time I’ve seen a door.”

    The Trouble with Feynman

    Figueiredo sensed the need for some new magic firsthand during the waning months of the pandemic. She was struggling with a task that has challenged physicists for more than 50 years: predicting what will happen when quantum particles collide. In the late 1940s, it took a yearslong effort by three of the brightest minds of the postwar era—Julian Schwinger, Sin-Itiro Tomonaga, and Richard Feynman—to solve the problem for electrically charged particles. Their eventual success would win them a Nobel Prize. Feynman’s scheme was the most visual, so it came to dominate the way physicists think about the quantum world.

    When two quantum particles come together, anything can happen. They might merge into one, split into many, disappear, or any sequence of the above. And what will actually happen is, in some sense, a combination of all these and many other possibilities. Feynman diagrams keep track of what might happen by stringing together lines representing particles’ trajectories through space-time. Each diagram captures one possible sequence of subatomic events and gives an equation for a number, called an “amplitude,” that represents the odds of that sequence taking place. Add up enough amplitudes, physicists believe, and you get stones, buildings, trees, and people. “Almost everything in the world is a concatenation of that stuff happening over and over again,” Arkani-Hamed said. “Just good old-fashioned things bouncing off each other.”

    There’s a puzzling tension inherent in these amplitudes—one that has vexed generations of quantum physicists going back to Feynman and Schwinger themselves. One might spend hours at a chalkboard sketching byzantine particle trajectories and evaluating fearsome formulas only to find that terms cancel out and complicated expressions melt away to leave behind extremely simple answers—in a classic example, literally the number 1.

    “The degree of effort required is tremendous,” Bourjaily said. “And every single time, the prediction you make mocks you with its simplicity.”

    Charlie Wood

    Source link

  • The Secret Electrostatic World of Insects

    This developing field, known as aerial electroreception, opens up a new dimension of the natural world. “I find it absolutely fascinating,” said Anna Dornhaus, a behavioral ecologist at the University of Arizona who was not involved with the work. “This whole field, studying electrostatic interactions between living animals, has the potential to uncover things that didn’t occur to us about how the world works.”

    “We know from all these brilliant experiments that electric fields do have a functional role in the ecology of these animals,” said Benito Wainwright, an evolutionary ecologist at the University of St. Andrews who has studied the sensory systems of butterflies and katydids. “That’s not to say that they came on the scene originally through adaptive processes.” But now that these forces are present, evolution can act on them. Though we cannot sense these electric trails, they may guide us to animal behaviors we never imagined.

    Electrostatic Discoveries

    In 2012, Víctor Ortega-Jiménez stumbled into electrostatics while playing with his 4-year-old daughter. They were using a toy wand that gathers static charge to levitate lightweight objects, such as a balloon. When they decided to test it outside, he made a startling observation.

    Studies by Víctor Ortega-Jiménez of the University of California, Berkeley, revealed that a negatively charged spiderweb attracts positively charged insect prey.

    Photograph: Courtesy of Víctor Ortega-Jiménez

    “My daughter put the wand close to a spiderweb, and it reacted very quickly,” recalled Ortega-Jiménez, who studies the biomechanics of animal travel at the University of California, Berkeley. The wand attracted the web. He immediately began to draw connections to his research about the strange ways insects interact with their environments.

    All matter—wands, balloons, webs, air—strives for balance between its positive and negative particles (protons, electrons and ions). At an unfathomably small scale, Ortega-Jiménez’s toy buzzes with an imbalance: A motor draws negative charges inward, forcing positive charges to the wand’s surface. This is static. It’s like when you rub a balloon against your head. Friction sheds electrons from your hair to the rubber, loading it up with static charge, so that when you lift the balloon, strands of hair float with it.

    In a similar way, Ortega-Jiménez considered, friction from beating insect wings could shed negative charges from body to air, leaving the insects with a positive charge while creating regions of negative static. He realized that if a web carries negative charge and insects a positive one, then a spiderweb might not just be a passive trap—it could move toward and attract its quarry electrostatically. His lab experiments revealed precisely that. Webs deformed instantly when jolted with static from flies, aphids, honeybees, and even water droplets. Spiders caught charged insects more easily. He saw how static electricity altered the physics of animal interactions.

    Max G. Levy

    Source link

  • How Cells Resist the Pressure of the Deep Sea

    To study the cell membranes of deep-sea animals, the biochemist Itay Budin (center) joined forces with marine biologists Steve Haddock (right) and Jacob Winnikoff (left).

    Photographs: From left: Tamrynn Clegg; Geoffroy Tobe; John Lee

    “They are looking into an area that, to a large degree, has not been explored,” said Sol Gruner, who researches molecular biophysics at Cornell University; he was consulted for the study but was not a co-author.

    Plasmalogen lipids are also found in the human brain, and their role in deep-sea membranes could help explain aspects of cell signaling. More immediately, the research unveils a new way that life has adapted to the most extreme conditions of the deep ocean.

    Insane in the Membrane

    The cells of all life on Earth are encircled by fatty molecules known as lipids. If you put some lipids in a test tube and add water, they automatically line themselves up back to back: The lipids’ greasy, water-hating tails commingle to form an inner layer, and their water-loving heads arrange together to form the outer portions of a thin membrane. “It’s just like oil and water separating in a dish,” Winnikoff said. “It’s universal to lipids, and it’s what makes them work.”

    For a cell, an outer lipid membrane serves as a physical barrier that, like the external wall of a house, provides structure and keeps a cell’s insides in. But the barrier can’t be too solid: It’s studded with proteins, which need some wiggle room to carry out their various cellular jobs, such as ferrying molecules across the membrane. And sometimes a cell membrane pinches off to release chemicals into the environment and then fuses back together again.

    For a membrane to be healthy and functional, it must therefore be sturdy, fluid, and dynamic at the same time. “The membranes are balancing right on the edge of stability,” Winnikoff said. “Even though it has this really well-defined structure, all the individual molecules that make up the sheets on either side—they’re flowing around each other all the time. It’s actually a liquid crystal.”

    One of the emergent properties of this structure, he said, is that the middle of the membrane is highly sensitive to both temperature and pressure—much more so than other biological molecules such as proteins, DNA or RNA. If you cool down a lipid membrane, for example, the molecules move more slowly, “and then eventually they’ll just lock together,” Winnikoff said, as when you put olive oil in the fridge. “Biologically, that’s generally a bad thing.” Metabolic processes halt; the membrane can even crack and leak its contents.

    To avoid this, many cold-adapted animals have membranes composed of a blend of lipid molecules with slightly different structures to keep the liquid crystal flowing, even at low temperatures. Because high pressure also slows a membrane’s flow, many biologists assumed that deep-sea membranes were built the same way.

    Yasemin Saplakoglu

    Source link

  • Cells From Different Species Can Exchange ‘Text Messages’ Using RNA

    The original version of this story appeared in Quanta Magazine.

    For a molecule of RNA, the world is a dangerous place. Unlike DNA, which can persist for millions of years in its remarkably stable, double-stranded form, RNA isn’t built to last—not even within the cell that made it. Unless it’s protectively tethered to a larger molecule, RNA can degrade in minutes or less. And outside a cell? Forget about it. Voracious, RNA-destroying enzymes are everywhere, secreted by all forms of life as a defense against viruses that spell out their genetic identity in RNA code.

    There is one way RNA can survive outside a cell unscathed: in a tiny, protective bubble. For decades, researchers have noticed cells releasing these bubbles of cell membrane, called extracellular vesicles (EVs), packed with degraded RNA, proteins, and other molecules. But these sacs were considered little more than trash bags that whisk broken-down molecular junk out of a cell during routine decluttering.

    Then, in the early 2000s, experiments led by Hadi Valadi, a molecular biologist at the University of Gothenburg, revealed that the RNA inside some EVs didn’t look like trash. The cocktail of RNA sequences was considerably different from those found inside the cell, and these sequences were intact and functional. When Valadi’s team exposed human cells to EVs from mouse cells, they were shocked to observe the human cells take in the RNA messages and “read” them to create functional proteins they otherwise wouldn’t have been able to make.

    Valadi concluded that cells were packaging strands of RNA into the vesicles specifically to communicate with one another. “If I have been outside and see that it’s raining,” he said, “I can tell you: If you go out, take an umbrella with you.” In a similar way, he suggested, a cell could warn its neighbors about exposure to a pathogen or noxious chemical before they encountered the danger themselves.

    Since then, a wealth of evidence has emerged supporting this theory, enabled by improvements in sequencing technology that allow scientists to detect and decode increasingly small RNA segments. Since Valadi published his experiments, other researchers have also seen EVs filled with complex RNA combinations. These RNA sequences can contain detailed information about the cell that authored them and trigger specific effects in recipient cells. The findings have led some researchers to suggest that RNA may be a molecular lingua franca that transcends traditional taxonomic boundaries and can therefore encode messages that remain intelligible across the tree of life.

    In 2024, new studies have exposed additional layers of this story, showing, for example, that along with bacteria and eukaryotic cells, archaea also exchange vesicle-bound RNA, suggesting that the phenomenon is universal to all three domains of life. Another study has expanded our understanding of cross-kingdom cellular communication by showing that plants and infecting fungi can use packets of havoc-wreaking RNA as a form of coevolutionary information warfare: An enemy cell reads the RNA and builds self-harming proteins with its own molecular machinery.

    “I’ve been in awe of what RNA can do,” said Amy Buck, an RNA biologist at the University of Edinburgh who was not involved with the new research. For her, understanding RNA as a means of communication “goes beyond appreciating the sophistication and the dynamic nature of RNA within the cell.” Transmitting information beyond the cell may be one of its innate roles.

    Time-Sensitive Delivery

    The microbiologist Susanne Erdmann studies viral infections in Haloferax volcanii, a single-celled organism that thrives in unbelievably salty environments such as the Dead Sea or the Great Salt Lake. Single-celled bacteria are known to exchange EVs widely, but H. volcanii is not a bacterium—it’s an archaean, a member of the third evolutionary branch of life, which features cells built differently from bacteria or eukaryotes like us.

    Because EVs are the same size and density as the virus particles Erdmann’s team studies at the Max Planck Institute for Marine Microbiology in Germany, they “always pop up when you isolate and purify viruses,” she said. Eventually, her group got curious and decided to peek at what’s inside.

    Annie Melchor

  • ‘Groups’ Underpin Modern Math. Here’s How They Work

    Figuring out what subgroups a group contains is one way to understand its structure. For example, the subgroups of Z6, besides Z6 itself, are {0}, {0, 2, 4} and {0, 3}—the trivial subgroup, the multiples of 2, and the multiples of 3. In the group D6, rotations form a subgroup, but reflections don’t. That’s because two reflections performed in sequence produce a rotation, not a reflection, just as adding two odd numbers results in an even one.
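    The subgroup test described above is easy to make concrete. Here is a minimal sketch in Python (the function name is my own, not from the article): a subset of Z6 is a subgroup when it contains the identity 0 and is closed under addition mod 6. The odd elements fail the test, mirroring the observation that adding two odd numbers gives an even one.

    ```python
    def is_subgroup_of_Z6(subset):
        """True if `subset` contains the identity 0 and is closed
        under the group operation of Z6: addition modulo 6."""
        return 0 in subset and all(
            (a + b) % 6 in subset for a in subset for b in subset
        )

    # The three subgroups named above all pass the test:
    for s in [{0}, {0, 2, 4}, {0, 3}]:
        print(sorted(s), is_subgroup_of_Z6(s))  # each prints True

    # The odd elements fail: 1 + 1 = 2 falls outside the set, just as
    # reflections in D6 fail because two reflections compose to a rotation.
    print(is_subgroup_of_Z6({1, 3, 5}))  # False
    ```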

    Certain types of subgroups called “normal” subgroups are especially helpful to mathematicians. In a commutative group, all subgroups are normal, but this isn’t always true more generally. These subgroups retain some of the most useful properties of commutativity, without forcing the entire group to be commutative. If a list of normal subgroups can be identified, groups can be broken up into components much the way integers can be broken up into products of primes. Groups that have no normal subgroups (other than the trivial subgroup and the group itself) are called simple groups and cannot be broken down any further, just as prime numbers can’t be factored. The group Zn is simple only when n is prime—the multiples of 2 and 3, for instance, form normal subgroups in Z6, so Z6 is not simple.
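    The “breaking up” that normal subgroups enable can be seen directly in Z6. This Python sketch (not from the article; the helper names are mine) groups the elements of Z6 into cosets of the normal subgroup N = {0, 3}. Because Z6 is commutative, N is normal, so the cosets themselves form a smaller group—the quotient Z6/N, which behaves exactly like Z3.

    ```python
    # The normal subgroup N = {0, 3} of Z6 (the multiples of 3).
    N = frozenset({0, 3})

    def coset(a):
        """The coset a + N inside Z6, i.e. N shifted by a (mod 6)."""
        return frozenset((a + n) % 6 for n in N)

    # Collect the distinct cosets: {0,3}, {1,4}, {2,5}.
    cosets = {coset(a) for a in range(6)}
    print(len(cosets))  # 3 — the quotient Z6/N has 6/2 = 3 elements

    # Coset addition is well defined: representatives from the same
    # cosets always land in the same sum coset, so Z6/N is a group
    # in its own right, isomorphic to Z3.
    assert coset(1) == coset(4)
    assert coset((1 + 2) % 6) == coset((4 + 5) % 6)
    ```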

    However, simple groups are not always so simple. “It’s the biggest misnomer in mathematics,” Hart said. In 1892, the mathematician Otto Hölder proposed that researchers assemble a complete list of all possible finite simple groups. (Infinite groups such as the integers form their own field of study.)

    It turns out that almost all finite simple groups either look like Zn (for prime values of n) or fall into one of two other families. And there are 26 exceptions, called sporadic groups. Pinning them down, and showing that there are no other possibilities, took over a century.

    The largest sporadic group, aptly called the monster group, was discovered in 1973. It has more than 8 × 10⁵³ elements and represents geometric rotations in a space with nearly 200,000 dimensions. “It’s just crazy that this thing could be found by humans,” Hart said.

    By the 1980s, the bulk of the work Hölder had called for appeared to have been completed, but it was tough to show that there were no more sporadic groups lingering out there. The classification was further delayed when, in 1989, the community found gaps in one 800-page proof from the early 1980s. A new proof was finally published in 2004, finishing off the classification.

    Many structures in modern math—rings, fields, and vector spaces, for example—are created when more structure is added to groups. In rings, you can multiply as well as add and subtract; in fields, you can also divide. But underneath all of these more intricate structures is that same original group idea, with its four axioms. “The richness that’s possible within this structure, with these four rules, is mind-blowing,” Hart said.


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

    Leila Sloman

  • The Vagus Nerve’s Crucial Role in Creating the Human Sense of Mind

    The original version of this story appeared in Quanta Magazine.

    It is late at night. You are alone and wandering empty streets in search of your parked car when you hear footsteps creeping up from behind. Your heart pounds, your blood pressure skyrockets. Goose bumps appear on your arms, sweat on your palms. Your stomach knots and your muscles coil, ready to sprint or fight.

    Now imagine the same scene, but without any of the body’s innate responses to an external threat. Would you still feel afraid?

    Experiences like this reveal the tight integration between brain and body in the creation of mind—the collage of thoughts, perceptions, feelings, and personality unique to each of us. The capabilities of the brain alone are astonishing. The supreme organ gives most people a vivid sensory perception of the world. It can preserve memories, enable us to learn and speak, generate emotions and consciousness. But those who might attempt to preserve their mind by uploading its data into a computer miss a critical point: The body is essential to the mind.

    How is this crucial brain-body connection orchestrated? The answer involves the very unusual vagus nerve. The longest nerve in the body, it wends its way from the brain throughout the head and trunk, issuing commands to our organs and receiving sensations from them. Much of the bewildering range of functions it regulates, such as mood, learning, sexual arousal, and fear, is automatic and operates without conscious control. These complex responses engage a constellation of cerebral circuits that link brain and body. The vagus nerve is, in one way of thinking, the conduit of the mind.

    Nerves are typically named for the specific functions they perform. Optic nerves carry signals from the eyes to the brain for vision. Auditory nerves conduct acoustic information for hearing. The best that early anatomists could do with this nerve, however, was to call it the “vagus,” from the Latin for “wandering.” The wandering nerve was apparent to the first anatomists, notably Galen, the Greek polymath who lived until around the year 216. But centuries of study were required to grasp its complex anatomy and function. This effort is ongoing: Research on the vagus nerve is at the forefront of neuroscience today.

    The most vigorous current research involves stimulating this nerve with electricity to enhance cognition and memory, and for a smorgasbord of therapies for neurological and psychological disorders, including migraine, tinnitus, obesity, pain, drug addiction, and more. But how could stimulating a single nerve potentially have such wide-ranging psychological and cognitive benefits? To understand this, we must understand the vagus nerve itself.

    The vagus nerve originates from four clusters of neurons in the brain’s medulla, where the brainstem attaches to the spinal cord. Most nerves in our body branch directly from the spinal cord: They are threaded between the vertebrae in our backbone in a series of lateral bands to carry information into and out of the brain. But not the vagus. The vagus nerve is one of 13 nerves that leave the brain directly through special holes in the skull. From there it sprouts thickets of branches that reach almost everywhere in the head and trunk. The vagus also radiates from two major clusters of outpost neurons, called ganglia, stationed in critical spots in the body. For example, a large cluster of vagal neurons clings like a vine to the carotid artery in your neck. Its nerve fibers follow this network of blood vessels throughout your body to reach vital organs, from the heart and lungs to the gut.

    R Douglas Fields
