ReportWire

Tag: Supercomputing

  • Elon Musk’s xAI to build $20 billion data center in Mississippi

    Elon Musk’s AI company, xAI, plans to spend $20 billion on a data center in Southaven, Mississippi

    JACKSON, Miss. — Elon Musk’s artificial intelligence company xAI is set to spend $20 billion to build a data center in Southaven, Mississippi, Gov. Tate Reeves announced Thursday, calling it the largest private investment in the state’s history.

    The data center, called MACROHARDRR, is being built in Mississippi’s DeSoto County near Memphis, Tennessee. It will be the company’s third data center in the greater Memphis area. xAI CFO Anthony Armstrong said the cluster of data centers will house “the world’s largest supercomputer” with 2 gigawatts of computing power.

    The announcement comes as xAI faces scrutiny over its data center projects in the Memphis area. The NAACP and the Southern Environmental Law Center have raised concerns over air pollution generated by xAI’s supercomputer facility located near predominantly Black communities in Memphis.

    A petition by the Safe and Sound Coalition, a Southaven group opposing xAI’s developments, calls for shutting down xAI’s operations in the area and has received more than 900 signatures as of Thursday afternoon.

    xAI did not immediately respond when asked for comment about environmental concerns.

    A fact sheet released by the Mississippi governor’s office said environmental responsibility is a “core commitment” for xAI.

    During the announcement, Reeves personally thanked Musk. Reeves predicted the investment would bring hundreds of permanent jobs to the community, thousands of indirect subcontracting jobs, and tax revenue to support public services.

    Under data center incentives passed in 2024, the state will waive all sales, corporate income and franchise taxes on the xAI development. The waived sales taxes on the computing hardware xAI is purchasing would likely be worth a substantial amount of money, but the Mississippi Development Authority did not immediately respond to The Associated Press’ questions about how much tax revenue Mississippi will give up.

    DeSoto County and the city of Southaven have also agreed to allow substantially reduced property taxes.

    xAI is expected to begin data center operations in Southaven next month.

  • Mexico plans to build Latin America’s most powerful supercomputer

    MEXICO CITY (AP) — Mexico unveiled plans Wednesday to build what it claims will be Latin America’s most powerful supercomputer — a project the government says will help the country capitalize on the rapidly evolving uses of artificial intelligence and exponentially expand the country’s computing capacity.

    Dubbed “Coatlicue” for the Mexica goddess considered the earth mother, the supercomputer would be seven times more powerful than the region’s current leader in Brazil, said José Merino, head of the Telecommunications and Digital Transformation Agency.

    President Claudia Sheinbaum said during her morning news briefing that the location for the project had not been decided yet, but construction will begin next year.

    “We’re very excited,” said Sheinbaum, an academic and climate scientist. “It is going to allow Mexico to fully get in on the use of artificial intelligence and the processing of data that today we don’t have the capacity to do.”

    Merino said that Mexico’s most powerful supercomputer operates at 2.3 petaflops; a petaflop is a unit of computing speed equal to one quadrillion operations per second. Coatlicue would have a capacity of 314 petaflops.
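    As a rough check on these figures, a few lines of Python relate the quoted capacities. The numbers are the ones quoted above; the comparison itself is illustrative:

```python
# Back-of-the-envelope comparison of the quoted computing capacities.
# One petaflop = 1e15 floating-point operations per second.
PETA = 1e15

current_mx = 2.3 * PETA   # Mexico's most powerful machine today (quoted)
coatlicue = 314.0 * PETA  # Coatlicue's planned capacity (quoted)

ratio = coatlicue / current_mx
print(f"Coatlicue would be about {ratio:.0f}x Mexico's current capacity")
```

    At 314 petaflops versus 2.3, Coatlicue would represent a jump of more than a hundredfold over Mexico’s current best machine, consistent with the government’s talk of exponentially expanding the country’s computing capacity.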

    ___

    Follow AP’s coverage of Latin America and the Caribbean at https://apnews.com/hub/latin-america

  • Quantum-classical partnership enhances performance in parallel hybrid network.

    Newswise — Building efficient quantum neural networks is a promising direction for research at the intersection of quantum computing and machine learning. A team at Terra Quantum AG designed a parallel hybrid quantum neural network and demonstrated that their model is “a powerful tool for quantum machine learning.” This research was published Oct. 9 in Intelligent Computing, a Science Partner Journal.

    Hybrid quantum neural networks typically consist of both a quantum layer (a variational quantum circuit) and a classical layer (a deep learning neural network called a multi-layered perceptron). This special architecture enables them to learn complicated patterns and relationships from data inputs more easily than traditional machine learning methods.

    In this paper, the authors focus on parallel hybrid quantum neural networks. In such networks, the quantum layer and the classical layer process the same input at the same time and then produce a joint output — a linear combination of the outputs from both layers. A parallel network could avoid the information bottleneck that often affects sequential networks, where the quantum layer and the classical layer feed data into each other and process data alternately.
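    The parallel arrangement described above can be sketched in a few lines. This is an illustrative mock-up, not the authors’ code: the quantum layer is stood in for by a parameterized periodic function (playing the role of a variational quantum circuit’s expectation value), and the joint output is a linear combination of the two layers’ outputs:

```python
import numpy as np

def quantum_layer(x, theta):
    # Stand-in for a variational quantum circuit: a parameterized periodic map.
    return np.cos(theta[0] * x + theta[1])

def classical_layer(x, W1, b1, W2, b2):
    # A tiny multi-layered perceptron with one hidden layer.
    h = np.tanh(np.outer(x, W1) + b1)  # shape (n, hidden)
    return h @ W2 + b2                 # shape (n,)

def parallel_hybrid(x, theta, mlp, alpha=0.5, beta=0.5):
    # Both layers see the same input; the joint output is a linear combination.
    return alpha * quantum_layer(x, theta) + beta * classical_layer(x, *mlp)

rng = np.random.default_rng(0)
mlp = (rng.normal(size=8), rng.normal(size=8), rng.normal(size=8), 0.0)
x = np.linspace(0.0, 2.0 * np.pi, 100)
y = parallel_hybrid(x, theta=np.array([2.0, 0.0]), mlp=mlp)
print(y.shape)
```

    A sequential hybrid would instead feed one layer’s output into the other, which is where the information bottleneck mentioned above can arise.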

    The training results demonstrate that the authors’ parallel hybrid network can outperform either its quantum layer or its classical layer. Trained on two periodic datasets with high-frequency noise added, the hybrid model shows lower training loss, produces better predictions, and is found to be more adaptable to complex problems and new datasets.

    The quantum and classical layers both contribute to this effective quantum-classical interplay. The quantum layer, specifically a variational quantum circuit, maps the smooth periodic parts, while the classical multi-layered perceptron fills in the irregular additions of noise. Both variational quantum circuits and multi-layered perceptrons are considered “universal approximators.” During training, variational quantum circuits adjust the parameters of the quantum gates that control the state of the qubits, and multi-layered perceptrons mainly tune the strengths of the connections, or so-called weights, between neurons.

    At the same time, the success of a parallel hybrid network rides on the setting and tuning of the learning rate and other hyperparameters, such as the number of layers and number of neurons in each layer in the multi-layered perceptron.

    Given that the quantum and classical layers learn at different speeds, the authors discussed how the contribution ratio of each layer affects the performance of the hybrid model and found that adjusting the learning rate is important in keeping a balanced contribution ratio. Therefore, they point out that building a custom learning rate scheduler is a future research direction because such a scheduler could enhance the speed and performance of the hybrid model.

    Intelligent Computing


  • Scientists Amplify Superconducting Sensor Array Signals Near the Quantum Limit

    The Science

    Newswise — Understanding how energy moves in materials is fundamental to the study of quantum phenomena, catalytic reactions, and complex proteins. Measuring how energy moves involves shining special X-ray light onto a sample to start a reaction. Detectors then collect the radiation the reaction emits. Conventional sensors usually lack the sensitivity needed for these studies. One solution is to use superconducting sensors, but amplifying the signals from these sensors is a major challenge. Building on advances from quantum computing, researchers added a special type of amplifier: superconducting traveling-wave parametric amplifiers. While most amplifiers add noise to the measurement, these amplifiers are almost noiseless. In a major advance, researchers recently showed that the amplifiers can operate at 4 Kelvin, which is considered a relatively high operating temperature.

    The Impact

    Reducing the noise that is added during signal processing can improve a sensor’s performance. Amplification allows each sensor to operate faster and be more sensitive. Recent experiments have shown that parametric amplifiers can potentially analyze signals from many superconducting sensors at the same time. Superconducting sensors work at very low temperatures. At these temperatures, parametric amplifiers have very good noise performance, close to the limit of quantum mechanics. The advance paves the way to integrate such amplifiers with a variety of sensor technologies.

    Summary

    A superconducting sensor consists of a superconducting thermometer and an absorber. When X-rays are stopped in the absorber, they change the superconducting state of the sensor. This generates a small current in an electrical circuit. To make the detector more sensitive, many sensors are arranged into an array, like in a digital camera. Superconducting sensors operate at very cold temperatures (approximately 0.09 Kelvin), so they require specialized readout electronics and amplifiers. These amplifiers need to combine the signals from multiple sensors on a single readout line. Combining signals is known as multiplexing. One efficient way to do this is to couple each sensor in an array to a resonator. All of the resonators are coupled to a single output line. The current produced by an absorbed photon shifts the resonant frequency in a unique way for each sensor.
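    A toy model of the frequency-domain multiplexing described above (illustrative only, with hypothetical frequencies, not the experiment’s parameters): each sensor is tied to a resonator at a unique frequency on a shared line, and a photon absorption shifts that one resonator’s frequency, which identifies the sensor that fired:

```python
import numpy as np

n_sensors = 8
# A comb of resonant frequencies: 4 GHz start, 2 MHz spacing (hypothetical values).
base_freqs = 4.0e9 + 2.0e6 * np.arange(n_sensors)

def readout(freqs, hit_index, shift=200e3):
    # An absorbed photon shifts one resonator's frequency (here by 200 kHz).
    shifted = freqs.copy()
    shifted[hit_index] -= shift
    return shifted

observed = readout(base_freqs, hit_index=3)
# The sensor that absorbed the photon is the one whose frequency moved.
hit = int(np.argmax(np.abs(observed - base_freqs)))
print(f"Photon detected on sensor {hit}")
```

    In the real system the resonators are probed continuously through the single output line, so many sensors can be read out at once.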

    Because these resonators work in microwave frequencies, the electronic chip that contains all the resonators as well as the output feedline is called the microwave multiplexer. Researchers are preparing to measure the signals from an array of sensors and a microwave multiplexer with a readout chain whose first amplifier is a kinetic-inductance traveling-wave parametric amplifier instead of a conventional semiconductor amplifier. Using the parametric amplifier will reduce readout noise and enable larger arrays of faster sensors.

     

    Funding

    This work was funded by the Department of Energy Office of Science, Basic Energy Sciences Accelerator and Detector Research Program, the National Institute of Standards and Technology’s Innovations in Measurement Science Program, and NASA.


    Journal Link: Physical Review Applied, Apr-2022

    Department of Energy, Office of Science


  • LLNL scientists among finalists for new Gordon Bell climate modeling award

    Newswise — A team from Lawrence Livermore and seven other Department of Energy (DOE) national laboratories is a finalist for the new Association for Computing Machinery (ACM) Gordon Bell Prize for Climate Modeling for running an unprecedented high-resolution global atmosphere model on the world’s first exascale supercomputer.

    The Gordon Bell submission, led by Energy Exascale Earth System Model (E3SM) chief computational scientist Mark Taylor, details the team’s record-setting demonstration of the Simple Cloud Resolving E3SM Atmosphere Model (SCREAM) on Oak Ridge National Laboratory’s 1.2 exaFLOP (1.2 quintillion computing operations per second) Frontier machine.

    Incorporating state-of-the-art parameterizations for fluid dynamics, microphysics, moist turbulence and radiation, SCREAM is a full-featured atmospheric general-circulation model developed for very fine-resolution simulations on exascale machines. The effort is led by LLNL staff scientist Peter Caldwell, who also heads the Lab’s Climate Modeling group.

    A cornerstone of SCREAM development is its computationally efficient, performance-portable design. This feature allows SCREAM to become — as far as the team is aware — the first nonhydrostatic global atmospheric model with resolution finer than 5 kilometers to run on an exascale supercomputer, the first to run at scale on both NVIDIA and AMD Graphics Processing Unit (GPU) systems, and the first to exceed 1 simulated-year-per-day of throughput. SCREAM earned its Gordon Bell finalist position from a record-setting run performed earlier this year, and a revised submission boasts results 54% faster than the original entry, obtaining a performance of 1.26 simulated years per day on 8,192 Frontier nodes.
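    The quoted numbers imply the throughput of the original entry. A two-line check (the 54% figure and the 1.26 simulated years per day are from the text; the implied value is arithmetic, not a reported result):

```python
revised = 1.26              # simulated years per day (SYPD), quoted
original = revised / 1.54   # implied throughput of the original entry
print(f"Implied original throughput: {original:.2f} SYPD")
```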

    “The Gordon Bell Prize is the highest honor in high performance computing,” said LLNL’s Caldwell. “E3SM is very proud and excited to be finalists for the inaugural year of the Gordon Bell Climate Award. We worked extremely hard for five years to develop a model which makes efficient use of exascale computers, providing more trustworthy and higher-fidelity predictions of future climate than were previously possible. The aim of this new prize matches our goals exactly, so we were hopeful about our chances.”

    What separates SCREAM from other climate models is that it was written in C++ and uses the Kokkos library, enabling it to perform efficiently across the spectrum of computer architectures, Caldwell explained. The design choice allowed the SCREAM team to run on Frontier faster than any other climate model, he said, adding that “most climate and weather models are struggling to take advantage of the GPUs that power most of today’s most powerful supercomputers. SCREAM is of huge interest to other modeling centers as a successful example of how to make this transition.”

    The SCREAM effort grew out of the E3SM team, a multi-lab DOE partnership led by LLNL scientist Dave Bader, that is tasked with developing a state-of-the-art climate modeling, simulation and prediction project for exascale supercomputers. Sponsored by the U.S. Department of Energy’s (DOE’s) Office of Biological and Environmental Research (BER), the E3SM team includes researchers and computational scientists at LLNL and the Sandia, Argonne, Brookhaven, Los Alamos, Lawrence Berkeley, Oak Ridge and Pacific Northwest national laboratories. Other LLNL staff named in the Gordon Bell entry include scientists Aaron Donahue, Chris Terai and Renata McCoy.

    Team members said the achievement represents a breakthrough in climate modeling and a significant milestone for the E3SM project, which aims to bring DOE’s cutting-edge computer science to bear on the climate simulation challenge by simulating the climate system at very high resolution. Such high resolution permits explicit resolution of large convective circulations and other important atmospheric phenomena, thereby avoiding critical sources of uncertainty in traditional climate models, according to researchers. Fine resolution is also necessary to capture critical aspects of the climate that might impact conditions in the United States in the coming decades, such as extreme temperatures, storms and sea-level rise.

    The Gordon Bell Prize for Climate Modeling “aims to recognize innovative parallel computing contributions toward solving the global climate crisis,” according to ACM. It will be awarded for the first time this year at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23) in Denver, and accompanied by a $10,000 award provided by Gordon Bell. Winners will be selected based on their potential to impact climate modeling and related fields.

    For more on E3SM, visit https://e3sm.org/.

    Lawrence Livermore National Laboratory


  • PPPL wins three major DOE awards for supercomputing fusion projects

    Newswise — PPPL has won funding for three major collaborations that aim to provide groundbreaking insights into the volatile behavior of plasma in fusion facilities. The projects represent three of the DOE’s 12 Scientific Discovery through Advanced Computing (SciDAC) awards, which have an overall value of $112 million.

    These four-year collaborations unite fusion scientists and applied mathematicians into multi-institutional teams. The projects, cosponsored by the DOE’s Advanced Scientific Computing Research (ASCR) program, aim to solve complex fusion problems through high-performance supercomputing. Collaborators will model state-of-the-art solutions on today’s top computers, including new exascale computers that can process data a thousand times faster than current machines.

    “This collaborative effort will advance our understanding of fusion as an energy source while utilizing the most powerful supercomputers in the world,” said Jean Paul Allain, who heads the DOE’s Fusion Energy Sciences Department. The partnerships will also guide the design of fusion pilot plants, he said.  

    Fusion combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei, or ions, that makes up 99% of the visible universe — to release vast amounts of energy. The three PPPL-led collaborations bring together national laboratories, universities and private companies to advance fusion development. Two of the projects focus on doughnut-shaped tokamaks while the third involves twisted stellarator devices:

    Integrate superhot plasma core with cool edge in tokamak facilities

    The goal of this project, led by Felix Parra Diaz, head of the PPPL Theory Department, is to use advanced computation to study ways to reconcile conflicting tokamak requirements. These arise because fusion plasma must be tens of millions of degrees Celsius at its core and cool enough at its edge to avoid damaging tokamak walls.

    The methods to be studied include altering the shape of the magnetic field that confines the plasma; injecting impurities into the plasma to affect its confinement; and coating the walls of the tokamak with lithium to protect them from sudden bursts of heat. Parra Diaz said the findings, and the advanced computer codes developed to produce them, will enable the design of far larger, hotter and more powerful future tokamaks.

    Design a tokamak free of instabilities at the edge of the plasma

    This collaboration, led by principal research physicist Fatima Ebrahimi of PPPL, will develop computer simulations for tokamak plasmas free of instabilities called edge localized modes (ELMs). These frequent occurrences can produce detrimental heat loss and damage tokamak walls.

    The project will model the complete basis for ELM-free regimes, Ebrahimi said. The resulting state-of-the-art, high-fidelity simulations using advanced computer architecture will create predictive capabilities for stabilizing the edge of magnetically shaped plasmas. Collaborators will assemble a hybrid database by combining these simulations with existing experimental data from tokamaks worldwide and will use machine learning, a form of artificial intelligence, to project the findings onto the design of a tokamak pilot plant.

    Explore stellarator power plants with high-fidelity simulations

    This project, led by Michael Churchill, head of digital engineering at PPPL, will create a high-fidelity digital prototype of a stellarator facility. The research will seek to verify a stellarator design under a variety of physics and engineering assumptions. Collaborators will use a hierarchy of current codes and incorporate high-fidelity simulation into the design optimization process. 

    The project will create a framework that public and private entities can use for stellarator design. The framework will combine state-of-the-art codes, artificial intelligence, advanced optimization techniques, and software developed under the DOE’s Exascale Computing Project. The overall goal, Churchill said, is to leverage more computing power into the design process to advance concepts for a stellarator pilot plant.
     

    Princeton Plasma Physics Laboratory


  • IBM’s Jason Orcutt moves the world toward an interconnected quantum future

    Newswise — Jason Orcutt of IBM provides an industry perspective on quantum simulation research at the Q-NEXT quantum research center and works to connect quantum information systems around the globe.

    Glance around Jason Orcutt’s office at IBM Quantum, and you’ll see circuit boards, hiking trail maps, qubit probes and his kids’ artwork. Part office, part lab, part gallery: It’s a cross section of a life of rigorous research and vigorous recreation.

    The scene also captures the kind of activity balancing that characterizes his work as a quantum information researcher, switching between hands-on investigation and high-level research strategy. He uses these wide-ranging skills in his role as a co-design engineer for Q-NEXT, the National Quantum Information Science Research Center led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

    A principal research scientist at IBM Quantum, Orcutt provides an industry perspective on one of the pillars of Q-NEXT research: developing simulations to better design quantum information systems.

    “IBM brings a future-looking perspective on the problems we need to solve to develop a really useful quantum computer. And Q-NEXT really aligns with our vision on creating new types of quantum interconnects to scale quantum computers into the future.” — Jason Orcutt, IBM

    Q-NEXT collaborators use quantum computers and classical supercomputers to simulate the behaviors of materials used for quantum applications, which are expected to be revolutionary. In the decades ahead, scientists will deploy quantum sensors that can detect an earthquake from space and run powerful quantum computers that can rapidly suss out solutions to intractable problems.

    “We’re using simulations to better design materials and adapting those simulations to an interconnected quantum system,” Orcutt said. ​“IBM brings a future-looking perspective on the problems we need to solve to develop a really useful quantum computer. And Q-NEXT really aligns with our vision on creating new types of quantum interconnects to scale quantum computers into the future.”

    “Quantum interconnect” is a fancy way of referring to the components that link quantum devices. It could be the instruments connecting a sensor to a computer, or it could be a line on a printed circuit board. Without interconnects, quantum devices can’t talk to each other, and quantum information can’t be shared.

    At IBM Quantum, Orcutt coordinates the development of long-range quantum interconnects, which link devices separated by meters to kilometers, such as the nodes in a future quantum data center.

    “How do we extend quantum information or connect quantum systems over physical distance?” he said. ​“Right now, our IBM quantum systems are really restricted to a single chip. I and the people I work with, as well as the academic researchers such as those at Q-NEXT, are looking to develop connections between qubits that will extend beyond more than one chip.”

    Sending quantum information over longer distances is an obstacle course of physics challenges. For starters, quantum information is fragile. Qubits — the fundamental units of quantum information — fall apart at the smallest disturbance. Distance complicates matters. How do you provide qubits with safe, noise-free passage over a kilometer or more? The proposition is like asking a soap bubble not to pop as it travels down a gauntlet of knives.

    “You can’t use the same tools to pattern a centimeter size chip as you would a meter-scale cable,” Orcutt said.

    Qubits must also be continually converted and reconverted to the right frequencies to be read by the devices they encounter on their journey. The most fundamental frequency conversion requirements arise from the different levels of thermal noise at different frequencies. For example: IBM Quantum focuses on a type of qubit that lives in the microwave frequency range. In this range, the quantum information must be cooled to a few hundredths of a degree from absolute zero to be protected from thermal noise. To be transported in room temperature materials — a requirement for long distance communication — the quantum information must be converted to the optical-wave range, a whopping 10,000 times the frequency of microwaves.
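    The thermal-noise argument above can be made quantitative with the Bose-Einstein mean photon occupancy, n = 1/(exp(hf/kT) - 1). The specific frequencies and temperatures below are illustrative, not taken from the article, but they show why microwave-frequency quantum information needs millikelvin temperatures while optical-frequency photons are essentially noise-free even at room temperature:

```python
import math

h = 6.626e-34  # Planck constant, J*s
k = 1.381e-23  # Boltzmann constant, J/K

def thermal_occupancy(freq_hz, temp_k):
    # Mean number of thermal photons in a mode at the given frequency and temperature.
    return 1.0 / math.expm1(h * freq_hz / (k * temp_k))

print(thermal_occupancy(5e9, 300.0))     # ~5 GHz microwave at room temperature: >1,000 noise photons
print(thermal_occupancy(5e9, 0.02))      # same mode at 20 millikelvin: effectively empty
print(thermal_occupancy(200e12, 300.0))  # ~200 THz optical at room temperature: negligible
```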

    The way that materials respond to the two frequency ranges is massively different. How do you engineer materials to successfully conduct information that starts as a murmur and ends in a trill?

    Such challenges are part of the growing pains of the field of quantum information science, which is working to tap the potential of information that, until recently, was kept cozily inside tiny instruments such as microchips.

    “We’re taking quantum information into places it traditionally doesn’t live,” Orcutt said. Instead of moving through chips built in clean rooms, qubits are having to find their way through ​“the messy world of macroscopic objects,” he said, such as meter-long coaxial cables or fiber optic cables that connect nodes that are miles apart.

    The scientific community is working to build quantum systems that will eventually connect the globe. Simulating them from soup to nuts is key to ensuring that the interconnected systems of the future will be successful. Orcutt draws on his experience at IBM to inform Q-NEXT’s quantum simulations work.

    “We have to reengineer our systems, and to do that, we have to simulate them,” he said. ​“But how do we reengineer our systems around quantum interconnects instead of a monolithic computing device? Systems where there are different levels of connectivity? We have to rethink not just how we build the systems, but also how we adapt our algorithms to best use them.”

    Orcutt began his journey into quantum information science at Columbia University, planning initially to be a patent lawyer, combining interests in debate and technology.

    “What I quickly realized was that there are many other ways to pursue science and have a fulfilling career that was closer to creating new technical ideas,” he said.

    He pivoted to a bachelor’s in electrical engineering, with no intention of attending graduate school. But, again, he changed his mind after a couple of happy lab experiences working on electronics and photonics. For his Ph.D. research at MIT, Orcutt built the first optical interconnects in the commercial manufacturing processes used for microprocessor and memory chips.

    “This was a wonderful project because it wasn’t just about the devices — it was connected to the systems, which is something that has always been a key draw for me throughout my life,” he said.

    In 2013, Orcutt joined IBM. It was a major shift for someone who started his career as ​“the one soldering the circuit, the one simulating the physics or coding the program,” he said. And while he continues to work directly with the technology, 10 years later, he’s also the one asking how quantum computers should be wired, what components are required to connect the qubits and what direction IBM should take to tackle these strategic and technology questions.

    Orcutt’s experience both at the bench and at the center of operations made him a valuable contributor to Q-NEXT’s 2022 quantum technology report ​“A Roadmap for Quantum Interconnects,” which outlines the discoveries needed to build practical quantum information technologies in one or two decades.

    “It was a useful exercise to define the important challenges and potential solutions that are emerging within the community and define it so it could be addressed by the center on a 10-year scale,” he said.

    Producing the roadmap is just one example of IBM’s collaborative effort with Q-NEXT.

    “The next phase of quantum information science will involve creating new materials and refined products that have superior quantum information performance. And to address that, we need a whole bunch of forces coming together, which is another reason why the shared infrastructure at centers like Q-NEXT are critical,” Orcutt said. ​“Trying to tackle these really hard problems is one of the main reasons we like to work with other industrial players, national labs and a broad consortium of academic groups. To us — to me and to IBM in general — that is a paramount reason to get involved in Q-NEXT: to be able to tackle the really hard problems together with the best people in the field.”

    Building the quantum workforce through education and outreach is another goal for IBM Quantum. IBM creates connections to the students, postdocs and other early-career scientists conducting research at centers like Q-NEXT, widening opportunities to grow its own quantum workforce.

    For those thinking of entering the field, Orcutt notes the excitement of quantum research.

    “When I have a new task or project, I initially have absolutely no idea how we’re going to solve it. The wonderful thing is, we’ve been able to make significant progress against our goals,” he said. ​“It’s been a wonderful journey of figuring out ways to contribute to the quantum effort and trying to solve problems along the way.”

    This work was supported by the DOE Office of Science National Quantum Information Science Research Centers as part of the Q-NEXT center.

    About Q-NEXT

    Q-NEXT is a U.S. Department of Energy National Quantum Information Science Research Center led by Argonne National Laboratory. Q-NEXT brings together world-class researchers from national laboratories, universities and U.S. technology companies with the goal of developing the science and technology to control and distribute quantum information. Q-NEXT collaborators and institutions will create two national foundries for quantum materials and devices, develop networks of sensors and secure communications systems, establish simulation and network test beds, and train the next-generation quantum-ready workforce to ensure continued U.S. scientific and economic leadership in this rapidly advancing field. For more information, visit https://q-next.org/.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

    The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Argonne National Laboratory

  • Quantum computers guess better, study finds


    Newswise — Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and first author Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, achieved a quantum speedup advantage in the context of a “bitstring guessing game.” They managed strings up to 26 bits long, significantly larger than previously possible, by effectively suppressing errors typically seen at this scale. (A bit is a binary digit, either zero or one.)

    Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’” This noise-prone condition of current quantum computing is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speed advantage necessitates noise reduction.

    The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.

    In their study, the researchers replaced words with bitstrings. A classical computer would, on average, require approximately 33 million guesses to correctly identify a 26-bit string. In contrast, a perfectly functioning quantum computer, presenting guesses in quantum superposition, could identify the correct answer in just one guess. This efficiency comes from running a quantum algorithm developed more than 25 years ago by computer scientists Ethan Bernstein and Umesh Vazirani. However, noise can significantly hamper this exponential quantum advantage.
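    As a rough illustration of the scaling at stake, here is a short sketch (ours, not the authors' code) that models the classical side as blind guessing over all 2^n candidate bitstrings, an assumption that reproduces the article's figure of roughly 33 million guesses for a 26-bit string; a noiseless Bernstein–Vazirani query would need just one.

    ```python
    import random

    def expected_blind_guesses(n_bits: int) -> float:
        # Guessing distinct candidates in random order: the hidden string is
        # equally likely to sit at any position, so the mean is (2^n + 1) / 2.
        return (2 ** n_bits + 1) / 2

    def simulate(n_bits: int, trials: int = 2000, seed: int = 0) -> float:
        """Monte Carlo estimate of the average number of blind guesses."""
        rng = random.Random(seed)
        space = list(range(2 ** n_bits))
        total = 0
        for _ in range(trials):
            secret = rng.randrange(2 ** n_bits)
            order = space[:]
            rng.shuffle(order)
            total += order.index(secret) + 1  # guesses until the secret appears
        return total / trials

    print(f"{expected_blind_guesses(26):,.0f}")  # roughly 33.5 million for 26 bits
    print(round(simulate(8)))                    # close to (2**8 + 1) / 2 = 128.5
    ```

    For small strings the simulated average matches the closed form (2^n + 1)/2, which for n = 26 gives the "approximately 33 million" quoted above.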

    Lidar and Pokharel achieved their quantum speedup by adapting a noise suppression technique called dynamical decoupling. They spent a year experimenting, with Pokharel working as a doctoral candidate under Lidar at USC. Initially, applying dynamical decoupling seemed to degrade performance. However, after numerous refinements, the quantum algorithm functioned as intended. The time to solve problems then grew more slowly than with any classical computer, with the quantum advantage becoming increasingly evident as the problems became more complex.

    Lidar notes that “currently, classical computers can still solve the problem faster in absolute terms.” In other words, the reported advantage is measured in terms of the time-scaling it takes to find the solution, not the absolute time. This means that for sufficiently long bitstrings, the quantum solution will eventually be quicker.

    The study conclusively demonstrates that with proper error control, quantum computers can execute complete algorithms with better scaling of the time it takes to find the solution than conventional computers, even in the NISQ era.

    University of Southern California (USC)

  • Qubits put new spin on magnetism: boosting applications of quantum computers


    Newswise — LOS ALAMOS, N.M., March 17, 2023 — Research using a quantum computer as the physical platform for quantum experiments has found a way to design and characterize tailor-made magnetic objects using quantum bits, or qubits. That opens up a new approach to develop new materials and robust quantum computing.

    “With the help of a quantum annealer, we demonstrated a new way to pattern magnetic states,” said Alejandro Lopez-Bezanilla, a virtual experimentalist in the Theoretical Division at Los Alamos National Laboratory. Lopez-Bezanilla is the corresponding author of a paper about the research in Science Advances.

    “We showed that a magnetic quasicrystal lattice can host states that go beyond the zero and one bit states of classical information technology,” Lopez-Bezanilla said. “By applying a magnetic field to a finite set of spins, we can morph the magnetic landscape of a quasicrystal object.”

    A quasicrystal is a structure composed of basic shapes that repeat according to rules different from those of regular crystals.

    For this work with Cristiano Nisoli, a theoretical physicist also at Los Alamos, a D-Wave quantum annealing computer served as the platform to conduct actual physical experiments on quasicrystals, rather than modeling them. This approach “lets matter talk to you,” Lopez-Bezanilla said, “because instead of running computer codes, we go straight to the quantum platform and set all the physical interactions at will.”

    The ups and downs of qubits

    Lopez-Bezanilla selected 201 qubits on the D-Wave computer and coupled them to each other to reproduce the shape of a Penrose quasicrystal.

    Since Roger Penrose conceived the aperiodic structures named after him in the 1970s, no one had put a spin on each of their nodes to observe their behavior under the action of a magnetic field.

    “I connected the qubits so all together they reproduced the geometry of one of his quasicrystals, the so-called P3,” Lopez-Bezanilla said. “To my surprise, I observed that applying specific external magnetic fields on the structure made some qubits exhibit both up and down orientations with the same probability, which leads the P3 quasicrystal to adopt a rich variety of magnetic shapes.”

    Manipulating both the interaction strength between qubits and the coupling of the qubits to the external field causes the quasicrystals to settle into different magnetic arrangements, offering the prospect of encoding more than one bit of information in a single object.
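    A quantum annealer of this kind is programmed through an Ising energy function, E = -Σ J_ij s_i s_j - Σ h_i s_i, where the couplings J_ij and local fields h_i are set by the experimenter. The toy sketch below (ours, far smaller than the 201-qubit Penrose lattice) shows how switching on a field reshapes which spin arrangements sit at the lowest energy.

    ```python
    from itertools import product

    def ising_energy(spins, couplings, fields):
        """E = -sum_ij J_ij s_i s_j - sum_i h_i s_i, with spins s_i in {-1, +1}."""
        e = -sum(j * spins[i] * spins[k] for (i, k), j in couplings.items())
        e -= sum(h * spins[i] for i, h in fields.items())
        return e

    def ground_states(n, couplings, fields):
        """Brute-force the lowest-energy spin configurations (tiny n only)."""
        best, states = float("inf"), []
        for spins in product((-1, 1), repeat=n):
            e = ising_energy(spins, couplings, fields)
            if e < best - 1e-9:
                best, states = e, [spins]
            elif abs(e - best) <= 1e-9:
                states.append(spins)
        return best, states

    # Toy 3-spin ring with antiferromagnetic couplings (J < 0): no arrangement
    # satisfies every bond, so six configurations tie for the minimum energy.
    J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}
    print(ground_states(3, J, fields={}))        # energy -1.0, six tied states

    # Switching on a local field h_0 "morphs the landscape": the degeneracy
    # drops and a different family of states becomes the ground state.
    print(ground_states(3, J, fields={0: 2.0}))  # energy -3.0, three tied states
    ```

    On the real device the couplings trace the P3 geometry instead of a triangle, but the principle is the same: tuning J and h selects which magnetic patterns the hardware relaxes into.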

    Some of these configurations exhibit no precise ordering of the qubits’ orientation. 

     “This can play in our favor,” Lopez-Bezanilla said, “because they could potentially host a quantum quasiparticle of interest for information science.” A spin quasiparticle is able to carry information immune to external noise.

    A quasiparticle is a convenient way to describe the collective behavior of a group of basic elements. Properties such as mass and charge can be ascribed to several spins moving as if they were one.

    The paper: “Field-induced magnetic phases in a qubit Penrose quasicrystal,” by Alejandro Lopez-Bezanilla and Cristiano Nisoli, in Science Advances. DOI: 10.1126/sciadv.adf6631.

    The funding: Los Alamos National Laboratory

    -30-

    LA-UR-23-22502

    Los Alamos National Laboratory

  • From Atoms to Earthquakes to Mars: High Performance Computing a Swiss Army Knife for Modeling and Simulation


    BYLINE: Idaho National Laboratory (INL)

    Newswise — Researchers solving today’s most important and complex energy challenges can’t always conduct real-world experiments.    

    This is especially true for nuclear energy research. Considerations such as cost, safety and limited resources can often make laboratory tests impractical. In some cases, the facility or capability necessary to conduct a proper experiment doesn’t exist.  

    At Idaho National Laboratory, computational scientists use INL’s supercomputers to perform “virtual experiments” to accomplish research that couldn’t be done by conventional means. While supercomputing can’t replace traditional experiments, supercomputing is an essential component of all modern scientific discoveries and advancements.  

    “Science is like a three-legged stool,” said Eric Whiting, director of Advanced Scientific Computing at INL. “One leg is theory, one is experiment, and the third is modeling and simulation. You cannot have modern scientific achievements without modeling and simulation.”

    HIGH-DEMAND RESOURCES 

    INL’s High Performance Computing program has been in high demand for years. From INL’s first supercomputer in 1993 to the addition of the Sawtooth supercomputer in 2020, the demand for high-performance computing has only increased.   

    Sawtooth and INL’s other supercomputers are flexible enough to tackle a wide range of modeling and simulation challenges and are especially suitable for dynamic and adaptive applications, like those used in nuclear energy research. INL’s High Performance Computing center is one of the Nuclear Science User Facilities’ 50 partner facilities, and the only one to offer supercomputers.

    Whether it’s exploring the effects of radiation on nuclear fuel or designing nuclear-powered rockets for a trip to Mars, INL’s High Performance Computing center is the Swiss Army knife of advanced computing.  

    THE POWER OF 100,000 LAPTOPS 

    On a recent tour of the Collaborative Computing Center, Whiting led the way through the rows of Sawtooth processors. Each row looked like dozens of tall black refrigerators standing side by side. The room hummed with the pumping of thousands of gallons of water needed to keep Sawtooth cool.  

    Sawtooth contains the computing power of about 100,000 processors all dedicated to very large, high-fidelity problems, which means orders of magnitude more processing power and memory when compared to a traditional laptop computer.  

    All that computing power allows researchers from around the world to run dozens of complex simulations at the same time. “If your program is designed right, it runs thousands of times faster than the best-case scenario on your desktop,” Whiting said.  

    Some of these simulations — modeling the performance of fuel inside an advanced reactor core, for instance — require the computer to solve millions or billions of unknowns repeatedly.  

    “If you have a multidimensional problem in space, and then you add time to it, it greatly adds to the size of the problem,” said Cody Permann, a computer scientist who oversees one of the laboratory’s modeling and simulation capabilities. Modeling and simulation started decades ago by solving simplified problems in one or two dimensions. Modern supercomputers, like INL’s Sawtooth, significantly increased the accuracy of these simulations, bringing them closer to reality.  

    To solve these complicated problems, researchers break down each simulation into thousands upon thousands of smaller units, each impacting the units surrounding it. The more units, the more detailed the simulation, and the more powerful the computer needed to run it.     
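    The idea of many small units, each influencing its neighbours, can be illustrated with a minimal one-dimensional heat-diffusion sketch (a generic finite-difference example, not INL's MOOSE code): every cell is updated from the cells beside it, and using more cells gives a finer but more expensive model.

    ```python
    import numpy as np

    def diffuse(temps: np.ndarray, alpha: float, steps: int) -> np.ndarray:
        """Explicit finite-difference update: each interior cell is nudged
        toward the average of its two neighbours, so heat spreads locally."""
        t = temps.astype(float).copy()
        for _ in range(steps):
            # interior update: t_i += alpha * (t_{i-1} - 2 t_i + t_{i+1})
            t[1:-1] += alpha * (t[:-2] - 2 * t[1:-1] + t[2:])
        return t

    # A hot spot in the middle of a cold rod spreads out over time; doubling
    # the number of cells (finer units) gives a more detailed simulation at
    # a higher computational cost, which is where supercomputers come in.
    rod = np.zeros(11)
    rod[5] = 100.0
    print(diffuse(rod, alpha=0.25, steps=50).round(1))
    ```

    Real reactor simulations couple many such grids in three dimensions plus time, with millions or billions of unknowns per step, as described above.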

    THE ATOMIC EFFECTS OF RADIATION ON MATERIALS 

    For Chao Jiang, a distinguished staff scientist at INL, a highly detailed simulation means peering down to the level of individual atoms.  

    Jiang’s simulations, funded by the Department of Energy Nuclear Energy Advanced Modeling and Simulation program and the Basic Energy Sciences program, help nuclear scientists understand the behavior of materials when their atoms are constantly knocked around by neutrons in a reactor core. These displaced atoms will create defects, changing the microstructure of the material, and therefore its physical and mechanical characteristics. These changes in microstructure can damage the materials and reduce the lifetime of the reactor. Understanding these changes helps scientists design better and safer reactors. 

    “The work we are doing is extremely challenging,” Jiang said. “They are computer-hungry projects. We are big users of the high-performance computers.” 

    Understanding the radiation damage in materials is difficult. This change involves physical processes that occur across vastly different time and length scales. “When the high energy neutrons hit the material,” Jiang said, “it will locally melt the material.” 

    Heating and cooling inside an operating reactor take place in picoseconds (trillionths of a second). During this heating and cooling, the material will re-solidify but leave defects behind, Jiang said. “These residual defects will migrate and accumulate to form large-scale defects in the long run.”

    While large defects, such as dislocation loops and voids, can be seen directly using advanced microscopy techniques, many small-scale defects remain invisible under the microscope. These small defects can significantly impact the materials, making computer simulations critical to filling this knowledge gap. INL computational scientists combine their simulations with the advanced characterization techniques performed by materials scientists at INL’s Materials and Fuels Complex to advance the understanding of material behavior in a nuclear reactor.

    SIMULATING THE IMPACTS OF EARTHQUAKES ON REACTOR MATERIALS  

    Another INL scientist, Chandu Bolisetti, also simulates the damage to materials, but on a much different scale.  

    Bolisetti, who leads the lab’s Facility Risk Group, uses high-performance computing to simulate the effects of seismic waves — the shaking that results from an earthquake — on energy infrastructure such as nuclear power plants or dams.  

    In early 2021, funded by the DOE Office of Technology Transitions, Bolisetti and his colleagues performed a particularly complex type of simulation — they simulated the impacts of seismic waves on a nuclear power plant building that houses a molten salt reactor.  

    A molten salt reactor is a particularly difficult physics problem because the coolant/fuel circulates through the reactor in liquid form. The team also placed their hypothetical reactor on seismic isolators, giant shock absorbers that help reduce the impacts of earthquakes on buildings. 

    Bolisetti’s team ran the simulation using MOOSE, which stands for Multiphysics Object Oriented Simulation Environment, a software framework that allows researchers to develop modeling and simulation tools for solving multiphysics problems. For these earthquake simulation problems, Bolisetti’s team uses MASTODON, which they developed using MOOSE specifically for seismic analysis.    

    Another project funded by INL’s Laboratory Directed Research and Development program looks at how a molten salt reactor behaves in an earthquake in much more detail. It extends the analysis to include neutronics and thermal hydraulics — in other words, how the shaking impacts nuclear fission and the distribution of heat in the reactor core. 

    “All three of these physics — earthquake response, thermal hydraulics and neutronics — are pretty complicated,” Bolisetti said. “No one has ever combined these into one simulation. How the power in the reactor fluctuates during an earthquake is important for safety protocols. It affects what the operators would do during an earthquake and helps us understand the core physics and design safer reactors.” 

    “Real-world experiments to simulate this are close to impossible, especially when you add neutronics,” Bolisetti said. “That’s where these kinds of multi-physics simulations really shine.”   

    SIMULATING NUCLEAR ROCKETS FOR A TRIP TO MARS 

    Mark DeHart, a senior reactor physicist at INL, uses MOOSE to simulate an entirely different kind of complex machine: a nuclear thermal rocket that could someday take humans to Mars.

    The rocket would use hydrogen as both a propellant and a coolant. When the rocket is in use, hydrogen would run from storage tanks through the reactor core. The reactor would rapidly heat the hydrogen before it exits the rocket nozzles.  

    “The hydrogen that comes out is pure thrust,” DeHart said.  

    Compared with chemical rockets, nuclear thermal rockets are faster and twice as efficient. The rockets could cut travel time to Mars in half.

    One big challenge is rapidly heating the reactor core from about 26 degrees Celsius (80 degrees Fahrenheit) to nearly 2,760 Celsius (5,000 Fahrenheit) without damaging the reactor or the fuel.  

    DeHart and his colleagues are using Griffin, a MOOSE-based advanced reactor physics tool, for multiphysics modeling of two aspects of the NASA mission.  

    The first project tests the fuel’s performance as it experiences rapid heating in the reactor core. The real-world fuel samples are placed in INL’s Transient Test Reactor (TREAT) where they are rapidly brought up to temperature.  

    The data from those experiments are used to create and validate models of the fuel’s neutronics and heat transfer characteristics using Griffin. 

    “If we can show that Griffin can model this real-world sample correctly, we can have confidence that Griffin can calculate correctly something that doesn’t exist yet,” DeHart said.   

    The second project is designing the rocket engines themselves. Automated controllers rotate drums in the reactor core to bring the temperature up and down. “We’ve developed a simulation that will show how you can use the control drums to bring the reactor from cold to nearly 5,000 F within 30 seconds,” DeHart said.  

    Without high-performance computing and MOOSE, developing a nuclear thermal rocket would take dozens of small experiments costing hundreds of millions of dollars.

    AN OPPORTUNITY FOR COLLABORATION 

    In the end, high-performance computing makes INL a gathering place for researchers with a wide range of expertise, from rocket design to artificial intelligence. About half the system’s users are from national labs, with a quarter coming from universities and a quarter from industry. The resulting collaborations are especially important for nuclear energy research.  

    “INL cannot attract all the experts in our field, but by sharing a computer, INL’s team can work with 1,200 experts across the United States,” Whiting said. “INL’s supercomputers are helping build the expertise and develop the tools so they can deploy next-generation reactors.” 

    And the demand for these modeling and simulation resources is only growing. Sawtooth more than quadrupled INL’s high-performance computing capacity, yet the queue of waiting jobs can already reach into the thousands.

    “We need years of research with the High Performance Computing facility,” said Jiang. “We need to understand the high energy state of nuclear materials as accurately as possible, so we need to explore a huge space. Without high-performance computing, basic energy research would suffer. It’s critical.”  

    If you are interested in accessing INL’s supercomputers for your work, visit inl.gov/ncrc or nsuf.inl.gov 

    About Idaho National Laboratory
    Battelle Energy Alliance manages INL for the U.S. Department of Energy’s Office of Nuclear Energy. INL is the nation’s center for nuclear energy research and development, and also performs research in each of DOE’s strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn. 

     

    Idaho National Laboratory (INL)

  • Proposed quantum device may succinctly realize emergent particles such as the Fibonacci anyon


    Newswise — Long before Dr. Jukka Vayrynen was an assistant professor in the Purdue Department of Physics and Astronomy, he was a postdoc investigating a theoretical model with emergent particles in a condensed matter setting. Once he arrived at Purdue, he intended to expand on the model, expecting it to be relatively easy. He gave the seemingly straightforward calculations to Guangjie Li, a graduate student working with Vayrynen, but they yielded an unexpected result: a surprising roadblock that nearly brought the research to a halt. The team’s tenacity turned that roadblock into a possible route to the development of quantum computing.

    At the Aspen Center for Physics in Colorado, Vayrynen discussed this issue with a colleague from the Weizmann Institute of Science in Israel, Dr. Yuval Oreg, who helped circumvent the obstacle. The team used this new understanding of their calculations to propose a quantum device that could be tested experimentally to succinctly realize emergent particles such as the Fibonacci anyon. They have published their findings, “Multichannel topological Kondo effect,” in Physical Review Letters on February 10, 2023.

    Condensed matter theory is a field of physics that studies, for example, the properties of electronic quantum systems, with applications to technologies such as superconductors, transistors, or quantum computing devices. One of the challenges in this field is understanding the quantum mechanical behavior of many electrons, also known as the “many-body problem.” It is a problem because it can only be theoretically modeled in very limited cases. However, even in those limited cases, rich emergent phenomena such as collective excitations or fractionally charged emergent “quasi”-particles are known to emerge. These phenomena are a result of the complex interactions between electrons and can lead to the development of new materials and technologies.
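    A quick way to see why the many-body problem resists exact treatment: the memory needed to store the full quantum state of n two-level particles grows as 2^n. The sketch below (a generic illustration, not from the paper) counts the bytes for a complex state vector at 16 bytes per amplitude.

    ```python
    def statevector_bytes(n: int, bytes_per_amp: int = 16) -> int:
        """Memory for the exact state of n two-level quantum particles:
        2**n complex amplitudes, each stored as a complex double."""
        return 2 ** n * bytes_per_amp

    for n in (10, 30, 50):
        gib = statevector_bytes(n) / 2 ** 30
        print(f"{n} particles: {gib:,.6g} GiB")
    ```

    Ten particles fit in kilobytes, thirty already need 16 GiB, and fifty would need millions of gibibytes, which is why theorists can model only very limited cases exactly and why emergent, collective descriptions are so valuable.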

    “In our paper, we propose a quantum device that is simple enough to be theoretically modeled and tested experimentally in the future, yet also complex enough to display non-trivial emergent particles,” says Vayrynen. “Our results indicate that the proposed device can realize an emergent particle called a Fibonacci anyon that can be used as a building block of a quantum computer. The device is therefore a promising candidate for the development of quantum computing technology.”

    This discovery could be used in future quantum computers in a way that allows one to make them more resistant to decoherence, a.k.a. noise.

    According to their publication, the team introduced a physically motivated N-channel generalization of a topological Kondo model.  Starting from the simplest case N = 2, they conjecture a stable intermediate coupling fixed point and evaluate the resulting low-temperature impurity entropy. The impurity entropy indicates that an emergent Fibonacci anyon can be realized in the N = 2 model. 

    According to Li, “a Fibonacci anyon is an emergent particle with the property that as you add more particles to the system, the number of quantum states grows like the Fibonacci sequence, 1, 2, 3, 5, 8, etc. In our system, a small quantum device is connected to conduction electron leads which will overly screen the device and can result in an emergent Fibonacci anyon.”
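    Li's counting can be reproduced directly: successive Fibonacci numbers give the growth of the state space, and the ratio of successive terms approaches the golden ratio, the Fibonacci anyon's quantum dimension. (The exact starting index depends on boundary conditions; the sketch below, ours rather than the authors', simply follows the 1, 2, 3, 5, 8 sequence quoted above.)

    ```python
    def fib_dims(n_terms):
        """Successive Fibonacci numbers, which (up to indexing conventions)
        count the quantum states available to a growing set of Fibonacci
        anyons: each new term is the sum of the previous two."""
        dims = [1, 2]
        while len(dims) < n_terms:
            dims.append(dims[-1] + dims[-2])
        return dims[:n_terms]

    dims = fib_dims(20)
    print(dims[:6])                      # [1, 2, 3, 5, 8, 13], as Li describes
    print(round(dims[-1] / dims[-2], 6)) # approaches the golden ratio ~1.618034
    ```

    The asymptotic growth rate per added particle, the golden ratio φ ≈ 1.618, is what makes the Fibonacci anyon rich enough to serve as a building block for universal quantum computation.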

    The team also gives a number of predictions that could be experimentally tested in future quantum devices.

     “We evaluate the zero-temperature impurity entropy and conductance to obtain experimentally observable signatures of our results. In the large-N limit we evaluate the full crossover function describing the temperature-dependent conductance,” says Vayrynen.

    This research is the first in a series that the Purdue team of Li and Vayrynen will work on. They collaborated with Dr. Elio König, a senior scientist at the Max Planck Institute for Solid State Research in Germany, and posted a related work, “Topological Symplectic Kondo Effect,” as a preprint on arXiv (2210.16614) on October 20, 2022.

    This research was based on work supported by the Quantum Science Center, a U.S. Department of Energy National Quantum Information Science Research Center headquartered at DOE’s Oak Ridge National Laboratory. Dr. Yong Chen, the Karl Lark-Horovitz Professor of Physics and Astronomy and Professor of Electrical and Computer Engineering, is on the QSC’s Governance Advisory Board, and Purdue is one of the center’s core partners.

    About the Department of Physics and Astronomy at Purdue University

    Purdue Department of Physics and Astronomy has a rich and long history dating back to 1904. Our faculty and students are exploring nature at all length scales, from the subatomic to the macroscopic and everything in between. With an excellent and diverse community of faculty, postdocs, and students who are pushing new scientific frontiers, we offer a dynamic learning environment, an inclusive research community, and an engaging network of scholars.  

    Physics and Astronomy is one of the seven departments within the Purdue University College of Science. World-class research is performed in astrophysics, atomic and molecular optics, accelerator mass spectrometry, biophysics, condensed matter physics, quantum information science, particle and nuclear physics. Our state-of-the-art facilities are in the Physics Building, but our researchers also engage in interdisciplinary work at Discovery Park District at Purdue, particularly the Birck Nanotechnology Center and the Bindley Bioscience Center.  We also participate in global research including at the Large Hadron Collider at CERN, Argonne National Laboratory, Brookhaven National Laboratory, Fermilab, the Stanford Linear Accelerator, the James Webb Space Telescope, and several observatories around the world. 

    About Purdue University

    Purdue University is a top public research institution developing practical solutions to today’s toughest challenges. Ranked in each of the last five years as one of the 10 Most Innovative universities in the United States by U.S. News & World Report, Purdue delivers world-changing research and out-of-this-world discovery. Committed to hands-on and online, real-world learning, Purdue offers a transformative education to all. Committed to affordability and accessibility, Purdue has frozen tuition and most fees at 2012-13 levels, enabling more students than ever to graduate debt-free. See how Purdue never stops in the persistent pursuit of the next giant leap at https://stories.purdue.edu.

     

    Contributors:

    Dr. Jukka Vayrynen, Assistant Professor of Physics and Astronomy

    Guangjie Li, Graduate Student

    Purdue University

  • The optical fiber that keeps data safe even after being twisted or bent


    Newswise — Optical fibres are the backbone of our modern information networks. From long-range communication over the internet to high-speed information transfer within data centres and stock exchanges, optical fibre remains critical in our globalised world.

    Fibre networks are not, however, structurally perfect, and information transfer can be compromised when things go wrong. To address this problem, physicists at the University of Bath in the UK have developed a new kind of fibre designed to enhance the robustness of networks. This robustness could prove to be especially important in the coming age of quantum networks.

    The team has fabricated optical fibres (the flexible glass channels through which information is sent) that can protect light (the medium through which data is transmitted) using the mathematics of topology. Best of all, these modified fibres are easily scalable, meaning the structure of each fibre can be preserved over thousands of kilometres.

    The Bath study is published in the latest issue of Science Advances.

    Protecting light against disorder

    At its simplest, optical fibre, which typically has a diameter of 125 µm (similar to a thick strand of hair), comprises a core of solid glass surrounded by cladding. Light travels through the core, where it bounces along as though reflecting off a mirror.

    However, the pathway taken by an optical fibre as it crisscrosses the landscape is rarely straight and undisturbed: turns, loops, and bends are the norm. Distortions in the fibre can cause information to degrade as it moves between sender and receiver. “The challenge was to build a network that takes robustness into account,” said Physics PhD student Nathan Roberts, who led the research.

    “Whenever you fabricate a fibre-optic cable, small variations in the physical structure of the fibre are inevitably present. When deployed in a network, the fibre can also get twisted and bent. One way to counter these variations and defects is to ensure the fibre design process includes a real focus on robustness. This is where we found the ideas of topology useful.”

    To design this new fibre, the Bath team used topology, which is the mathematical study of quantities that remain unchanged despite continuous distortions to the geometry. Its principles are already applied to many areas of physics research. By connecting physical phenomena to unchanging numbers, the destructive effects of a disordered environment can be avoided.

    The fibre designed by the Bath team deploys topological ideas by including several light-guiding cores in a fibre, linked together in a spiral. Light can hop between these cores but becomes trapped within the edge thanks to the topological design. These edge states are protected against disorder in the structure.
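    The Bath model itself is not reproduced here, but the standard textbook example of such protected edge states, the Su-Schrieffer-Heeger (SSH) tight-binding chain, shows the same qualitative behaviour: with alternating hopping strengths standing in for coupled fibre cores, a near-zero-energy mode stays pinned to the edge even when the hoppings are randomly perturbed.

    ```python
    import numpy as np

    def ssh_hamiltonian(n_cells, t1, t2, rng=None, noise=0.0):
        """Tight-binding chain with alternating hoppings t1, t2 between
        sites (a stand-in for light hopping between neighbouring cores).
        Optional `noise` perturbs each hopping to mimic fabrication disorder."""
        n = 2 * n_cells
        h = np.zeros((n, n))
        for i in range(n - 1):
            t = t1 if i % 2 == 0 else t2
            if rng is not None:
                t += noise * rng.standard_normal()
            h[i, i + 1] = h[i + 1, i] = t
        return h

    rng = np.random.default_rng(1)
    # Topological phase (|t1| < |t2|): a near-zero-energy mode appears,
    # localised at the chain's ends even with disordered hoppings.
    h = ssh_hamiltonian(20, t1=0.3, t2=1.0, rng=rng, noise=0.05)
    vals, vecs = np.linalg.eigh(h)
    k = int(np.argmin(np.abs(vals)))          # the mode closest to zero energy
    edge_weight = vecs[:, k] ** 2
    print(round(float(abs(vals[k])), 3))      # near zero: the protected mode
    print(edge_weight[:4].sum() > edge_weight[18:22].sum())  # weight at the edge
    ```

    In the fibre the "chain" is wound into a spiral of cores, but the mechanism is the same: the mode's existence is tied to a topological invariant, not to the precise hopping values, so moderate disorder cannot destroy it.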

    Bath physicist Dr Anton Souslov, who co-authored the study as theory lead, said: “Using our fibre, light is less influenced by environmental disorder than it would be in an equivalent system lacking topological design.

    “By adopting optical fibres with topological design, researchers will have the tools to pre-empt and forestall signal-degrading effects by building inherently robust photonic systems.”

    Theory meets practical expertise

    Bath physicist Dr Peter Mosley, who co-authored the study as experimental lead, said: “Previously, scientists have applied the complex mathematics of topology to light, but here at the University of Bath we have lots of experience physically making optical fibres, so we put the mathematics together with our expertise to create topological fibre.”

The team, which also includes PhD student Guido Baardink and Dr Josh Nunn from the Department of Physics, is now looking for industry partners to develop the concept further.

    “We are really keen to help people build robust communication networks and we are ready for the next phase of this work,” said Dr Souslov.

    Mr Roberts added: “We have shown that you can make kilometres of topological fibre wound around a spool. We envision a quantum internet where information will be transmitted robustly across continents using topological principles.”

    He also pointed out that this research has implications that go beyond communications networks. He said: “Fibre development is not only a technological challenge, but also an exciting scientific field in its own right.

“Understanding how to engineer optical fibre has led to light sources ranging from bright ‘supercontinuum’ sources that span the entire visible spectrum right down to quantum light sources that produce individual photons – single particles of light.”

    The future is quantum

Quantum networks are widely expected to play an important technological role in years to come. Quantum technologies have the capacity to store and process information in more powerful ways than ‘classical’ computers can today, as well as to send messages securely across global networks without any chance of eavesdropping.

    But the quantum states of light that transmit information are easily impacted by their environment and finding a way to protect them is a major challenge. This work may be a step towards maintaining quantum information in fibre optics using topological design.

    University of Bath

  • Dawn of solid-state quantum networks

Newswise — This year’s Nobel Prize in Physics celebrated the fundamental importance of quantum entanglement, and also envisioned its potential applications in “the second quantum revolution” — a new age in which we are able to harness the weirdness of quantum mechanics, including quantum superposition and entanglement. A large-scale, fully functional quantum network is the holy grail of quantum information science. It would open a new frontier of physics, with new possibilities for quantum computation, communication, and metrology.

    One of the most significant challenges is to extend the distance of quantum communication to a practically useful scale. Unlike classical signals that can be noiselessly amplified, quantum states in superposition cannot be amplified because they cannot be perfectly cloned. Therefore, a high-performance quantum network requires not only ultra-low-loss quantum channels and quantum memory, but also high-performance quantum light sources. There has been exciting recent progress in satellite-based quantum communications and quantum repeaters, but a lack of suitable single-photon sources has hampered further advances.

    What is required of a single-photon source for quantum network applications? First, it should emit one (only one) photon at a time. Second, to attain brightness, the single-photon sources should have high system efficiency and a high repetition rate. Third, for applications such as in quantum teleportation that require interfering with independent photons, the single photons should be indistinguishable. Additional requirements include a scalable platform, tunable and narrowband linewidth (favorable for temporal synchronization), and interconnectivity with matter qubits.

A promising source is quantum dots (QDs), semiconductor particles just a few nanometers across. However, in the past two decades, the visibility of quantum interference between independent QDs has rarely exceeded the classical limit of 50%, and the separation between the sources has been limited to a few meters, or at most a few kilometers.
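The 50% figure comes from standard two-photon (Hong-Ou-Mandel) interference: when two photons meet at a 50:50 beam splitter, the probability that they exit from different ports depends on how indistinguishable they are, and classical light fields can never push the interference visibility above one half. A textbook-level sketch of that relation (the function names here are illustrative, not from the paper):

```python
def hom_coincidence(overlap):
    """Coincidence probability at a 50:50 beam splitter for two single
    photons whose modes have squared overlap `overlap` (0 to 1).
    Fully indistinguishable photons (overlap = 1) always bunch and
    never produce a coincidence."""
    return 0.5 * (1.0 - overlap)

def visibility(overlap):
    """Hong-Ou-Mandel dip visibility, measured relative to fully
    distinguishable photons (which coincide half the time)."""
    p_dist = hom_coincidence(0.0)  # 0.5 for distinguishable photons
    return (p_dist - hom_coincidence(overlap)) / p_dist

print(hom_coincidence(0.0))  # 0.5: distinguishable photons, no dip
print(visibility(1.0))       # 1.0: ideal identical single photons
```

For true single photons the visibility simply equals the mode overlap, so it can approach 100%, whereas interference between independent classical fields is capped at 50% — which is why sustained visibilities above that bound certify genuinely quantum sources.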

    As reported in Advanced Photonics, an international team of researchers has achieved high-visibility quantum interference between two independent QDs linked with ~300 km optical fibers. They report efficient and indistinguishable single-photon sources with ultra-low-noise, tunable single-photon frequency conversion, and low-dispersion long fiber transmission. The single photons are generated from resonantly driven single QDs deterministically coupled to microcavities. Quantum frequency conversions are used to eliminate the QD inhomogeneity and shift the emission wavelength to the telecommunications band. The observed interference visibility is up to 93%. According to senior author Chao-Yang Lu, professor at the University of Science and Technology of China (USTC), “Feasible improvements can further extend the distance to ~600 km.”

    Lu remarks, “Our work jumped from the previous QD-based quantum experiments at a scale from ~1 km to 300 km, two orders of magnitude larger, and thus opens an exciting prospect of solid-state quantum networks.” With this reported jump, the dawn of solid-state quantum networks may soon begin breaking toward day.

Read the Gold Open Access article by X. You et al., “Quantum interference with independent single-photon sources over 300 km fiber,” Adv. Photon. 4(6), 066003 (2022), doi: 10.1117/1.AP.4.6.066003.

    SPIE

  • Using Machine Learning to Better Understand How Water Behaves

Newswise — Water has puzzled scientists for decades. For the last 30 years or so, they have theorized that when cooled to a very low temperature, around -100 °C, water might be able to separate into two liquid phases of different densities. Like oil and water, these phases don’t mix, and their existence may help explain some of water’s other strange behavior, like how it becomes less dense as it cools.

    It’s almost impossible to study this phenomenon in a lab, though, because water crystallizes into ice so quickly at such low temperatures. Now, new research from the Georgia Institute of Technology uses machine learning models to better understand water’s phase changes, opening more avenues for a better theoretical understanding of various substances. With this technique, the researchers found strong computational evidence in support of water’s liquid-liquid transition that can be applied to real-world systems that use water to operate.

    “We are doing this with very detailed quantum chemistry calculations that are trying to be as close as possible to the real physics and physical chemistry of water,” said Thomas Gartner, an assistant professor in the School of Chemical and Biomolecular Engineering at Georgia Tech. “This is the first time anyone has been able to study this transition with this level of accuracy.”

    The research was presented in the paper, “Liquid-Liquid Transition in Water From First Principles,” in the journal Physical Review Letters, with co-authors from Princeton University.

    Simulating Water

    To better understand how water interacts, the researchers ran molecular simulations on supercomputers, which Gartner compared to a virtual microscope.

    “If you had an infinitely powerful microscope, you could zoom in all the way down to the level of the individual molecules and watch them move and interact in real time,” he said. “This is what we’re doing by creating almost a computational movie.”

    The researchers analyzed how the molecules move and characterized the liquid structure at different water temperatures and pressures, mimicking the phase separation between the high and low-density liquids. They collected extensive data — running some simulations for up to a year — and continued to fine-tune their algorithms for more accurate results.

Even a decade ago, running such long and detailed simulations wouldn’t have been possible, but machine learning now offers a shortcut. The researchers used a machine learning algorithm that calculates the interaction energy of water molecules. This model performed the calculation significantly faster than traditional techniques, allowing the simulations to progress much more efficiently.

Machine learning isn’t perfect, so these long simulations were also needed to improve the accuracy of the predictions. The researchers were careful to test their predictions with different types of simulation algorithms: if multiple simulations gave similar results, that agreement validated their accuracy.
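A toy version of that consistency check, using a simple harmonic model in place of the real water potential (everything here is an illustrative stand-in, not the study's method): two Metropolis Monte Carlo runs with deliberately different proposal schemes should converge to the same average energy, and disagreement would flag a sampling problem.

```python
import math
import random

def metropolis_mean_energy(step_size, n_steps=200_000, temperature=1.0, seed=0):
    """Metropolis sampling of a 1-D harmonic well V(x) = x^2 / 2.
    Returns the estimated mean potential energy, which should come out
    near temperature / 2 regardless of the proposal step size."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        trial = x + rng.uniform(-step_size, step_size)
        # Accept the move with the Boltzmann ratio exp(-dV / T).
        if rng.random() < math.exp(-(trial**2 - x**2) / (2 * temperature)):
            x = trial
        total += x * x / 2
    return total / n_steps

# Two "independent algorithms": the same physics sampled with very
# different proposal widths and random seeds.
a = metropolis_mean_energy(step_size=0.5, seed=1)
b = metropolis_mean_energy(step_size=3.0, seed=2)
print(a, b)  # both near 0.5 = temperature / 2
```

When two estimators that share nothing but the underlying model agree within statistical error, the result is much harder to attribute to an artifact of either algorithm — the same logic, scaled up enormously, underpins the water study's cross-validation.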

    “One of the challenges with this work is that there’s not a lot of data that we can compare to because it’s a problem that’s almost impossible to study experimentally,” Gartner said. “We’re really pushing the boundaries here, so that’s another reason why it’s so important that we try to do this using multiple different computational techniques.”

    Beyond Water

    Some of the conditions the researchers tested were extremes that probably don’t exist on Earth directly, but potentially could be present in various water environments of the solar system, from the oceans of Europa to water in the center of comets. Yet these findings could also help researchers better explain and predict water’s strange and complex physical chemistry, informing water’s use in industrial processes, developing better climate models, and more.  

    The work is even more generalizable, according to Gartner. Water is a well-studied research area, but this methodology could be expanded to other difficult-to-simulate materials like polymers, or complex phenomena like chemical reactions.

    “Water is so central to life and industry, so this particular question of whether water can undergo this phase transition has been a longstanding problem, and if we can move toward an answer, that’s important,” he said. “But now we have this really powerful new computational technique, but we don’t yet know what the boundaries are and there’s a lot of room to move the field forward.”

CITATION: T.E. Gartner III, P.M. Piaggi, R. Car, A.Z. Panagiotopoulos, P.G. Debenedetti, “Liquid-liquid transition in water from first principles,” Phys. Rev. Lett. 129, 255702 (2022).

DOI: 10.1103/PhysRevLett.129.255702

    The Georgia Institute of Technology, or Georgia Tech, is one of the top public research universities in the U.S., developing leaders who advance technology and improve the human condition. The Institute offers business, computing, design, engineering, liberal arts, and sciences degrees. Its more than 46,000 students, representing 50 states and more than 150 countries, study at the main campus in Atlanta, at campuses in France and China, and through distance and online learning. As a leading technological university, Georgia Tech is an engine of economic development for Georgia, the Southeast, and the nation, conducting more than $1 billion in research annually for government, industry, and society. 

    Georgia Institute of Technology

  • Argonne wins 3 HPCwire awards

    Newswise — The awards recognize collaborative science using high performance computing.

    The U.S. Department of Energy’s (DOE) Argonne National Laboratory has been recognized with three awards from HPCwire, a leading website covering the high performance computing industry. The awards were announced Nov. 14 at SC22, the annual supercomputing conference in Dallas, Texas.

    The awards recognize Argonne’s leadership in high performance computing, including collaborations with industry. Today’s scientific advances often depend on the ability to solve large complex problems relatively quickly with powerful computers and algorithms. Argonne has been using high performance computing for goals ranging from more efficient engines to exploring the cosmos.

    In addition to world-leading computer science expertise, the Lab is home to the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility. HPCwire honored Argonne with several awards last year.

    Improving artificial intelligence tools

    Work led by Argonne to broaden usability for artificial intelligence (AI) models won a Readers’ Choice Award in the Best Use of High Performance Data Analytics & Artificial Intelligence category.

The research aims to make data science more easily reproducible through a set of principles known as FAIR: findable, accessible, interoperable and reusable. The team included scientists from Argonne, the University of Chicago, the National Center for Supercomputing Applications, and the University of Illinois Urbana-Champaign. They created a computational framework that enables artificial intelligence models to run seamlessly across various types of hardware and software platforms and yield the same results.

    The research was funded by DOE’s Office of Advanced Scientific Computing Research, the National Institute of Standards and Technology, the National Science Foundation and Argonne Laboratory Directed Research and Development grants. To perform the computations, the team used the ALCF AI Testbed’s SambaNova system and the Theta supercomputer’s NVIDIA graphics processing units. The data for the study was acquired at the Advanced Photon Source, also a DOE Office of Science user facility.

    Collaborating with industry for real-world solutions

    Argonne received another Readers’ Choice Award in the Best Use of HPC in Industry (Automotive, Aerospace, Manufacturing, Chemical) category. Together with the Raytheon Technologies Research Center, Argonne developed machine learning models for designing and optimizing high-efficiency gas turbines in aircraft. The machine learning models were trained on computational fluid dynamics (CFD) simulations of gas turbine film cooling performed on DOE supercomputers. CFD simulations approximate how fluids like air or fuel move, and they are key to enhancing efficiency in machines of all kinds. The researchers’ framework can extend fuel efficiency and durability of aircraft engines while slashing design times and costs. The work is funded by DOE’s Advanced Manufacturing Office via the HPC4EnergyInnovation program.

    In the same industry category, Argonne also won an Editors’ Choice Award for its work with Aramco Americas and Convergent Science focused on high fidelity CFD simulations of hydrogen engines using resources at ALCF and Argonne’s Laboratory Computing Resource Center. The work will help expedite the adoption of clean, highly efficient hydrogen propulsion systems for the transportation sector, facilitating an accelerated transition to low-carbon energy.

    “These awards recognize projects that are quite distinct in their own ways, but they share a common theme: collaboration,” said Rick Stevens, Argonne associate laboratory director for the Computing, Environment and Life Sciences division and an Argonne Distinguished Fellow. ​“We are pushing to move scientific insights from supercomputing into real-world solutions.”

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

    About the Advanced Photon Source

The U.S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

    This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Argonne National Laboratory

  • Commerce tightens export controls on high end chips to China

    The Commerce Department is tightening export controls to limit China’s ability to get advanced computing chips, develop and maintain supercomputers, and make advanced semiconductors

The Commerce Department is tightening export controls to limit China’s ability to get advanced computing chips, develop and maintain supercomputers, and make advanced semiconductors.

The department said Friday that its updated export controls focus on these areas because China can use the chips, supercomputers and semiconductors to create advanced military systems, including weapons of mass destruction; commit human rights abuses; and improve the speed and accuracy of its military decision-making, planning, and logistics.

    Commerce said the updates are part of ongoing efforts to protect U.S. national security and foreign policy interests.

    “The threat environment is always changing, and we are updating our policies today to make sure we’re addressing the challenges posed by (China) while we continue our outreach and coordination with allies and partners,” Under Secretary of Commerce for Industry and Security Alan Estevez said in a statement.

    Commerce said it consulted with close allies and partners on its control efforts.

Thursday, at an event in upstate New York, President Biden predicted that a $20 billion investment by IBM in New York’s Hudson River Valley would help give the United States a technological edge against China. The investment was spurred by this summer’s passage of a $280 billion measure intended to boost the semiconductor industry and scientific research. That legislation was needed for national and economic security, Biden said in Poughkeepsie, adding that “the Chinese Communist Party actively lobbied against” it.

    Tensions have been rising between the U.S. and China over technology and security. Last month the Chinese government called on Washington to repeal its technology export curbs after California-based chip designer Nvidia said a new product might be delayed and some work might be moved out of China.

    Washington has tightened controls and lobbied allies to limit Chinese access to the most advanced chips and tools to develop its own. China is spending heavily to develop its fledgling producers but so far cannot make high-end chips used in the most advanced smartphones and other devices.
