ReportWire

Tag: Pacific Northwest National Laboratory

  • “Energy Droughts” in Wind and Solar Can Last Nearly a Week, Research Shows

    Newswise — Solar and wind power may be free, renewable fuels, but they also depend on natural processes that humans cannot control. The familiar risks of renewable energy are easy enough to acknowledge: the sun doesn’t always shine and the wind doesn’t always blow. But what happens when the grid loses both of these energy sources at the same time?

    This phenomenon is known as a compound energy drought. In a new paper, researchers at Pacific Northwest National Laboratory (PNNL) found that in some parts of the country, these energy droughts can last nearly a week.

    “When we have a completely decarbonized grid and depend heavily on solar and wind, energy droughts could have huge amounts of impact on the grid,” said Cameron Bracken, an Earth scientist at PNNL and lead author on the paper. Grid operators need to know when energy droughts will occur so they can prepare to pull energy from different sources. On top of that, understanding where, when, and for how long energy droughts occur will help experts manage grid-level battery systems that can store enough electricity to deploy during times when energy is needed most.

    The team published the findings October 31 in the journal Renewable Energy and will be presenting at this week’s annual meeting of the American Geophysical Union.

    Hunting for cloudy, windless days

    In the past, researchers studied compound energy droughts on a state or regional scale, but little work has examined them nationwide. To find out more about the risk of energy droughts over the entire continental U.S., the researchers dug into weather data and then used historical energy demand data to understand how often an energy drought occurs when that energy is needed the most.

    The team examined four decades of hourly weather data for the continental U.S. and homed in on geographical areas where actual solar and wind energy plants operate today. Weather data included wind speeds at the height of wind turbines as well as the intensity of solar energy falling on solar panels. Times when the weather data showed stagnant air and cloudy skies translated into lower energy generation from the wind and solar plants—a compound energy drought.

    “We essentially took a snapshot of the infrastructure as of 2020 and ran it through the 40 years of weather data, starting in 1980,” Bracken said. “We are basically saying ‘here is how the current infrastructure would have performed under historical weather conditions.’”
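
    The paper applies its own drought definition to site-weighted generation data; the sketch below is only a minimal illustration of the general idea, using an assumed percentile cutoff and synthetic hourly capacity factors, to show how hours that are simultaneously low in wind and solar output can be grouped into drought events and their durations measured.

    ```python
    # Minimal sketch of compound energy drought detection (not the paper's exact
    # method): flag hours when both wind and solar output fall below a low
    # percentile of their own historical distributions, then measure how long
    # those conditions persist. Thresholds and data here are illustrative.
    import numpy as np

    def drought_durations(wind_cf, solar_cf, pct=10):
        """Lengths (in hours) of runs where both resources are simultaneously low."""
        wind_low = wind_cf < np.percentile(wind_cf, pct)
        solar_low = solar_cf < np.percentile(solar_cf, pct)
        in_drought = wind_low & solar_low          # compound condition
        durations, run = [], 0
        for flag in in_drought:
            if flag:
                run += 1
            elif run:                              # a drought just ended
                durations.append(run)
                run = 0
        if run:
            durations.append(run)
        return durations

    # Example with synthetic hourly capacity factors for one year
    rng = np.random.default_rng(0)
    wind, solar = rng.beta(2, 3, 8760), rng.beta(2, 3, 8760)
    print(max(drought_durations(wind, solar), default=0), "hours (longest synthetic drought)")
    ```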

    The researchers found that energy droughts can occur in any season across the continental U.S., though they vary widely in frequency and duration. In California, for instance, cloudy and windless conditions might last several days, whereas the same conditions might last for only a few hours in Texas. Utah, Colorado, and Kansas experience frequent energy droughts both over several-hour timescales as well as several-day timescales. The Pacific Northwest and Northeast, meanwhile, seem to experience energy droughts that last several hours more frequently than several days. The different timescales (hourly versus daily) will help inform the energy drought’s impact on the grid—will it last just a few hours, or several days?

    Overall, researchers found that the longest potential compound energy drought on an hourly timescale was 37 hours (in Texas), while the longest energy drought on a daily timescale was six days (in California).

    Energy drought at peak demand

    Knowing the where and how of energy droughts is just one piece of the puzzle, Bracken said. He also stressed that a drought of solar and wind power won’t necessarily cause an energy shortage. Grid operators can turn to other sources of energy like hydropower, fossil fuels, or energy transmitted from other regions of the U.S.

    But as the nation aims to move away from fossil fuels and rely more on solar and wind power, grid operators must understand whether energy droughts will occur during times when the demand for electricity might exceed supply. Climate change brings hotter summers and more intense winter storms, times when people not only use more energy to stay safe (for cooling or heating) but when access to electricity can mean life or death.

    To understand the possible connection between energy droughts and energy demand, the team mapped their historical, hypothetical generation data onto 40 years of historical energy demand data that also covered real power plants across the continent.

    The data showed that “wind and solar droughts happen during peak demand events more than you would expect due to chance,” Bracken said, meaning that windless, overcast periods coincided with high power demand more often than chance alone would predict. For now, Bracken cautions that the correlation does not prove causation.

    “This could be due to well-understood meteorological phenomena such as inversions suppressing wind and increasing temperatures, but further study is needed,” Bracken said.
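
    One simple way to illustrate the “more than chance” comparison (this is not the statistical test used in the study) is a permutation-style check: compare the observed overlap between drought hours and peak-demand hours against the overlaps obtained when one series is randomly shifted in time.

    ```python
    # Illustrative check of whether drought hours overlap peak-demand hours more
    # often than chance: compare the observed overlap to a null distribution
    # built by circularly shifting one series. Synthetic data, assumed method.
    import numpy as np

    def overlap_excess(drought, peak, n_shuffles=1000, seed=0):
        rng = np.random.default_rng(seed)
        observed = np.mean(drought & peak)
        null = np.array([
            np.mean(np.roll(drought, rng.integers(1, len(drought))) & peak)
            for _ in range(n_shuffles)
        ])
        # p-value: how often a random alignment matches or beats the observed overlap
        return observed, float(np.mean(null >= observed))

    rng = np.random.default_rng(1)
    base = rng.random(8760)
    drought = base < 0.10                        # synthetic, correlated indicators
    peak = (base + 0.05 * rng.random(8760)) < 0.12
    obs, p = overlap_excess(drought, peak)
    print(f"observed overlap fraction: {obs:.3f}, p-value vs. chance: {p:.3f}")
    ```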

    Energy storage for energy droughts

    Studying patterns in the frequency and duration of energy droughts will also help inform the deployment of long-duration energy storage projects, said Nathalie Voisin, an Earth scientist at PNNL and coauthor on the paper. The paper is the first to provide a uniform standard of what a compound energy drought is and how long it can last in different parts of the country.

    “We’re providing insight on how to adequately design and manage multi-day storage. So when you know an energy drought is going to last for five hours or five days, you can incentivize storage to be managed accordingly,” Voisin said.

    Next, Bracken and the team will extrapolate weather and demand data into the future to see how climate change will affect the frequency and duration of energy droughts. The team plans to model energy droughts all the way to the end of the century combined with evolving infrastructure.

    This research was funded by PNNL through its internal GODEEEP initiative.


  • Want Better AI? Get Input From a Real (Human) Expert

    Newswise — Can AI be trusted? The question pops up wherever AI is used or discussed—which, these days, is everywhere.

    It’s a question that even some AI systems ask themselves. 

    Many machine-learning systems create what experts call a “confidence score,” a value that reflects how confident the system is in its decisions. A low score signals some uncertainty about the recommendation; a high score indicates that the system, at least, is quite sure of its decision. Savvy humans know to check the confidence score when deciding whether to trust the recommendation of a machine-learning system.
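
    For readers unfamiliar with the term, the confidence score in many classifiers is simply the model’s largest predicted class probability. The toy example below illustrates that general concept; it is not the specific scoring used in the PNNL system.

    ```python
    # Generic illustration of a machine-learning "confidence score": the model's
    # top predicted class probability for a single sample.
    import numpy as np

    def confidence(class_probabilities):
        """class_probabilities: per-class scores for one sample, summing to 1."""
        return float(np.max(class_probabilities))

    print(confidence(np.array([0.92, 0.05, 0.03])))  # high confidence: 0.92
    print(confidence(np.array([0.40, 0.35, 0.25])))  # low confidence: 0.40
    ```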

    Scientists at the Department of Energy’s Pacific Northwest National Laboratory have put forth a new way to evaluate an AI system’s recommendations. They bring human experts into the loop to review how a machine-learning (ML) system performed on a set of data. The experts learn which types of data the system typically classifies correctly, and which data types lead to confusion and system errors. Armed with this knowledge, the experts then offer their own confidence score on future system recommendations.

    The result of having a human look over the shoulder of the AI system? Humans predicted the AI system’s performance more accurately.

    Minimal human effort—just a few hours spent evaluating some of the decisions made by the AI program—allowed researchers to vastly improve the program’s ability to assess its own decisions. In some of the team’s analyses, the accuracy of the confidence score doubled when a human provided it.

    The PNNL team presented its results at a recent meeting of the Human Factors and Ergonomics Society in Washington, D.C., part of a session on human-AI robot teaming.

    “If you didn’t develop the machine-learning algorithm in the first place, then it can seem like a black box,” said Corey Fallon, the lead author of the study and an expert in human-machine interaction. “In some cases, the decisions seem fine. In other cases, you might get a recommendation that is a real head-scratcher. You may not understand why it’s making the decisions it is.”

    The grid and AI

    It’s a dilemma that power engineers working with the electric grid face. Their decisions, based on reams of data that change every instant, keep the lights on and the nation running. But power engineers may be reluctant to turn over decision-making authority to machine-learning systems.

    “There are hundreds of research papers about the use of machine learning in power systems, but almost none of them are applied in the real world. Many operators simply don’t trust ML. They have domain experience—something that ML can’t learn,” said coauthor Tianzhixi “Tim” Yin.

    The researchers at PNNL, which has a world-class team modernizing the grid, took a closer look at one machine-learning algorithm applied to power systems. They trained the SVM (support-vector machine) algorithm on real data from the grid’s Eastern Interconnection in the U.S. The program looked at 124 events, deciding whether a generator was malfunctioning, or whether the data was showing other types of events that are less noteworthy.

    The algorithm was 85% reliable in its decisions. Many of its errors occurred when there were complex power bumps or frequency shifts. Confidence scores created with a human in the loop were a marked improvement over the system’s assessment of its own decisions. The human expert’s input predicted the algorithm’s decisions with much greater accuracy.
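
    The sketch below shows the general shape of such an experiment: a scikit-learn support-vector machine classifies synthetic “grid events” and reports its own probability-based confidence. The features, labels, and sizes are stand-ins rather than the Eastern Interconnection data, and the Expert-Derived Confidence score described next would come from a human reviewer rather than from the model.

    ```python
    # Illustrative sketch of the kind of classifier described above: an SVM that
    # labels grid events and exposes a probability usable as a confidence score.
    # Features and labels are synthetic stand-ins, not the study's dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(124, 6))                  # e.g., frequency/voltage features per event
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = generator malfunction (synthetic rule)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = SVC(probability=True).fit(X_train, y_train)

    machine_confidence = model.predict_proba(X_test).max(axis=1)  # the model's own score
    print("accuracy:", round(model.score(X_test, y_test), 2))
    print("mean machine confidence:", round(float(machine_confidence.mean()), 2))
    # An Expert-Derived Confidence (EDC) score would instead come from a human
    # reviewer who has seen where this model tends to fail.
    ```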

     

    More human, better machine learning

    Fallon and Yin call the new score an “Expert-Derived Confidence” score, or EDC score.

    They found that, on average, when humans weighed in on the data, their EDC scores predicted model behavior that the algorithm’s confidence scores couldn’t predict.

    “The human expert fills in gaps in the ML’s knowledge,” said Yin. “The human provides information that the ML did not have, and we show that that information is significant. The bottom line is that we’ve shown that if you add human expertise to the ML results, you get much better confidence.”

    The work by Fallon and Yin was funded by PNNL through an initiative known as MARS—Mathematics for Artificial Reasoning in Science. The effort is part of a broader effort in artificial intelligence at PNNL. The initiative brought together Fallon, an expert on human-machine teaming and human factors research, and Yin, a data scientist and an expert on machine learning.

    “This is the type of research needed to prepare and equip an AI-ready workforce,” said Fallon. “If people don’t trust the tool, then you’ve wasted your time and money. You’ve got to know what will happen when you take a machine learning model out of the laboratory and put it to work in the real world.

    “I’m a big fan of human expertise and of human-machine teaming. Our EDC scores allow the human to better assess the situation and make the ultimate decision.”

    # # #


  • Topology, Algebra, and Geometry Give Math Respect in Data Science

    By John Roach

    Newswise — In the computer vision field of object detection, deep learning models are trained to identify objects of interest within an image of a scene. For example, such models can be trained to detect viruses in microscopy images or pick out airplanes parked on tarmacs in overhead aerial imagery.

    “In many cases, like microscopy or overhead images, a user would want to ensure that the objects are found regardless of their orientation,” said Tegan Emerson, a senior data scientist and leader of the mathematics, statistics, and data science group at Pacific Northwest National Laboratory (PNNL). “However, this property is not inherent in all deep learning models.”

    In some cases, the deep learning model can pick out the airplanes with noses pointed north but fail to detect the airplanes pointed south, for instance.

    Emerson and her colleagues explored solutions to address this problem by applying the algebraic concept of group action to a deep learning model for object detection. Group action describes how things are changed under a collection of operations such as rotation. With these algebra-based architecture changes applied to the model, objects are more reliably detected in imagery no matter their orientation.

    “If you constrain the model to have this type of mathematical invariance to it, you’re able to maintain your ability to detect and appropriately identify the objects within your scene, which makes this a much more trustworthy tool for people to use,” Emerson said. “That matters in operational environments where a lot of our algorithms are going to be deployed.”
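
    A toy way to see the idea of invariance under a group of rotations (a sketch of the concept only, not the architectural changes the team made) is to average a detector’s score over all 90-degree rotations of an input image, so the result no longer depends on orientation.

    ```python
    # Toy illustration of rotation invariance via averaging over a group orbit
    # (the four 90-degree rotations). `score_fn` is a hypothetical single-image
    # scoring function; this is not the PNNL model architecture.
    import numpy as np

    def rotation_invariant_score(image, score_fn):
        rotations = [np.rot90(image, k) for k in range(4)]  # the C4 group orbit
        return float(np.mean([score_fn(r) for r in rotations]))

    # An intentionally orientation-sensitive score becomes invariant after averaging
    score_fn = lambda img: float(img[0, 0])        # depends on which corner is "top-left"
    image = np.arange(16.0).reshape(4, 4)
    print(rotation_invariant_score(image, score_fn))            # 7.5
    print(rotation_invariant_score(np.rot90(image), score_fn))  # 7.5 again
    ```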

    Giving math respect in data science

    In recent years, mathematicians were pushed to the sidelines of data science as the computing power and datasets used to train machine learning (ML) models grew exponentially, leading to a step-change in capabilities such as artificial intelligence (AI) systems that can generate fluid prose in natural language, noted Timothy Doster, a senior data scientist at PNNL.

    “The mathematics community felt a little behind the time as massive amounts of funding went into these computer science fields,” he said. “But now they’re seeing research around explainability or dependability of these algorithms and that’s where math can really come in and address these areas.”

    In 2022, Doster, Emerson, and PNNL data scientist colleague Henry Kvinge co-founded the Topology, Algebra, and Geometry in Data Science (TAG-DS) community to help spur interest in the application of math to address specific topics in data science and ML.

    The community hosts workshops and conferences as well as provides publishing opportunities to drive awareness of mathematically principled solutions to data science problems. Most recently, the team hosted the second annual TAG in ML workshop at the International Conference on Machine Learning (ICML) on July 28, 2023, in Honolulu, Hawaii, and attracted more than 200 participants.

    Part of the interest in the TAG-DS community stems from the growing complexity of ML systems, which operate on high-dimensional, complex datasets using models that have thousands to billions of learnable parameters, noted Kvinge.

    “Such settings transcend human intuition which begins to quickly degrade beyond three dimensions,” he said. “Modern topology, algebra, and geometry were designed to allow mathematicians to understand exotic spaces, making them natural toolboxes to investigate when studying state-of-the-art machine learning.”

    Proof of math in data science

    In some cases, the application of math to data science can improve the rigor of AI models trained with massive datasets and computer power. For example, the mathematical study of symmetry, or representation theory, is used in some of the models capable of predicting how proteins fold and twist into their three-dimensional shapes, according to Kvinge.

    Protein folding models help scientists understand the structure of proteins, which are the building blocks of life—they are molecular machines that play a fundamental role in the structure, function, and regulation of nearly every biological process.

    “We know that how a protein folds should not depend on its location in space nor its orientation, and consequently a deep learning model should ignore these factors of variation when processing representations of proteins,” he explained. “Building model architectures can be done far more accurately when you understand how to capture the symmetries intrinsic to three-dimensional space.”

    In other cases, mathematical techniques can improve the data used in more niche data science tasks. One example is using topological data analysis to extract shape-based features for ML models that probe the structure and properties of materials, such as the metal rods, tubes, and cubes that give cars and trucks their shape, strength, and fuel economy.

    “Topology is the study of shape and there is a widely used quote from a leader in the field that states, ‘Data has shape, shape has meaning’ and what shape means for different formats of data can be nuanced,” noted Emerson.

    In one study, researchers applied topology to scanning electron microscopy images that were used to support research and development in advanced manufacturing. In this case, white precipitates, or solid materials, that formed during a metal manufacturing process were visible throughout the image. By looking at the topology of the precipitates at multiple threshold values, the team was able to capture physically meaningful features, summarize the information, and use it as input to the ML model.
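
    As a rough illustration of that multi-threshold idea (a simplified stand-in for the study’s topological analysis), one can binarize a micrograph at several intensity levels and count the connected bright regions at each level, producing a small shape-based feature vector for an ML model.

    ```python
    # Rough sketch of multi-threshold shape features: binarize a grayscale image
    # at several thresholds and count connected bright regions (precipitates) at
    # each level. A simplified stand-in for the topological analysis described.
    import numpy as np
    from scipy import ndimage

    def threshold_features(image, thresholds=(0.3, 0.5, 0.7)):
        features = []
        for t in thresholds:
            _, n_regions = ndimage.label(image > t)  # connected bright regions
            features.append(n_regions)
        return features  # e.g., feed this vector into an ML model

    # Synthetic "micrograph" with smooth bright blobs
    rng = np.random.default_rng(0)
    img = ndimage.gaussian_filter(rng.random((128, 128)), sigma=4)
    img = (img - img.min()) / (img.max() - img.min())
    print(threshold_features(img))
    ```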

    “Part of the difference in the paradigm for TAG-DS both at PNNL and in the scientific community is that you’re not just trying to train a model. What you’re trying to do is build a solution,” said Emerson. “You want something that actually addresses a need or a way to support a human who is involved in the processing pipeline.”

    Growing the TAG-DS community

    Engagement with the TAG-DS community has more than doubled in its first year of existence, according to Doster. For example, the TAG-ML workshop at ICML in 2022 had about 40 published submissions. This year’s workshop received more than 90 submissions and included four keynotes by world leaders in geometric and topological deep learning, two poster sessions, six spotlight talks, and other activities.

    Looking forward, the group is planning to host more workshops at computer science and mathematics conferences and is aiming to host a standalone TAG-DS conference in 2025.

    According to Emerson, the ability of TAG-DS to increase the rigor, trustworthiness, and explainability of AI systems will only grow in importance as technologies such as generative AI become widespread.

    “From a national laboratory’s perspective with our interest for the nation, but also for the average person in daily life, the mathematical rigor that the TAG-DS community can bring to understanding the ways these tools can support you, when they will work, how they will fail, and when they are not an appropriate technique to be using is critical,” she said.

    ###

    About PNNL

    Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL’s News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.


  • Wind Forecast Improvement Project Saves Millions for Utilities

    Newswise — The wind doesn’t always blow where it’s needed. That’s the biggest hurdle in fitting wind energy into the nation’s portfolio of renewable energy. When the wind isn’t blowing, utility companies must turn to other electricity generators, such as solar or hydropower, or to fossil fuels, which the nation aims to use less of.

    The key to clearing this hurdle is accurate weather forecasts, but weather forecasting isn’t a perfect science. To make forecasts more accurate, scientists at the Pacific Northwest National Laboratory (PNNL) have teamed up with the National Oceanic and Atmospheric Administration (NOAA), along with universities and private industry. Through their work on the Wind Forecast Improvement Project (WFIP), this multiagency research has already helped save utility companies millions of dollars.

    “Wind energy is clean and low cost, but its one drawback is that it’s dependent on the fuel, which in this case is wind. And wind is not constant,” said Raghavendra Krishnamurthy, an Earth scientist at PNNL and principal investigator for WFIP. “With more accurate wind forecasts at turbine heights, utility companies can more efficiently balance their power generation from various sources, like wind, hydropower, or fossil fuels, and save money.”

    Forecasting Complications

    Utility companies depend on weather forecasts to prepare for the next day’s electricity generation, and inaccuracies in weather forecasts can cost millions. If wind is overpredicted (i.e., there was less wind than forecast), utilities must quickly pivot to other types of energy, which is costly and inefficient. If wind is underpredicted (i.e., there was more wind than forecast), utility companies would have already paid unnecessarily for potentially more costly energy, such as that from natural gas.
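
    A back-of-envelope sketch of why those errors are costly (the prices and volumes below are hypothetical, not WFIP figures): over-forecast hours force utilities to buy replacement power at short notice, while under-forecast hours mean money was already spent on energy the wind ended up providing.

    ```python
    # Hypothetical day-ahead imbalance cost of wind forecast error. Prices and
    # volumes are illustrative assumptions, not figures from the project.
    import numpy as np

    def imbalance_cost(forecast_mwh, actual_mwh, replacement_price=80.0, sunk_price=40.0):
        error = forecast_mwh - actual_mwh
        over = np.clip(error, 0, None)    # wind fell short of the forecast
        under = np.clip(-error, 0, None)  # more wind arrived than forecast
        return float(np.sum(over * replacement_price + under * sunk_price))

    forecast = np.array([100.0, 120.0, 90.0, 60.0])  # MWh per hour, hypothetical
    actual   = np.array([ 80.0, 130.0, 90.0, 40.0])
    print(f"${imbalance_cost(forecast, actual):,.0f} of avoidable cost over these four hours")
    ```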

    Forecasts come from the National Weather Service, which uses the High-Resolution Rapid Refresh (HRRR) model. The model incorporates data from weather sensors all over the United States about variables like wind, humidity, air pressure, and air temperature, and uses them to predict the winds for the next 48 hours.

    But variables like wind, air temperature, pressure and humidity change based on where wind farms are in the United States, which affects what kinds of weather patterns a wind farm experiences day-to-day. Some areas are dry, flat, and hot, while some areas are cold, wet, and mountainous. Some wind farms are placed in the ocean, which comes with a completely different set of temperature and humidity variables from land-based wind farms.

    WFIP helps model builders incorporate these regional nuances.

    Wind Forecast Improvements

    The team realized they had to study the weather across different regions and incorporate those findings to improve the model. “If you think of the model as a fish net, and weather phenomena like clouds and storms as the fish, the only fish you don’t catch are the ones getting through the net. The finer the net, the more fish you catch,” said Larry Berg, division director for the Atmospheric Sciences and Global Change Division at PNNL and a former investigator on the WFIP team. Studying regional data helps researchers see what is slipping through the “net” so they can refine the model and produce more accurate forecasts.

    In the project’s first phase, PNNL scientists, along with other partners at other DOE national laboratories, NOAA, universities, and private industry, took data from wind farms in northern Texas and the Great Plains in 2011 – 2012. In the project’s second phase, the WFIP2 team collected data from 2015 to 2017 from the Pacific Northwest’s Columbia River Gorge and basin. Here, mountains tower over near-sea-level basins and the Columbia River has cut a canyon between rocky cliffs.

    Researchers at NOAA used these data to improve the HRRR model, releasing the first updated version (called HRRR2) in 2016 and another (HRRR3) in 2018. With WFIP’s contributions, HRRR’s updates have improved weather modeling and led to significant savings. According to a 2022 paper in the Bulletin of the American Meteorological Society, utility companies likely saved more than $95 million per year after NOAA launched HRRR2 and $32 million after launching HRRR3.

    An additional paper published in 2022 in the Journal of Renewable and Sustainable Energy found that the improved models had the potential to save consumers across the U.S. more than $380 million.

    “The WFIP campaigns, and in particular WFIP2, provided a unique dataset that enabled us to improve our wind forecasts in the lower atmosphere markedly,” said David Turner, an atmospheric scientist at NOAA and manager of the agency’s Atmospheric Science for Renewable Energy program. “We have demonstrated that, if the energy community only used the HRRR for their day-ahead decisions on energy generation, then they would have saved hundreds of millions of dollars per year using more updated versions of HRRR.”

    The Future of WFIP

    The WFIP team is already planning for the future of the project, with WFIP3 starting this year to gather data from wind farms off the northeastern coast of the United States.

    “Offshore wind data is very sparse, and therefore we are not sure of the accuracy of the wind forecasts offshore,” Krishnamurthy said. “The next phase of WFIP will provide this necessary data, which will be made freely available to the research community and support the development of more accurate forecasts.”

    WFIP is supported by the Department of Energy’s Wind Energy Technologies Office and NOAA’s Atmospheric Science for Renewable Energy program.

     

     


  • Establishing Ethical Nanobiotechnology

    By Rebekah Orton

    Newswise — Prosthetics moved by thoughts. Targeted treatments for aggressive brain cancer. Soldiers with enhanced vision or bionic ears.

    These powerful technologies sound like science fiction, but they’re becoming possible thanks to nanoparticles.

    And, as with any great power, there comes great responsibility.

    “In medicine and other biological settings, nanotechnology is amazing and helpful, but it could be harmful if used improperly,” said Pacific Northwest National Laboratory (PNNL) chemist Ashley Bradley, part of a team of researchers who conducted a comprehensive survey of nanobiotechnology applications and policies.

    Their research, available now in Health Security, works to sum up the very large, active field of nanotechnology in biology applications, draw attention to regulatory gaps, and offer areas for further consideration.

    “In our research, we learned there aren’t many global regulations yet,” said Bradley. “And we need to create a common set of rules to figure out the ethical boundaries.”

    Nanoparticles, big differences

    Nanoparticles are clusters of molecules with different properties than large amounts of the same substances. In medicine and other biology applications, these properties allow nanoparticles to act as the packaging that delivers treatments through cell walls and across the difficult-to-cross blood-brain barrier.

    “You can think of the nanoparticles a little bit like the plastic around shredded cheese,” said PNNL chemist Kristin Omberg. “It makes it possible to get something perishable directly where you want it, but afterwards you’ve got to deal with a whole lot of substance where it wasn’t before.”

    Unfortunately, dealing with nanoparticles in new places isn’t straightforward. Bulk carbon is pencil lead; nano carbon conducts electricity. The same material may have different properties at the nanoscale, but most countries still regulate it the same as the bulk material, if the material is regulated at all.

    For example, zinc oxide, a material that was stable and unreactive as a pigment in white paint, is now accumulating in oceans when used as nanoparticles in sunscreen, warranting a call to create alternative reef-safe sunscreens. And although fats and lipids aren’t regulated, the researchers suggest which agencies could weigh in on regulations were fats to become after-treatment byproducts.

    The article also inventories national and international agencies, organizations, and governing bodies with an interest in understanding how nanoparticles break down or react in a living organism and the environmental life cycle of a nanoparticle. Because nanobiotechnology spans materials science, biology, medicine, environmental science, and technology, these disparate research and regulatory disciplines must come together, often for the first time, to fully understand the impact on humans and the environment.

    Dual use: Good for us, bad for us

    Like other quickly growing fields, there’s a time lag between the promise of new advances and the possibilities of unintended uses.

    “There were so many more applications than we thought there were,” said Bradley, who collected exciting nanobio examples such as Alzheimer’s treatment, permanent contact lenses, organ replacement, and enhanced muscle recovery, among others.

    The article also highlights concerns about crossing the blood-brain barrier, thought-initiated control of computers, and nano-enabled DNA editing where the researchers suggest more caution, questioning, and attention could be warranted. This attention spans everything from deep fundamental research and regulations all the way to what Omberg called “the equivalent of tattoo removal” if home-DNA splicing attempts go south.

    The researchers draw parallels to more established fields such as synthetic biology and pharmacology, which offer lessons from current concerns such as the unintended consequences of fentanyl and opioids. They believe these fields also offer examples of innovative coordination between science and ethics, such as synthetic biology’s iGEM student competition, as models for thinking about not just how to create new technologies, but also how to shape their use and control.

    Omberg said unusually enthusiastic early reviewers of the article contributed even more potential uses and concerns, demonstrating that experts in many fields recognize ethical nanobiotechnology is an issue to get in front of. “This is a train that’s going. It will be sad if 10 years from now, we haven’t figured how to talk about it.”

    Funding for the team’s research was supported by PNNL’s Biorisk Beyond the List National Security Directorate Objective.

    ###

    About PNNL

    Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL’s News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.


  • New nationwide modeling points to widespread racial disparities in urban heat stress

    Newswise — RICHLAND, Wash.— From densely built urban cores to sprawling suburbia, cities are complex. This complexity can lead to temperature hot spots within cities, with some neighborhoods (and their residents) facing more heat than others.

    Understanding this environmental disparity forms the spirit of new research led by scientists at the Department of Energy’s Pacific Northwest National Laboratory. In a new paper examining all major cities in the U.S., the authors find that the average Black resident is exposed to air that is warmer by 0.28 degrees Celsius relative to the city average. In contrast, the average white urban resident lives where air temperature is cooler by 0.22 degrees Celsius relative to the same average.

    The new work, published last week in the journal One Earth, involved a two-part effort. The study’s authors aimed to produce a more useful nationwide estimate of urban heat stress—a more accurate account of how our body responds to outdoor heat. By creating and comparing these estimates against demographic data, they also tried to better understand which populations are most exposed to urban heat stress.

    The findings reveal pervasive income- and race-based disparities within U.S. cities. Nearly all the U.S. urban population—94 percent, or roughly 228 million people—live in cities where summertime peak heat stress exposure disproportionately burdens the poor.

    The study’s authors also find that people who now live within historically redlined neighborhoods, where loan applicants were once denied on racially discriminatory grounds, would be exposed to higher outdoor heat stress than their neighbors living in originally non-redlined parts of the city. 

    The work also highlights shortcomings in the typical approach scientists take in estimating urban heat stress at these scales, which frequently relies on satellite data. This conventional satellite-based method can overestimate such disparities, according to the new work. As the world warms, the findings stand to inform urban heat response plans put forward by local governments who seek to help vulnerable groups. 

    What is heat stress? 

    The human body has evolved to operate within a relatively narrow temperature range. Raise your core body temperature beyond just six or seven degrees and drastic physiological consequences soon follow. Cellular processes break down, the heart is taxed, and organs begin to fail.

    Sweating helps. But the cooling power of sweating depends partly on how humid the environment is. When both heat and humidity are omnipresent and difficult to escape, the body struggles to adapt.

    How is heat stress measured? 

    To measure heat stress, scientists use a handful of indicators, many of which depend on air temperature and humidity. Weather stations provide such data. Because most weather stations are outside of cities, though, scientists often rely on other means to get some idea about urban heat stress, including using sensors on satellites.

    Those sensors infer the temperature of the land surface from measurements of thermal radiation. But such measurements fall short of delivering a full picture of heat stress, said lead author and Earth scientist TC Chakraborty. Measuring just the skin of the Earth, like the surface of a sidewalk or a patch of grass, said Chakraborty, offers only an idea of what it’s like to lie flat on that surface.

    “Unless you’re walking around barefoot or lying naked on the ground, you’re not really feeling that,” said Chakraborty. “Land surface temperature is, at best, a crude proxy of urban heat stress.” 

    Indeed, most of us are upright, moving through a world where air temperature and moisture dictate how heat actually feels. And these satellite data are only available for clear-sky days—another limiting factor. More complete and physiologically relevant estimates of heat stress incorporate a blend of factors, which models can provide, said Chakraborty.

    To better understand differences between satellite-derived land surface temperature and ambient heat exposure within cities, Chakraborty’s team examined 481 urbanized areas across the continental United States using both satellites and model simulations.

    NASA’s Aqua satellite provided the land surface temperature, and through model simulations that account for urban areas, the authors generated nationwide estimates of all variables required to calculate moist heat stress. Two such metrics of heat stress—the National Weather Service’s heat index and the Humidex, often used by Canadian meteorologists—allowed the scientists to capture the combined impacts of air temperature and humidity on the human body.
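
    For reference, the Humidex combines air temperature and humidity (through the dew point) into a single apparent-temperature value. The standard Environment Canada formulation is sketched below as an illustration of the metric, not of the study’s processing pipeline.

    ```python
    # Humidex: a heat-stress metric combining air temperature and humidity.
    # Shown to illustrate the metric itself, not the study's data pipeline.
    import math

    def humidex(air_temp_c, dew_point_c):
        dew_k = dew_point_c + 273.16
        # water vapour pressure (hPa) derived from the dew point
        e = 6.11 * math.exp(5417.7530 * (1.0 / 273.16 - 1.0 / dew_k))
        return air_temp_c + 0.5555 * (e - 10.0)

    # A hot, humid afternoon versus the same air temperature with drier air
    print(round(humidex(32.0, 24.0), 1))  # muggy: roughly 43
    print(round(humidex(32.0, 10.0), 1))  # drier: a much lower apparent temperature
    ```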

    They then identified heat stress hotspots across the country for summer days between 2014 and 2018. Overlaying maps of both historically redlined neighborhoods and census tracts, the team identified relationships between heat exposure and communities.

    How is heat distributed within cities?

    Residents in poorer neighborhoods often face greater heat stress. And a greater degree of income inequality in any given city often means greater heat stress exposure for its poorer residents.

    Most U.S. cities, including heavily populated cities like New York, Los Angeles, Chicago, and Philadelphia, show this disparity. But the relationship between heat stress and race-based residential segregation is even more stark. 

    Roughly 87.5 percent of the cities studied show that Black populations live in parts of the city with higher land surface temperatures, warmer air, and greater moist heat stress. Moreover, the association between the degree of heat stress disparity and the degree of segregation between white and non-white populations across cities is particularly striking, said Chakraborty.

    “The majority—83 percent—of non-white U.S. urban residents live in cities where outdoor moist heat stress disproportionately burdens them,” said Chakraborty. “Further, higher percentages of all races other than white are positively correlated with greater heat exposure no matter which variable you use to assess it.”

    In the 1930s, the U.S. federal government’s Home Owners’ Loan Corporation graded neighborhoods in an effort to rank the suitability of real estate investments. In this practice, known as “redlining,” lower grades (and consequently fewer loans) went to neighborhoods composed of poorer and minority groups. The authors find that these redlined neighborhoods still show worse environmental conditions.

    Neighborhoods with lower ratings face higher heat exposure than their non-redlined neighbors. Neighborhoods with higher ratings, in contrast, generally get less heat exposure. 

    This is consistent with previous research on originally redlined urban neighborhoods showing lower tree cover and higher land surface temperature. Chakraborty, however, notes that using land surface temperature would generally overestimate these disparities across neighborhood grades compared to using air temperature or heat index.

    “Satellites give us estimates of land surface temperature, which is a different variable from the temperature we feel while outdoors, especially within cities,” said Chakraborty. “Moreover, the physiological response to heat also depends on humidity, which satellites cannot directly provide, and urbanization also modifies.”

    What can be done?

    Planting more trees often comes up as a potential solution to heat stress, said Chakraborty. But densely built urban cores, where poorer and minority populations in the U.S. often live, have limited space for trees. And many previous estimates of vegetation’s potential to cool city surroundings are also based solely on land surface temperature—they are perhaps prone to similar overestimation, the authors suggest.

    More robust measurements of urban heat stress would help, they added. Factors like wind speed and solar insolation contribute to how heat actually affects the human body. But those factors are left out of most scientific assessments of urban heat stress because they are difficult to measure or model at neighborhood scales.

    In addition to Chakraborty, PNNL authors of the new work include Yun Qian. Andrew Newman at the National Center for Atmospheric Research, Angel Hsu at the University of North Carolina-Chapel Hill, and Glenn Sheriff at Arizona State University are also authors. This work was supported by DOE’s Office of Science and the National Institutes of Health.


  • Sustaining U.S. Nuclear Power Plants Could be Key to Decarbonization

    Newswise — Nuclear power is the single largest source of carbon-free energy in the United States and currently provides nearly 20 percent of the nation’s electrical demand. Many analyses have investigated the potential of future nuclear energy contributions in addressing climate change. However, few assess the value of existing nuclear power reactors.

    Research led by Pacific Northwest National Laboratory (PNNL) Earth scientist Son H. Kim with the Joint Global Change Research Institute (JGCRI), a partnership between PNNL and the University of Maryland, has added insight to the scarce literature and is the first to evaluate nuclear energy for meeting deep decarbonization goals. Kim sought to answer the question: Just how much do our existing nuclear reactors contribute to the mission of meeting the country’s climate goals, both now and if their operating licenses were extended?

    As the world races to discover solutions for reaching net zero, Kim’s report quantifies the economic value of bringing the existing nuclear fleet into the year 2100 and outlines its significant contributions in limiting global warming.

    Plants slated to close by 2050 could be among the most important players in a challenge that requires all carbon-free technology solutions that are available—emerging and existing—the report finds. New nuclear technology also has a part to play, and its contributions could be boosted by driving down construction costs.  

    “Even modest reductions in capital costs could bring big climate benefits,” said Kim. “Significant effort has been incorporated into the design of advanced reactors to reduce the use of all materials in general, such as concrete and steel, because that directly translates into reduced costs and carbon emissions.”

    Nuclear power reactors face an uncertain future

    The nuclear power fleet in the United States consists of 93 operating reactors across 28 states. Most of these plants were constructed and deployed between 1970 and 1990. This means half of the fleet has outlived its original operating license lifetime of 40 years. While most reactors have had their licenses renewed for an additional 20 years, and some for yet another 20, the total number of reactors that will receive a lifetime extension to operate a full 80 years from deployment is uncertain.

    Other countries also rely on nuclear energy. In France, for example, nuclear energy provides 70 percent of the country’s power supply. France and other countries will also have to consider whether to extend the lifetimes of their reactors, retire them, or build new, modern ones. However, the U.S. faces the potential retirement of a bulk of its reactors in a short period of time—this could have a far stronger impact than the staggered closures other countries may experience.

    “Our existing nuclear power plants are aging and with their current 60-year lifetimes, nearly all of them will be gone by 2050. It’s ironic. We have a net zero goal to reach by 2050, yet our single largest source of carbon-free electricity is at risk of closure,” said Kim.

    Exploring scenarios of lifetime extensions for nuclear power reactors

    Kim has built computational models that explore the interplay between economic processes, energy demand, and Earth’s climate since joining PNNL and JGCRI in 1995, when he was a doctoral intern with a fresh PhD in nuclear engineering. At JGCRI, researchers explore interactions between human, energy, and environmental systems to provide data for managing risks and analyzing options. His research is inspired by a drive to solve the energy and environmental crisis using modeling capabilities and tools like the Global Change Analysis Model (GCAM), developed at PNNL.

    Kim used GCAM to model multiple scenarios of extending the lifetime of the existing nuclear fleet into 2100. The article, published in Nuclear Technology, put the value of lifetime license extensions from 40 to 100 years at $330 billion to $500 billion in mitigation cost savings under a scenario that limits global warming to 2°C. Mitigation cost savings, or the carbon value, are the dollars saved by reducing greenhouse gas emissions. Legacy nuclear reactors alone have a carbon value of $500 billion if operational for 100 years. Every gigawatt of capacity, or roughly one nuclear power reactor, translates to $5 billion saved later. Because that gigawatt is produced without any carbon emitted into Earth’s atmosphere, no money needs to be spent to mitigate its effects.

    Maintaining existing nuclear power plants avoids replacing reactors with electricity sources that produce carbon emissions. In states where nuclear reactors have been shut down, carbon emissions have increased from replacing the carbon-free electricity with natural gas-generated electricity.

    Kim determined that lifetime extensions of existing nuclear power reactors from 60 to 80 years, without adding new nuclear capacity, contributed a reduction of approximately 0.4 gigatons of carbon dioxide (GtCO2) emissions per year by 2050. The total cumulative difference in CO2 emissions between 2020 and 2100, in a scenario with lifetime extensions and future deployment of nuclear power plants (as compared to a scenario with a moratorium on new nuclear power plants), amounts to as much as 57 GtCO2.

    How much is 57 GtCO2? According to the International Energy Agency, U.S. carbon emissions in 2022 were 4.7 Gt, which means nuclear energy could save approximately 12 years’ worth of carbon emissions.
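
    The rough consistency of these figures can be checked with simple arithmetic (rounded, illustrative numbers, not calculations from the paper):

    ```python
    # Back-of-envelope checks of the figures quoted above (rounded approximations).
    fleet_gw = 95            # ~93 reactors at roughly 1 GW each (approximation)
    value_per_gw = 5e9       # dollars of mitigation cost avoided per gigawatt
    print(f"fleet carbon value: about ${fleet_gw * value_per_gw / 1e9:.0f} billion")

    cumulative_gt = 57       # avoided emissions through 2100, GtCO2
    us_2022_gt = 4.7         # U.S. CO2 emissions in 2022, Gt
    print(f"equivalent to about {cumulative_gt / us_2022_gt:.0f} years of U.S. emissions")
    ```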

    An Intergovernmental Panel on Climate Change report on nuclear energy stated, “Nuclear power is therefore an effective greenhouse gas (GHG) mitigation option, especially through license extensions of existing plants enabling investments in retro-fitting and upgrading.”

    In a follow-on report to his research, Kim addresses the additional savings potential of driving down the capital costs of building new nuclear power plants.

    Removing the uncertainty in nuclear power costs can increase emissions savings

    Building new nuclear power plants is expensive and construction takes a long period of time. The largest costs are often capital costs: the one-time price paid to build new structures and equipment.

    Advanced reactors—including small modular reactors and microreactors—are being developed with new technologies, enhanced security features, smaller physical footprints, and more flexible deployment options. They are expected to play an important role in the future U.S. electricity system and carbon mitigation efforts.

    “One of most important attributes of small modular reactors and microreactors is the reduced construction time,” Kim said. “SMRs and microreactors will be factory fabricated and delivered to site on trucks, and the uncertainty associated with financing cost should be reduced or eliminated.”

    Kim used GCAM to investigate a range of nuclear plant capital costs with scenarios of alternative carbon mitigation policies, and U.S. economy-wide net-zero emission goals by 2050, 2060, and 2070.

    Among the multiple findings in the report for DOE’s Office of Nuclear Energy, Kim found that an aggressive reduction of nuclear construction costs has a clear and pronounced impact on the expanded deployment of nuclear power under all scenarios, even without an explicit carbon mitigation policy.

    Continuing to generate electricity while removing all emissions of greenhouse gases by mid-century is a difficult challenge. “We must utilize all carbon-free technologies that are available to us,” said Kim, “and one of the great values of nuclear energy is that it doesn’t emit carbon while it’s generating power.”

    ###

    About PNNL

    Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL’s News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.


  • Duling Named Associate Lab Director at PNNL

    Newswise — RICHLAND, Wash.—Joel W. Duling has been named associate laboratory director for Operational Systems at Pacific Northwest National Laboratory.

    Duling joined PNNL last August as chief projects officer for the Laboratory’s Operational Systems Directorate and was named acting associate laboratory director for OSD in January.

    In his new role, Duling will be responsible for managing PNNL’s facilities and infrastructure; environment, health, safety, and security programs; project management; and nuclear operations. He also will steward PNNL’s 10-year, $1.2-billion campus development plan and guide the Laboratory’s efforts to achieve net-zero emissions.

    “Throughout his career, Joel has demonstrated sound decision-making; a safety-first team orientation; and the ability to build effective, long-lasting stakeholder relationships,” said PNNL Director Steven Ashby in making the announcement.

    “I am proud to be leading such an outstanding group of individuals committed to supporting our nation’s priorities,” Duling added. “It’s an exciting time to be at PNNL with more than 100 campus modernization projects underway. Among our highest priorities is construction of the Grid Storage Launchpad, a $75-million research facility that will serve as a collaborative national center for validating and accelerating new, clean energy storage technologies,” said Duling of the construction project that is nearing completion.

    Duling has more than 35 years of leadership and experience in facility infrastructure operations, project management, environmental compliance, high-hazard nuclear operations and defense manufacturing. 

    Prior to joining PNNL, he was president of BWXT’s Nuclear Operations Group, Inc., a Lynchburg, Virginia-based subsidiary of BWX Technologies, Inc., which develops and manufactures nuclear reactor components for the commercial industry and U.S. government, including Navy submarines and aircraft carriers.

    Previously, Duling served as president of BWXT Nuclear Fuel Services, Inc., as vice president of production at the Y-12 National Security Complex in Tennessee, and in various leadership roles for Battelle, BWXT and previous contractors at the Idaho National Laboratory.

    Duling earned a bachelor’s degree in biophysical systems/chemistry from Northern Michigan University in Marquette, and an MBA from Auburn University in Alabama.


  • Cybersecurity Defenders Are Expanding Their AI Toolbox

    Newswise — Scientists have taken a key step toward harnessing a form of artificial intelligence known as deep reinforcement learning, or DRL, to protect computer networks.

    When faced with sophisticated cyberattacks in a rigorous simulation setting, deep reinforcement learning was effective at stopping adversaries from reaching their goals up to 95 percent of the time. The outcome offers promise for a role for autonomous AI in proactive cyber defense.

    Scientists from the Department of Energy’s Pacific Northwest National Laboratory documented their findings in a research paper and presented their work Feb. 14 at a workshop on AI for Cybersecurity during the annual meeting of the Association for the Advancement of Artificial Intelligence in Washington, D.C.

    The starting point was the development of a simulation environment to test multistage attack scenarios involving distinct types of adversaries. Creation of such a dynamic attack-defense simulation environment for experimentation itself is a win. The environment offers researchers a way to compare the effectiveness of different AI-based defensive methods under controlled test settings.

    Such tools are essential for evaluating the performance of deep reinforcement learning algorithms. The method is emerging as a powerful decision-support tool for cybersecurity experts—a defense agent with the ability to learn, adapt to quickly changing circumstances, and make decisions autonomously. While other forms of AI are standard for detecting intrusions or filtering spam messages, deep reinforcement learning expands defenders’ abilities to orchestrate sequential decision-making plans in their daily face-off with adversaries.

    Deep reinforcement learning offers smarter cybersecurity, the ability to detect changes in the cyber landscape earlier, and the opportunity to take preemptive steps to scuttle a cyberattack.

     

    DRL: Decisions in a broad attack space

    “An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.”

    DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment need to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost.

    It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their work gets negative reinforcement, like having a digital device taken away.

    “It’s the same concept in reinforcement learning,” Chatterjee said. “The agent can choose from a set of actions. With each action comes feedback, good or bad, that becomes part of its memory. There’s an interplay between exploring new opportunities and exploiting past experiences. The goal is to create an agent that learns to make good decisions.”
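
    In code, that reward-and-memory loop looks roughly like tabular Q-learning, the simplest relative of the deep reinforcement learning used in the study. The environment below is a hypothetical toy, not the team’s simulation, and the agent is far simpler than their DQN defender.

    ```python
    # Minimal tabular Q-learning loop: good outcomes (blocking an attack) earn a
    # positive reward, bad outcomes (the adversary exfiltrating data) a negative
    # one, and the agent's table of action values is its "memory." Toy example.
    import numpy as np

    n_states, n_actions = 8, 4          # toy sizes: attack stages x defender mitigations
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.2
    rng = np.random.default_rng(0)

    def step(state, action):
        """Hypothetical environment: returns (next_state, reward, done)."""
        blocked = rng.random() < 0.3 + 0.1 * action   # better actions block more often
        if blocked:
            return 0, 1.0, True                       # attack stopped: positive reward
        next_state = min(state + 1, n_states - 1)
        done = next_state == n_states - 1             # adversary reached exfiltration
        return next_state, (-1.0 if done else 0.0), done

    for episode in range(500):
        state, done = 0, False
        while not done:
            explore = rng.random() < epsilon
            action = int(rng.integers(n_actions)) if explore else int(Q[state].argmax())
            next_state, reward, done = step(state, action)
            # reinforce good outcomes, discourage bad ones
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print("learned preferred mitigation per attack stage:", Q.argmax(axis=1))
    ```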

     

    OpenAI Gym and MITRE ATT&CK

    The team used an open-source software toolkit known as OpenAI Gym as a basis to create a custom and controlled simulation environment to evaluate the strengths and weaknesses of four deep reinforcement learning algorithms.

    The team used the MITRE ATT&CK framework, developed by MITRE Corp., and incorporated seven tactics and 15 techniques deployed by three distinct adversaries. Defenders were equipped with 23 mitigation actions to try to halt or prevent the progression of an attack.

    Stages of the attack included tactics of reconnaissance, execution, persistence, defense evasion, command and control, collection and exfiltration (when data is transferred out of the system). An attack was recorded as a win for the adversary if they successfully reached the final exfiltration stage.

    “Our algorithms operate in a competitive environment—a contest with an adversary intent on breaching the system,” said Chatterjee. “It’s a multistage attack, where the adversary can pursue multiple attack paths that can change over time as they try to go from reconnaissance to exploitation. Our challenge is to show how defenses based on deep reinforcement learning can stop such an attack.”

     

    DQN outpaces other approaches

    The team trained defensive agents based on four deep reinforcement learning algorithms: DQN (Deep Q-Network) and three variations of what’s known as the actor-critic approach. The agents were trained with simulated data about cyberattacks, then tested against attacks that they had not observed in training.

    DQN performed the best.

    • Least sophisticated attacks (based on varying levels of adversary skill and persistence): DQN stopped 79 percent of attacks midway through attack stages and 93 percent by the final stage.
    • Moderately sophisticated attacks: DQN stopped 82 percent of attacks midway and 95 percent by the final stage.
    • Most sophisticated attacks: DQN stopped 57 percent of attacks midway and 84 percent by the final stage—far higher than the other three algorithms.

    “Our goal is to create an autonomous defense agent that can learn the most likely next step of an adversary, plan for it, and then respond in the best way to protect the system,” Chatterjee said.

    Despite the progress, no one is ready to entrust cyber defense entirely to an AI system. Instead, a DRL-based cybersecurity system would need to work in concert with humans, said coauthor Arnab Bhattacharya, formerly of PNNL.

    “AI can be good at defending against a specific strategy but isn’t as good at understanding all the approaches an adversary might take,” Bhattacharya said. “We are nowhere near the stage where AI can replace human cyber analysts. Human feedback and guidance are important.”

    In addition to Chatterjee and Bhattacharya, authors of the AAAI workshop paper include Mahantesh Halappanavar of PNNL and Ashutosh Dutta, a former PNNL scientist. The work was funded by DOE’s Office of Science. Some of the early work that spurred this specific research was funded by PNNL’s Mathematics for Artificial Reasoning in Science initiative through the Laboratory Directed Research and Development program.

    # # #


    Pacific Northwest National Laboratory

    Source link

  • New Sodium, Aluminum Battery Aims to Integrate Renewables for Grid Resiliency

    New Sodium, Aluminum Battery Aims to Integrate Renewables for Grid Resiliency


    Newswise — RICHLAND, Wash.—A new battery design could help ease integration of renewable energy into the nation’s electrical grid at lower cost, using Earth-abundant metals, according to a study just published in Energy Storage Materials. A research team, led by the Department of Energy’s Pacific Northwest National Laboratory, demonstrated that the new design for a grid energy storage battery built with the low-cost metals sodium and aluminum provides a pathway towards a safer and more scalable stationary energy storage system.

    “We showed that this new molten salt battery design has the potential to charge and discharge much faster than other conventional high-temperature sodium batteries, operate at a lower temperature, and maintain an excellent energy storage capacity,” said Guosheng Li, a materials scientist at PNNL and the principal investigator of the research. “We are getting similar performance with this new sodium-based chemistry at over 100 °C [212 °F] lower temperatures than commercially available high-temperature sodium battery technologies, while using a more Earth-abundant material.”

    More energy storage delivered

    Imre Gyuk, director of the Energy Storage Program in DOE’s Office of Electricity, which supported this research, noted, “This battery technology, which is built with low-cost, domestically available materials, brings us one step closer toward meeting our nation’s clean energy goals.”

    The new sodium-based molten salt battery uses two distinct reactions. The team previously reported a neutral molten salt reaction. The new discovery shows that this neutral molten salt can undergo a further reaction that converts it into an acidic molten salt. Crucially, this second, acidic reaction mechanism increases the battery’s capacity: after 345 charge/discharge cycles at high current, cells using this mechanism retained 82.8 percent of their peak charge capacity.

    The energy that a battery can deliver in the discharge process is called its specific energy density, expressed in watt-hours per kilogram (Wh/kg). Although the battery is in early-stage, or “coin cell,” testing, the researchers speculate that it could reach a practical energy density of up to 100 Wh/kg. In comparison, the energy density of the lithium-ion batteries used in commercial electronics and electric vehicles is around 170–250 Wh/kg. However, the new sodium-aluminum battery design has the advantage of being inexpensive and easy to produce in the United States from much more abundant materials.
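
    For a sense of what that figure means, specific energy is simply delivered energy divided by cell mass. The short calculation below uses hypothetical round numbers chosen only to illustrate the arithmetic; none of these values come from the study.

        # Illustrative only: hypothetical voltage, capacity, and mass, not values from the paper.
        average_voltage_v = 1.6          # hypothetical average discharge voltage (V)
        delivered_capacity_ah = 0.625    # hypothetical delivered capacity (Ah)
        cell_mass_kg = 0.010             # hypothetical 10-gram cell

        energy_wh = average_voltage_v * delivered_capacity_ah      # 1.0 Wh delivered
        specific_energy_wh_per_kg = energy_wh / cell_mass_kg       # 100 Wh/kg
        print(f"Specific energy: {specific_energy_wh_per_kg:.0f} Wh/kg")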

    “With optimization, we expect the specific energy density could reach even higher and the cycle life could last even longer,” added Li.

    Sodium battery shows its mettle

    Indeed, PNNL scientists collaborated with colleagues at the U.S.-based renewable energy pioneer Nexceris to assemble and test the battery. Nexceris, through its new business Adena Power, supplied its patented solid-state, sodium-based electrolyte to PNNL to test the battery’s performance. This crucial battery component allows the sodium ions to travel between the negative (anode) and positive (cathode) sides of the battery as it charges and discharges.

    “Our primary goal for this technology is to enable low-cost, daily shifting of solar energy into the electrical grid over a 10- to 24-hour period,” said Vince Sprenkle, a PNNL battery technology expert with more than 30 patented designs for energy storage systems and associated technology. “This is a sweet spot where we can start to think about integrating higher levels of renewables into the electrical grid to provide true grid resiliency from renewable resources such as wind and solar power.”

    Sprenkle was part of the team that developed this battery’s new flexible design, which also shifted the battery from a traditional tubular shape to a flat, scalable one that can more easily be stacked and expanded as the technology develops from coin-sized batteries to a larger grid-scale demonstration size. More importantly, this flat cell design allows the cell capacity to be increased simply by using a thicker cathode, which the researchers leveraged in this work to demonstrate a triple-capacity cell with a sustained discharge of 28.2 hours under laboratory conditions.

    Most current battery technologies, including lithium-ion batteries, are well suited for short-term energy storage. Meeting the demand for 10-plus hours of energy storage will require the development of new, low-cost, safe, and long-duration battery concepts beyond current state-of-the-art battery technologies. This research provides a promising lab-scale demonstration toward that goal.

    Variation on a grid resilience theme

    The ability to store energy generated by renewable sources and release it on demand to the electrical grid has driven rapid advances in battery technology, with many new designs competing for attention and customers. Each new variation must satisfy the demands of its own niche use. Some batteries, such as those based on PNNL’s freeze-thaw battery design, are capable of storing energy generated seasonally for months at a time.

    Compared with a seasonal battery, this new design is especially adept at short- to medium-term grid energy storage over 12 to 24 hours. It is a variation of what’s called a sodium-metal halide battery. A similar design employing a nickel cathode has proven effective at commercial scale and is already commercially available.

    “We have eliminated the need for nickel, a relatively scarce and expensive element, without sacrificing battery performance,” said Li. “Another advantage of using aluminum over nickel is that the aluminum cathode charges more quickly, which is crucial to enable the longer discharge duration demonstrated in this work.”

    With this milestone reached, the team is focusing on further improvements to increase the discharge duration, which could greatly improve grid flexibility for greater incorporation of renewable power sources.

    And because the new battery operates at a lower temperature, it can be manufactured with inexpensive materials rather than the more complex and expensive components and processes required by conventional high-temperature sodium batteries, said David Reed, a PNNL battery expert and study co-author.

    More grid energy storage at lower cost

    In 2023, the state of the art for grid energy storage using lithium-ion batteries is about four hours of storage capacity, said Sprenkle. “This new system could significantly increase the amount of stored energy capacity if we can reach the expected cost targets for materials and manufacturing,” he added.

    As part of the study, the researchers estimated that a sodium-aluminum battery design based on inexpensive raw materials could cost just $7.02 per kWh for the active materials. Through optimization and increasing the practical energy density, they project that this cost could be lowered even further. This promising low-cost, grid-scale storage technology could enable intermittent renewables like wind and solar power to contribute more dynamically to the nation’s electrical grid.

    Neil Kidner, a study co-author and president of Adena Power, a sodium solid-state battery manufacturer, is collaborating with PNNL to advance sodium-based battery technology. “This research demonstrates that our sodium electrolyte works not only with our patented technology but also with a sodium-aluminum battery design,” he said. “We look forward to continuing our partnership with the PNNL research team towards advancing sodium battery technology.”

    The research was supported by the DOE Office of Electricity and the International Collaborative Energy Technology R&D Program of the Korea Institute of Energy Technology Evaluation and Planning. The electrolyte development was supported by a DOE Small Business Innovation Research program. The nuclear magnetic resonance measurements were made in EMSL, Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility sponsored by the Biological and Environmental Research program.

    Learn more about PNNL’s grid modernization research, and the Grid Storage Launchpad, opening in 2024.

    About PNNL

    Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL’s News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.


    Pacific Northwest National Laboratory

    Source link

  • Tracking Explosions with Toughened-Up Tracers

    Tracking Explosions with Toughened-Up Tracers


    Newswise — What happens in an explosion? Where do the products of that explosion go following the blast? These questions are often difficult to solve. New rugged tracer particles, developed by Pacific Northwest National Laboratory (PNNL) researchers, can provide some answers.

    Beyond explosives, many industries may be interested in tracking particulates through harsh environments, which often involve high pressures, high temperatures, and a variety of chemicals.

    “Lots of chemical tracers exist,” said Lance Hubbard, a materials scientist supporting PNNL’s national security research. “The challenge is developing one that can survive harsh environments. It took a few years to convince anyone we could do it.”

    Hubbard and his team, along with fellow PNNL researchers April Carman and Michael Foxe, created a tracer that could not only survive but thrive in extreme conditions. Their work was published in MRS Communications.

    Quantum dots and water-soaked glass

    Organic materials, such as fluorescent dyes, are commonly used as tracers for water leaks and for tracking cells in biological experiments. While they work well in those conditions, they aren’t well suited to tracing material in explosions. Their problem?

    “They burn,” said Hubbard.

    Instead, Hubbard and his team focused on inorganic materials to develop their rugged tracers—particularly quantum dots. Though they fared much better than organic materials in harsh conditions, the research team still needed to protect the quantum dots from the extreme conditions of a chemical explosion.

    “Finding a way to protect the tracer while still maintaining its luminescent intensity proved to be difficult,” said Carman.

    The tracer’s brightness—or luminescent intensity—can be greatly affected by the local environment. Some protective methods can diminish the brightness, making the tracer more difficult to detect. The team focused on using hydrated silica—“basically water-soaked glass” as Hubbard puts it—to protect the quantum dots and maintain their brightness.

    Though previous silica coating methods significantly decreased tracer luminescence, the coated tracers designed by the PNNL team were almost as bright as the original quantum dots. Further testing showed that the particles could survive for long periods of time through a range of pH conditions.

    “We knew we created something special when we saw our results,” said Hubbard.

    Making tracers tunable and mass-producible

    Special is one thing, but usable at commercial scale is another. Luckily for the PNNL team, their synthesis method was designed from the start to be fully scalable, capable of producing quantities from kilograms to potentially tons per day.

    Not only can they make large amounts of the tracer, but they can customize them as well. “We can tune both the tracer’s size and color to any specificity,” said Foxe. “The tracer can be fine-tuned to create a mimic of the mass or material that is being tracked. We can also use a variety of sizes with different colors to visualize how an explosion affects particles of different sizes.”

    The tracers are rugged enough to be deployed in harsh environments to track mass and improve scientists’ understanding of environmental fate and transport. They can function under conditions that are too severe for traditional tracers—like in oil and gas refineries or geothermal plants. With tunable parameters and an easy-to-use system, these tracers have many potential applications for tracking material fate and transport in harsh environments.

    Persistence pays off

    The research has now grown from a small initial investment by the National Nuclear Security Administration (NNSA) Defense Nuclear Nonproliferation Research and Development program into several related projects.

    “We are glad we could keep pursuing this project despite initial skepticism,” said Carman. “We are also thrilled to see where it leads us next.”

    Additional PNNL authors on this research are Clara Reed, Anjelica Bautista, Maurice Lonsway, Nicolas Uhnak, Ryan Sumner, Trevor Cell, Erin Kinney, Nathaniel Smith, and Caleb Allen. Scientists and engineers from Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Mission Support and Test Services LLC, and Sandia National Laboratories also contributed to the project.

    ###

    About PNNL

    Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. For more information on PNNL, visit PNNL’s News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.


    Pacific Northwest National Laboratory

    Source link