ReportWire

Tag: Nature (journal)

  • Lithium-sulfur batteries are one step closer to powering the future

    Newswise — With a new design, lithium-sulfur batteries could reach their full potential.

    Batteries are everywhere in daily life, from cell phones and smart watches to the increasing number of electric vehicles. Most of these devices use well-known lithium-ion battery technology. And while lithium-ion batteries have come a long way since they were first introduced, they have some familiar drawbacks as well, such as short lifetimes, overheating and supply chain challenges for certain raw materials.

    Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are researching solutions to these issues by testing new materials in battery construction. One such material is sulfur. Sulfur is extremely abundant and cost effective and can hold more energy than traditional ion-based batteries.

    In a new study, researchers advanced sulfur-based battery research by creating a layer within the battery that adds energy storage capacity while nearly eliminating a long-standing problem with sulfur batteries that causes corrosion.

    “These results demonstrate that a redox-active interlayer could have a huge impact on Li-S battery development. We’re one step closer to seeing this technology in our everyday lives.” — Wenqian Xu, a beamline scientist at APS

    A promising battery design pairs a sulfur-containing positive electrode (cathode) with a lithium metal negative electrode (anode). In between those components is the electrolyte, or the substance that allows ions to pass between the two ends of the battery.

    Early lithium-sulfur (Li-S) batteries did not perform well because sulfur species (polysulfides) dissolved into the electrolyte, causing its corrosion. This polysulfide shuttling effect negatively impacts battery life and lowers the number of times the battery can be recharged.

    To prevent this polysulfide shuttling, previous researchers tried placing a redox-inactive interlayer between the cathode and anode. The term ​“redox-inactive” means the material does not undergo reactions like those in an electrode. But this protective interlayer is heavy and dense, reducing energy storage capacity per unit weight for the battery. It also does not adequately reduce shuttling. This has proved a major barrier to the commercialization of Li-S batteries.

    To address this, researchers developed and tested a porous sulfur-containing interlayer. Laboratory tests showed that the initial capacity of Li-S cells with this redox-active interlayer was about three times that of cells with an inactive interlayer. More impressively, the cells with the active interlayer maintained high capacity over 700 charge-discharge cycles.
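
    To put that cycle-life figure in perspective, a little arithmetic shows what holding capacity over 700 cycles implies about per-cycle losses. The numbers below are assumed for illustration, since the release does not give exact capacity figures:

    ```python
    # Illustrative arithmetic only: the capacity and retention values are
    # assumed examples, not figures from the paper.
    initial_capacity_mah_g = 900.0   # assumed initial capacity (mAh/g)
    retention_after_700 = 0.80       # assumed fraction retained after 700 cycles
    cycles = 700

    # Average per-cycle retention implied by the end-of-test figure
    per_cycle = retention_after_700 ** (1.0 / cycles)
    print(f"implied per-cycle retention: {per_cycle:.5f}")   # ~0.99968

    # Capacity at an intermediate cycle under uniform fade
    capacity_at_300 = initial_capacity_mah_g * per_cycle ** 300
    print(f"capacity at cycle 300: {capacity_at_300:.0f} mAh/g")  # ~818 mAh/g
    ```

    Even a modest-sounding 80% retention target requires losing, on average, less than 0.04% of capacity per cycle, which is why suppressing the shuttle effect matters so much.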

    “Previous experiments with cells having the redox-inactive layer only suppressed the shuttling, but in doing so, they sacrificed the energy for a given cell weight because the layer added extra weight,” said Guiliang Xu, an Argonne chemist and co-author of the paper. ​“By contrast, our redox-active layer adds to energy storage capacity and suppresses the shuttle effect.”

    To further study the redox-active layer, the team conducted experiments at the 17-BM beamline of Argonne’s Advanced Photon Source (APS), a DOE Office of Science user facility. The data gathered from exposing cells with this layer to X-ray beams allowed the team to ascertain the interlayer’s benefits.

    The data confirmed that a redox-active interlayer can reduce shuttling, reduce detrimental reactions within the battery and increase the battery’s capacity to hold more charge and last for more cycles. ​“These results demonstrate that a redox-active interlayer could have a huge impact on Li-S battery development,” said Wenqian Xu, a beamline scientist at APS. ​“We’re one step closer to seeing this technology in our everyday lives.”

    Going forward, the team wants to evaluate the growth potential of the redox-active interlayer technology. ​“We want to try to make it much thinner, much lighter,” Guiliang Xu said.

    A paper based on the research appeared in the Aug. 8 issue of Nature Communications. Khalil Amine, Tianyi Li, Xiang Liu, Guiliang Xu, Wenqian Xu, Chen Zhao and Xiao-Bing Zuo contributed to the paper.

    This research was sponsored by the DOE’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office Battery Materials Research Program and the National Research Foundation of Korea.

    About the Advanced Photon Source

    The U.S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, and electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

    This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

    The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Source: Argonne National Laboratory

  • Physicists confirm effective wave growth theory in space

    Newswise — A team from Nagoya University in Japan has observed, for the first time, energy transferring from resonant electrons to whistler-mode waves in space. Their findings offer direct evidence of the efficient wave growth predicted by non-linear growth theory. This should improve our understanding not only of space plasma physics but also of space weather, a phenomenon that affects satellites.

    When people imagine outer space, they often envision it as a perfect vacuum. In fact, that impression is wrong: space is filled with charged particles. In the depths of space, the density of charged particles is so low that they rarely collide with each other. Instead, the electric and magnetic fields that fill space control the motion of the charged particles. This lack of collisions holds throughout space, except very near celestial objects such as stars, moons or planets. There, charged particles are no longer traveling through the vacuum of space but through a medium where they can strike other particles.

    Around the Earth, these charged-particle interactions generate waves, including electromagnetic whistler-mode waves, which scatter and accelerate some of the charged particles. When diffuse auroras appear around the poles of planets, observers are seeing the results of an interaction between waves and electrons. Since electromagnetic fields are so important in space weather, studying these interactions should help scientists predict variations in the intensity of highly energetic particles. This might help protect astronauts and satellites from the most severe effects of space weather.  

    A team comprising Designated Assistant Professor Naritoshi Kitamura and Professor Yoshizumi Miyoshi of the Institute for Space and Earth Science (ISEE) at Nagoya University, together with researchers from the University of Tokyo, Kyoto University, Tohoku University, Osaka University, the Japan Aerospace Exploration Agency (JAXA) and several international collaborators, mainly used data from low-energy electron spectrometers, called Fast Plasma Investigation-Dual Electron Spectrometers, on board NASA’s Magnetospheric Multiscale spacecraft. They analyzed interactions between electrons and whistler-mode waves, which were also measured by the spacecraft. By applying a wave-particle interaction analyzer method, they succeeded in directly detecting the ongoing energy transfer from resonant electrons to whistler-mode waves at the location of the spacecraft. From this, they derived the growth rate of the wave. The researchers published their results in Nature Communications.
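
    In essence, such an analyzer accumulates the work that the wave's electric field does on each detected electron, since the rate of change of an electron's kinetic energy is q E · v. The sketch below illustrates that core quantity on synthetic data; the arrays and magnitudes are hypothetical stand-ins, not the mission's actual processing pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    Q_E = -1.602e-19  # electron charge (C)

    # Hypothetical stand-ins for spacecraft data: wave electric field at each
    # electron detection (V/m) and each detected electron's velocity (m/s)
    e_wave = rng.normal(0.0, 1e-3, size=(10_000, 3))
    v_elec = rng.normal(0.0, 1e7, size=(10_000, 3))

    # Work done by the wave field on each electron per unit time: dK/dt = q E . v
    w = Q_E * np.einsum("ij,ij->i", e_wave, v_elec)

    # Summed over detected electrons, a negative total means the electrons are
    # losing kinetic energy to the wave, i.e. the wave is growing
    print(f"net energy-transfer rate: {w.sum():.3e} W")
    ```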

    The most important finding was that the observed results were consistent with the hypothesis that non-linear growth occurs in this interaction. “This is the first time anybody has directly observed the efficient growth of waves in space for the wave-particle interaction between electrons and whistler-mode waves,” explains Kitamura. “We expect that the results will contribute to research on various wave-particle interactions and also improve our understanding of plasma physics. More specifically, the results will contribute to our understanding of the acceleration of electrons to high energies in the radiation belt, which are sometimes called ‘killer electrons’ because they inflict damage on satellites, as well as the loss of high-energy electrons into the atmosphere, which forms diffuse auroras.”

    Source: Nagoya University

  • Protein ‘anchors’ found to play key role in neurotransmitter GABA action

    Newswise — New clues about the way the brain chemical transmitter GABA functions suggest that a protein ‘anchor’ plays a key role in positioning its receptors in nerve cells.

    New research published in Nature Communications has found that a protein called Filamin A is responsible for guiding receptors to their correct places in brain cells. These receptors control brain activity in response to GABA, the main inhibitory neurotransmitter in the brain.

    GABA plays a critical role in the brain, including in controlling bodily movements and the transmission of pain. By activating specific receptors in the brain, GABA maintains proper brain activity by slowing down electrical impulses as they travel between brain cells.

    The discovery that Filamin A is involved in positioning these receptors in the right place could enable researchers to develop new therapies to manage a range of neurological disorders, including multiple sclerosis.

    Davide Calebiro, Professor of Molecular Endocrinology at the University of Birmingham and lead author of the paper, said:

    “Filamin A answers a question that scientists have been asking about how GABA is able to control a range of functions in the brain. By acting like an anchor that precisely positions GABA-B receptors where they are needed, it allows GABA to modulate a whole host of brain functions that are implicated in multiple neurological diseases.

    “While the GABA-A receptor gets most of the attention because it mediates rapid GABA responses, its twin, the GABA-B receptor, which mediates slower responses, is a highly promising drug target, and our findings could have significant impacts in treating everything from multiple sclerosis to epilepsy.

    “Moreover, we hypothesise that defects in Filamin A could impair the normal localisation of GABA-B receptors in neurons, disrupting the correct processing of signals in the brain and ultimately leading to the brain not being able to communicate effectively with the rest of the body.”

     

    Pioneering lab work

    Filamin A’s role in GABA activity was found thanks to pioneering new research methods developed at the Centre of Membrane Proteins and Receptors (COMPARE), a research institute of the University of Birmingham in partnership with the University of Nottingham.

    In particular, the use of single molecule and super resolution microscopy approaches developed by the Calebiro lab have enabled the research team to directly follow individual receptors and Filamin molecules as they interact on the surface of living cells with unprecedented detail.
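
    Analyses of this kind usually reduce each single-molecule trajectory to summary quantities such as the mean squared displacement (MSD), from which a diffusion coefficient can be estimated. The following is a generic illustration on a synthetic Brownian track, not the authors' actual pipeline:

    ```python
    import numpy as np

    def msd(track, max_lag):
        """Mean squared displacement of one 2-D trajectory (N x 2 array, in µm)."""
        return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                         for lag in range(1, max_lag + 1)])

    # Synthetic Brownian track: D = 0.05 µm²/s, sampled every 50 ms
    rng = np.random.default_rng(0)
    dt, D = 0.05, 0.05
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(2000, 2))
    track = np.cumsum(steps, axis=0)

    lags = np.arange(1, 21) * dt
    D_est = np.polyfit(lags, msd(track, 20), 1)[0] / 4.0  # MSD = 4 D t in 2-D
    print(f"estimated D = {D_est:.3f} µm²/s")  # close to the 0.05 used above
    ```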

    Source: University of Birmingham

  • Electronic bridge allows rapid energy sharing between semiconductors

    Newswise — As semiconductor devices become ever smaller, researchers are exploring two-dimensional (2D) materials for potential applications in transistors and optoelectronics. Controlling the flow of electricity and heat through these materials is key to their functionality, but first we need to understand the details of those behaviors at atomic scales.

    Now, researchers have discovered that electrons play a surprising role in how energy is transferred between layers of 2D semiconductor materials tungsten diselenide (WSe2) and tungsten disulfide (WS2). Although the layers aren’t tightly bonded to one another, electrons provide a bridge between them that facilitates rapid heat transfer, the researchers found.

    “Our work shows that we need to go beyond the analogy of Lego blocks to understand stacks of disparate 2D materials, even though the layers aren’t strongly bonded to one another,” said Archana Raja, a scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), who led the study. “The seemingly distinct layers, in fact, communicate through shared electronic pathways, allowing us to access and eventually design properties that are greater than the sum of the parts.”

    The study appeared recently in Nature Nanotechnology and combines insights from ultrafast, atomic-scale temperature measurements and extensive theoretical calculations.

    “This experiment was motivated by fundamental questions about atomic motions in nanoscale junctions, but the findings have implications for energy dissipation in futuristic electronic devices,” said Aditya Sood, co-first author of the study and currently a research scientist at Stanford University. “We were curious about how electrons and atomic vibrations couple to one another when heat flows between two materials. By zooming into the interface with atomic precision, we uncovered a surprisingly efficient mechanism for this coupling.”

    An ultrafast thermometer with atomic precision

    The researchers studied devices consisting of stacked monolayers of WSe2 and WS2. The devices were fabricated by Raja’s group at Berkeley Lab’s Molecular Foundry, which perfected the art of using Scotch tape to lift off crystalline monolayers of the semiconductors, each less than a nanometer thick. Using polymer stamps aligned under a home-built stacking microscope, the researchers deposited these layers on top of each other and precisely placed them over a microscopic window to enable the transmission of electrons through the sample.

    In experiments conducted at the Department of Energy’s SLAC National Accelerator Laboratory, the team used a technique known as ultrafast electron diffraction (UED) to measure the temperatures of the individual layers while optically exciting electrons in just the WSe2 layer. The UED served as an “electron camera”, capturing the atom positions within each layer. By varying the time interval between the excitation and probing pulses by trillionths of a second, they could track the changing temperature of each layer independently, using theoretical simulations to convert the observed atomic movements into temperatures.
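
    The conversion rests on the Debye-Waller effect: as a layer heats, its atoms vibrate more and its diffraction peaks dim, so a peak's intensity drop can be read as a temperature rise. A minimal sketch of that step, with a made-up calibration slope in place of the team's simulation-based conversion, might look like this:

    ```python
    import numpy as np

    # In the high-temperature Debye-Waller limit, ln(I0/I) for a given
    # diffraction peak grows roughly linearly with lattice temperature.
    # The slope below is a hypothetical calibration, not from the study.
    C_SLOPE = 1.2e-3   # ln-intensity drop per kelvin (assumed)
    T_BASE = 295.0     # starting temperature (K)

    def lattice_temperature(i_norm):
        """Convert normalized peak intensity I(t)/I(t<0) to temperature (K)."""
        return T_BASE - np.log(i_norm) / C_SLOPE

    delays_ps = np.array([-1.0, 0.5, 2.0, 10.0])     # pump-probe delays (ps)
    i_norm = np.array([1.000, 0.988, 0.975, 0.970])  # synthetic intensities
    print(lattice_temperature(i_norm))               # temperature vs. delay, in K
    ```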

    “What this UED approach enables is a new way of directly measuring temperature within this complex heterostructure,” said Aaron Lindenberg, a co-author on the study at Stanford University. “These layers are only a few angstroms apart, and yet we can selectively probe their response and, as a result of the time resolution, can probe at fundamental time scales how energy is shared between these structures in a new way.”

    They found that the WSe2 layer heated up, as expected, but to their surprise, the WS2 layer also heated up in tandem, suggesting a rapid transfer of heat between layers. By contrast, when they didn’t excite electrons in the WSe2 and heated the heterostructure using a metal contact layer instead, the interface between WSe2 and WS2 transmitted heat very poorly, confirming previous reports.

    “It was very surprising to see the two layers heat up almost simultaneously after photoexcitation and it motivated us to zero in on a deeper understanding of what was going on,” said Raja.

    An electronic “glue state” creates a bridge

    To understand their observations, the team employed theoretical calculations, using methods based on density functional theory to model how atoms and electrons behave in these systems with support from the Center for Computational Study of Excited-State Phenomena in Energy Materials (C2SEPEM), a DOE-funded Computational Materials Science Center at Berkeley Lab.

    The researchers conducted extensive calculations of the electronic structure of layered 2D WSe2/WS2, as well as the behavior of lattice vibrations within the layers. Like squirrels traversing a forest canopy, which can run along paths defined by branches and occasionally jump between them, electrons in a material are limited to specific states and transitions (known as scattering), and knowledge of that electronic structure provides a guide to interpreting the experimental results.

    “Using computer simulations, we explored where the electron in one layer initially wanted to scatter to, due to lattice vibrations,” said Jonah Haber, co-first author on the study and now a postdoctoral researcher in the Materials Sciences Division at Berkeley Lab. “We found that it wanted to scatter to this hybrid state – a kind of ‘glue state’ where the electron is hanging out in both layers at the same time. We have a good idea of what these glue states look like now and what their signatures are and that lets us say relatively confidently that other, 2D semiconductor heterostructures will behave the same way.”

    Large-scale molecular dynamics simulations confirmed that, in the absence of the shared electron “glue state”, heat took far longer to move from one layer to another. These simulations were conducted primarily at the National Energy Research Scientific Computing Center (NERSC).

    “The electrons here are doing something important: they are serving as bridges to heat dissipation,” said Felipe de Jornada, a co-author from Stanford University. “If we can understand and control that, it offers a unique approach to thermal management in semiconductor devices.”

    NERSC and the Molecular Foundry are DOE Office of Science user facilities at Berkeley Lab.

    This research was funded primarily by the Department of Energy’s Office of Science.  


    Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes. Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of California for the U.S. Department of Energy’s Office of Science.

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

    Source: Lawrence Berkeley National Laboratory

  • Self-powered, printable smart sensors created from emerging semiconductors could mean cheaper, greener Internet of Things

    Newswise — Creating smart sensors to embed in our everyday objects and environments for the Internet of Things (IoT) would vastly improve daily life—but requires trillions of such small devices. Simon Fraser University professor Vincenzo Pecunia believes that emerging alternative semiconductors that are printable, low-cost and eco-friendly could lead the way to a cheaper and more sustainable IoT.

    Leading a multinational team of top experts in various areas of printable electronics, Pecunia has identified key priorities and promising avenues for printable electronics to enable self-powered, eco-friendly smart sensors. His forward-looking insights are outlined in his paper published on Dec. 28 in Nature Electronics.

    “Equipping everyday objects and environments with intelligence via smart sensors would allow us to make more informed decisions as we go about our daily lives,” says Pecunia. “Conventional semiconductor technologies require complex, energy-intensive and expensive processing, but printable semiconductors can deliver electronics with a much lower carbon footprint and cost, since they can be processed by printing or coating, which require much lower energy and materials consumption.”

    Pecunia says making printable electronics that can work using energy harvested from the environment—from ambient light or ubiquitous radiofrequency signals, for example—could be the answer.

    “Our analysis reveals that a key priority is to realize printable electronics with as small a material set as possible to streamline their fabrication process, thus ensuring the straightforward scale-up and low cost of the technology,” says Pecunia. The article outlines a vision of printed electronics that could also be powered by ubiquitous mobile signals through innovative low-power approaches—essentially allowing smart sensors to charge out of thin air.
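
    For a sense of scale, a back-of-the-envelope power budget shows why low-power design is the crux of charging "out of thin air". Every figure below is an assumed, order-of-magnitude illustration rather than a number from the paper:

    ```python
    # Hypothetical power budget for an RF-harvesting printed sensor
    ambient_rf_uw_cm2 = 0.1       # assumed ambient RF power density (µW/cm²)
    harvester_cm2 = 10.0          # assumed printed antenna/rectifier area (cm²)
    efficiency = 0.2              # assumed RF-to-DC conversion efficiency

    p_avg_uw = ambient_rf_uw_cm2 * harvester_cm2 * efficiency  # average power

    energy_per_reading_uj = 50.0  # assumed cost to sense and transmit once (µJ)
    readings_per_hour = p_avg_uw * 3600 / energy_per_reading_uj
    print(f"harvested {p_avg_uw:.2f} µW -> {readings_per_hour:.0f} readings/hour")
    ```

    Under these assumptions the sensor banks only a fraction of a microwatt, enough for a measurement every few minutes, which is exactly the regime such low-power approaches target.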

    “Based on recent breakthroughs, we anticipate that printable semiconductors could play a key role in realizing the full sustainability potential of the Internet of Things by delivering self-powered sensors for smart homes, smart buildings and smart cities, as well as for manufacturing and industry.”

    Pecunia has already achieved numerous breakthroughs towards self-powered printable smart sensors, demonstrating printed electronics with record-low power dissipation and the first-ever printable devices powered by ambient light via tiny printable solar cells.

    His research group at SFU’s School of Sustainable Energy Engineering focuses on the development of innovative approaches to eco-friendly, printable solar cells and electronics for use in next-generation smart devices.

    Pecunia notes that the semiconductor technologies being developed by his group could potentially allow the seamless integration of electronics, sensors, and energy harvesters at the touch of a ‘print’ button at single production sites—thereby reducing the carbon footprint, supply chain issues and energetic costs associated with long-distance transport in conventional electronics manufacturing.

    “Due to their unique manufacturability, printable semiconductors also represent a unique opportunity for Canada,” he says. “Not only to become a global player in next-generation, eco-friendly electronics, but also to overcome its reliance on electronics from faraway countries and the associated supply chain and geo-political issues.

    “Our hope is that these semiconductors will deliver eco-friendly technologies for a future of clean energy generation and sustainable living, which are key to achieving Canada’s net-zero goal.”

    Source: Simon Fraser University

  • New Computer Program ‘Learns’ to Identify Mosaic Mutations That Cause Disease

    Newswise — Genetic mutations cause hundreds of unsolved and untreatable disorders. Among them, mutations present in only a small percentage of cells, called mosaic mutations, are extremely difficult to detect.

    Current DNA mutation software detectors, while scanning the 3 billion bases of the human genome, are not well suited to discern mosaic mutations hiding among normal DNA sequences. Often medical geneticists must review DNA sequences by eye to try to identify or confirm mosaic mutations — a time-consuming endeavor fraught with the possibility of error.

    Writing in the January 2, 2023 issue of Nature Biotechnology, researchers from the University of California San Diego School of Medicine and Rady Children’s Institute for Genomic Medicine describe a method for teaching a computer how to spot mosaic mutations using an artificial intelligence approach termed “deep learning.”

    Deep learning, sometimes referred to as artificial neural networks, is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example, especially from large amounts of information. Compared with traditional statistical models, deep learning models use artificial neural networks to process visually represented data. The models function in ways similar to human visual processing, with much greater accuracy and attention to detail, leading to major advances in computational abilities, including mutation detection.

    “One example of an unsolved disorder is focal epilepsy,” said senior study author Joseph Gleeson, MD, Rady Professor of Neuroscience at UC San Diego School of Medicine and director of neuroscience research at the Rady Children’s Institute for Genomic Medicine.

    “Epilepsy affects 4% of the population, and about one-quarter of focal seizures fail to respond to common medication. These patients often require surgical excision of the short-circuited focal part of the brain to stop seizures. Among these patients, mosaic mutations within the brain can cause the epileptic focus.

    “We have had many epilepsy patients where we were not able to spot the cause, but once we applied our method, called ‘DeepMosaic,’ to the genomic data, the mutation became obvious. This has allowed us to improve the sensitivity of DNA sequencing in certain forms of epilepsy and has led to discoveries that point to new ways to treat brain disease.”

    Gleeson said accurate detection of mosaic mutations is the first step in medical research toward developing treatments for many diseases.

    Co-first and co-corresponding author Xiaoxu Yang, Ph.D., a postdoctoral scholar in Gleeson’s lab, said DeepMosaic was trained on almost 200,000 simulated and biological variants across the genome until, “finally, we were satisfied with its ability to detect variants from data it had never encountered before.”

    To train the computer, the authors fed it examples of trustworthy mosaic mutations as well as many normal DNA sequences and taught it to tell the difference. By repeatedly training and retraining with ever-more complex datasets, and by selecting among a dozen candidate models, the computer was eventually able to identify mosaic mutations much better than human eyes and prior methods. DeepMosaic was also tested on several independent large-scale sequencing datasets that it had never seen, outperforming prior approaches.
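
    As a rough illustration of this image-based setup, the sketch below defines a small convolutional network that scores image-encoded candidate variants as mosaic or not. The shapes, channels and layer sizes are hypothetical, and this is not the published DeepMosaic implementation:

    ```python
    import torch
    import torch.nn as nn

    class MosaicClassifier(nn.Module):
        """Toy CNN over image-like encodings of sequencing pileups."""
        def __init__(self, in_channels=3, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.LazyLinear(64), nn.ReLU(),
                nn.Linear(64, n_classes),  # mosaic vs. non-mosaic
            )

        def forward(self, x):
            return self.head(self.features(x))

    model = MosaicClassifier()
    batch = torch.randn(8, 3, 64, 64)  # 8 synthetic pileup images
    print(model(batch).shape)          # torch.Size([8, 2])
    ```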

    “DeepMosaic surpassed traditional tools in detecting mosaicism from genomic and exonic sequences,” said co-first author Xin Xu, a former undergraduate research assistant at UC San Diego School of Medicine and now a research data scientist at Novartis. “The prominent visual features picked up by the deep learning models are very similar to what experts are focusing on when manually examining variants.”

    DeepMosaic is freely available to scientists. It is not a single computer program, but rather an open-source platform that can enable other researchers to train their own neural networks to achieve a more targeted detection of mutations using a similar image-based setup, the researchers said.

    Co-authors include: Martin W. Breuss, Danny Antaki, Laurel L. Ball, Changuk Chung, Jiawei Shen, Chen Li and Renee D. George, UC San Diego and Rady Children’s Institute for Genomic Medicine; Yifan Wang, Taejeong Bae and Alexei Abyzov, Mayo Clinic; Yuhe Cheng, Ludmil B. Alexandrov and Jonathan L. Sebat, UC San Diego; Liping Wei, Peking University; and NIMH Brain Somatic Mosaicism Network.

    Funding for this research came, in part, from the National Institutes of Health (grants U01MH108898 and R01MH124890), the San Diego Supercomputer Center and UC San Diego Institute of Genomic Medicine.


    Source: University of California San Diego

  • South Asian Black carbon aerosols accelerate loss of glacial mass over the Tibetan plateau

    Newswise — Black carbon aerosols are produced by the incomplete combustion of fossil fuels and biomass, and are characterized by strong light absorption. Black carbon deposition in snow/ice reduces the albedo of snow/ice surfaces, which may accelerate the melting of glaciers and snow cover, thus changing the hydrological process and water resources in the region.
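
    The albedo mechanism can be made concrete with simple energy-balance arithmetic: a darker surface absorbs more sunlight, and the extra absorbed energy melts more ice. All of the figures below are assumed for illustration and are not taken from the study:

    ```python
    # Illustrative melt estimate from an assumed albedo reduction
    insolation_w_m2 = 250.0      # assumed mean shortwave flux (W/m²)
    delta_albedo = 0.02          # assumed albedo drop from black carbon
    season_s = 90 * 24 * 3600    # a ~90-day melt season, in seconds
    L_FUSION = 3.34e5            # latent heat of fusion of ice (J/kg)

    extra_energy_j_m2 = insolation_w_m2 * delta_albedo * season_s
    extra_melt_kg_m2 = extra_energy_j_m2 / L_FUSION
    print(f"extra melt: {extra_melt_kg_m2:.0f} kg/m² "
          f"(~{extra_melt_kg_m2 / 1000:.2f} m water equivalent)")
    ```

    Even a two-percentage-point darkening, sustained over one melt season, melts on the order of a tenth of a meter of water equivalent in this toy calculation, which is why deposition on snow and ice is taken so seriously.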

    The South Asia region adjacent to the Tibetan Plateau has among the highest levels of black carbon emissions in the world. Many studies have emphasized that black carbon aerosols from South Asia can be transported across the Himalayan Mountains to the inland region of the Tibetan Plateau.

    Recently, a joint research team led by Prof. KANG Shichang from the Northwest Institute of Eco-Environment and Resources of the Chinese Academy of Sciences (CAS), Prof. CHEN Deliang from the University of Gothenburg, and Prof. Robert Gillies from Utah State University analyzed the influence of black carbon aerosols on regional precipitation and glaciers over the Qinghai-Tibet Plateau.

    Their findings were published in Nature Communications on Nov. 30.

    The researchers found that, since the beginning of the 21st century, South Asian black carbon aerosols have indirectly affected the mass gain of Tibetan Plateau glaciers by changing long-range water vapor transport from the South Asian monsoon region.

    “Black carbon aerosols in South Asia heat up the middle and upper atmosphere, thus increasing the North–South temperature gradient,” said Prof. KANG. “Accordingly, the convective activity in South Asia is enhanced, which causes convergence of water vapor in South Asia. Meanwhile, black carbon also increases the number of cloud condensation nuclei in the atmosphere.”

    These changes in meteorological conditions caused by black carbon aerosols allow more water vapor to fall as precipitation in South Asia, weakening its northward transport to the Tibetan Plateau. As a result, monsoon precipitation decreases over the central and southern Tibetan Plateau, especially in the south.

    The decrease in precipitation in turn reduces the mass gain of glaciers. From 2007 to 2016, the reduced mass gain from decreased precipitation accounted for 11.0% of the average glacier mass loss on the Tibetan Plateau and 22.1% in the Himalayas.

    “The transboundary transport and deposition of black carbon aerosols from South Asia accelerate glacier ablation over the Tibetan Plateau. Meanwhile, the reduction of summer precipitation over the Tibetan Plateau will reduce the mass gain of plateau glaciers, which will increase the amount of glacier mass deficit,” said Prof. KANG.

    Source: Chinese Academy of Sciences

  • Study discovers triple immunotherapy combination as possible treatment for pancreatic cancer

    Newswise — HOUSTON ― Researchers at The University of Texas MD Anderson Cancer Center have discovered a novel immunotherapy combination, targeting checkpoints in both T cells and myeloid suppressor cells, that successfully reprogrammed the tumor immune microenvironment (TIME) and significantly improved anti-tumor responses in preclinical models of pancreatic cancer.

    In this study, published today in Nature Cancer, researchers used comprehensive immune profiling in mouse and human pancreatic cancers to systematically identify mechanisms of immunotherapy resistance and investigate potential therapeutic targets. They found that neutralizing several distinct immunosuppressive mechanisms of the TIME dramatically improved survival rates in laboratory models, pointing to a potential treatment option for this notoriously lethal and unresponsive cancer.  

    “This triple combination therapy led to an unprecedented curative response in our models,” said corresponding author Ronald DePinho, M.D., professor of Cancer Biology. “The prevailing view has been that pancreatic cancer is impervious to immunotherapy, but this preclinical study shows that it can be vulnerable to the right combination therapy. Moreover, the presence of these targets in human pancreatic cancer specimens raises the exciting possibility that such therapeutic combinations could one day help our patients.”

    Pancreatic cancer is one of the leading causes of cancer death in the United States, partially because 80% of cases are diagnosed at an advanced stage. Pancreatic cancer is also considered to be “non-immunogenic,” meaning it is unresponsive to commonly used anti-PD-1 and anti-CTLA-4 immune checkpoint inhibitors. This is due in part to the immunosuppressive conditions in the TIME, but the mechanisms behind this resistance are not fully understood.

    The researchers used high-dimensional immune profiling and single-cell RNA sequencing to study how the TIME is affected by a variety of immunotherapies. They identified specific immune checkpoint proteins, 41BB and LAG3, that were highly expressed in exhausted T cells.

    In testing antibodies targeting these checkpoints, the researchers observed that models treated with a 41BB agonist and LAG3 antagonist in combination had slower tumor progression, higher levels of anti-tumor immunity indicators and significantly improved survival rates compared to treatment with either antibody alone or with other checkpoint inhibitors. Notably, these preclinical studies faithfully mirrored the human data in their lack of efficacy of anti-PD1 or anti-CTLA-4 therapy.

    The researchers also confirmed these two therapeutic targets are present in human pancreatic cancer samples, with 81% and 93% of patients analyzed having T cells with 41BB and LAG3 expression, respectively. 

    Because this dual-therapy combination did not completely eliminate established tumors, the investigators also examined efforts to reprogram the TIME to further sensitize tumors to immunotherapy. At baseline, the TIME contained an abundance of myeloid-derived suppressor cells (MDSCs) expressing CXCR2, a protein associated with recruiting immunosuppressive cells. Inhibiting CXCR2 alone decreased MDSC migration and blocked tumor growth, but it was not curative. This prompted the investigators to consider a combination targeting 41BB, LAG3 and CXCR2.

    It was this triple combination that resulted in complete tumor regression and improved overall survival in 90% of preclinical models. In a more stringent lab model that develops multiple spontaneously arising tumors with higher treatment resistance, the combination achieved complete tumor regression in over 20% of cases.

    “These are encouraging results, especially considering the lack of effective immunotherapy options in pancreatic cancer,” DePinho said. “By targeting multiple synergistic mechanisms that get in the way of the immune response, we can give T cells a fighting chance to attack these tumors. Of course, we still need to see how this combination translates into a safe and effective regimen in the clinic, and we invite other researchers to build upon these results. We are optimistic that pancreatic cancers, and hopefully other non-immunogenic cancers, can ultimately be rendered vulnerable to combination immunotherapy.”

    The authors point out that these particular immunotherapy agents currently are undergoing clinical trials as monotherapies, suggesting potential opportunities to rapidly translate this triple combination into clinical studies.

    This work was supported by the National Institutes of Health/National Cancer Institute (P01 CA117969, RO1CA240526, RO1CA236864, R01CA231349, R01CA220236, P50CA221707), the Elsa U. Pardee Foundation, MD Anderson’s Advanced Scholar Program, the Eleanor Russo Fund for Pancreatic Research, the Ralph A. Loveys Family Charitable Foundation, the Cultural & Charitable Club of Somerset Run, the New Jersey Health Foundation, the Sheikh Ahmed Bin Zayed Al Nahyan Center for Pancreatic Cancer Research, and MD Anderson’s Pancreatic Cancer Moon Shot®. A full list of collaborating authors and their disclosures can be found with the full paper.

     


    Source: University of Texas MD Anderson Cancer Center

  • Study reveals how chronic blood cancer transitions to aggressive disease

    Newswise — A type of chronic leukemia can simmer for many years. Some patients may need treatment to manage this type of blood cancer — called myeloproliferative neoplasms (MPN) — while others may go through long periods of watchful waiting. But for a small percentage of patients, the slower paced disease can transform into an aggressive cancer, called secondary acute myeloid leukemia, that has few effective treatment options. Little has been known about how this transformation takes place.

    But now, researchers at Washington University School of Medicine in St. Louis have identified an important transition point in the shift from chronic to aggressive leukemia. They have shown that blocking a key molecule in the transition pathway prevents this dangerous disease progression in mouse models of the disease and in mice bearing tumors sampled from human patients.

    The research appears Dec. 29 in the journal Nature Cancer.

    “Secondary acute myeloid leukemia has a grim prognosis,” said senior author Stephen T. Oh, MD, PhD, an associate professor of medicine and co-director of the Division of Hematology at the School of Medicine. “Almost every patient who develops acute leukemia after a history of myeloproliferative neoplasms will die from the disease. Therefore, a major focus of our research is to better understand this conversion from chronic to aggressive disease and to develop better therapies and, hopefully, prevention strategies for these patients.”

    The study suggests that inhibiting this key transition molecule — called DUSP6 — helps overcome the resistance that these cancers often develop to JAK2 inhibitors, the therapy typically used to treat them. JAK2 inhibitors are an anti-inflammatory therapy also used to treat rheumatoid arthritis.

    “These patients are commonly treated with JAK2 inhibitors, but their disease progresses despite that therapy, so we’re also trying to identify how the disease is able to worsen even in the setting of JAK2 inhibition,” said Oh, who treats patients at Siteman Cancer Center at Barnes-Jewish Hospital and Washington University School of Medicine.

    The researchers conducted a deep dive into the genetics of these tumors, both during the slow chronic phase and after the disease had transformed into the aggressive form while patients were taking JAK2 inhibitors. The DUSP6 gene stood out as highly expressed in the 40 patients whose tumors were analyzed in this study.

    Using genetic techniques to delete the DUSP6 gene prevented the transition to aggressive disease in mouse models of this cancer. The researchers also tested a drug compound that inhibits DUSP6 and found that the compound — available only for animal research — stopped progression from the chronic to the aggressive disease in two different mouse models of the cancer and in mice bearing human tumors sampled from patients. Reducing DUSP6 levels, both genetically and with the drug, also reduced inflammation in these models.

    Since the drug that inhibits DUSP6 is not available for human clinical trials, Oh and his colleagues are interested in exploring treatments that inhibit another molecule that they found is activated downstream of DUSP6 and that they showed is also required to perpetuate the negative effects of DUSP6. There are drugs in clinical trials that inhibit this downstream molecule, known as RSK1. Oh’s team is interested in investigating these drugs for their potential to block the dangerous transition from chronic to aggressive disease and address resistance to JAK2 inhibition.

    “A future clinical trial might enroll myeloproliferative neoplasm patients who are taking JAK2 inhibitors and, despite that, show evidence of their disease worsening,” Oh said. “At that point, we might add the type of RSK inhibitor that’s now in trials to their therapy to see if that helps block progression of the disease into an aggressive secondary acute myeloid leukemia. A newly developed RSK inhibitor is in phase 1 clinical trials for patients with breast cancer, so we’re hopeful our work provides a promising foundation for developing a new treatment strategy for patients with this chronic blood cancer.”


    This work was supported by the National Institutes of Health (NIH), grant numbers R01HL134952, T32HL007088 and R01HL147978; the Leukemia and Lymphoma Society Translational Research Program; the MPN Research Foundation; the When Everyone Survives Foundation; the Edward P. Evans Foundation; the Gabrielle’s Angel Foundation; the Leukemia and Lymphoma Society; a Canderel Rising Star Summer Studentship; a Canadian Research Chair in Functional Genomics; and Canadian Institutes of Health Research (CIHR) grants PJT-156233 and PJT-438303. Technical support was provided by the Alvin J. Siteman Cancer Center Tissue Procurement Core Facility; the Biostatistics Shared Resource; the Flow Cytometry Core; Barnes-Jewish Hospital; the Institute of Clinical and Translational Sciences; and the Immunomonitoring Laboratory, which are supported by NCATS Clinical and Translational Sciences Award UL1 TR002345 and National Cancer Institute (NCI) Cancer Center Support Grant P30CA91842. Additional support was provided by the Barnard Cancer Institute. The Immunomonitoring Laboratory is also supported by the Andrew M. and Jane M. Bursky Center for Human Immunology and Immunotherapy Programs.

    Kong T, et al. DUSP6 mediates resistance to JAK2 inhibition and drives leukemic progression. Nature Cancer. Dec. 29, 2022.

    About Washington University School of Medicine

    WashU Medicine is a global leader in academic medicine, including biomedical research, patient care and educational programs with 2,700 faculty. Its National Institutes of Health (NIH) research funding portfolio is the fourth largest among U.S. medical schools, has grown 54% in the last five years, and, together with institutional investment, WashU Medicine commits well over $1 billion annually to basic and clinical research innovation and training. Its faculty practice is consistently within the top five in the country, with more than 1,790 faculty physicians practicing at over 60 locations and who are also the medical staffs of Barnes-Jewish and St. Louis Children’s hospitals of BJC HealthCare. WashU Medicine has a storied history in MD/PhD training, recently dedicated $100 million to scholarships and curriculum renewal for its medical students, and is home to top-notch training programs in every medical subspecialty as well as physical therapy, occupational therapy, and audiology and communications sciences.

     

    Source: Washington University in St. Louis

  • A glimpse of a cell’s sense of touch

    Newswise — Building tissues and organs is one of the most complex and essential tasks that cells must accomplish during embryogenesis. In this collective task, cells communicate through a variety of methods, including biochemical signals – similar to a cell’s sense of smell – and mechanical cues – the cell’s sense of touch. Researchers in a variety of disciplines have been fascinated by cell communication for decades. Professor Otger Campàs, together with his colleagues from the Physics of Life (PoL) Cluster of Excellence at Technische Universität Dresden and from the University of California Santa Barbara (UCSB), has now been able to unravel another mystery surrounding the question of how cells use their sense of touch to make vital decisions during embryogenesis. Their paper has now been published in the journal Nature Materials.

    Testing the surroundings
    In their paper, the researchers report how cells within a living embryo mechanically test their environment and what mechanical parameters and structures they perceive. “We know a lot about how cells sense and respond to mechanical cues in a dish. However, their microenvironment is quite different within an embryo and we did not know what mechanical cues they perceive in a living tissue,” said Campàs, Chair of Tissue Dynamics and PoL Managing Director.

    Mechanical cues help cells make important decisions, such as whether to divide, move or even differentiate, the process by which stem cells turn into more specialized cells able to perform specific functions. Previous work revealed that stem cells placed on a synthetic substrate rely heavily on mechanical cues to make decisions: cells on surfaces with a stiffness similar to bone became osteoblasts (bone cells), whereas cells on surfaces with a stiffness similar to brain tissue became neurons. The findings greatly advanced the field of tissue engineering, as researchers used these mechanical cues to create synthetic scaffolds that coax stem cells to develop into desired cell types. These scaffolds are used today in a variety of biomedical applications.

    From a dish to the living embryo

    However, a dish is not the cell’s natural habitat. While building an organism, cells are not in contact with synthetic scaffolds in a flat dish, but rather with complex living materials in three dimensions.

    Over the last decade, Prof. Campàs’ research group uncovered the mechanical cues that guide cells in the complex tissues of an embryo. Using a unique technique developed in his lab, the researchers could probe the living tissue in a similar way as cells do and find out what mechanical structures the cells sense. “We first studied how cells mechanically test their micro-environment as they differentiate and build the body axis of a vertebrate,” Campàs said. “Cells used different protrusions to push and pull on their environment. So we quantified how fast and strong they were pushing.” Using a ferromagnetic oil droplet inserted between the developing cells and subjected to a controlled magnetic field, the team was able to mimic these tiny forces and measure the mechanical response of the cells’ surroundings.
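
    The logic of such droplet measurements is that a known actuation stress is applied and the droplet's deformation is recorded, so a stiffness follows from stress over strain once the response plateaus. A minimal sketch with synthetic numbers, not the lab's actual calibration, is shown below:

    ```python
    import numpy as np

    # Kelvin-Voigt picture: under a stress step, strain rises to a plateau
    # set by the elastic modulus. All values here are hypothetical.
    sigma_pa = 30.0              # assumed applied magnetic stress (Pa)
    E_true, eta = 120.0, 600.0   # hypothetical modulus (Pa) and viscosity (Pa*s)

    t = np.linspace(0.0, 60.0, 200)  # time (s)
    strain = (sigma_pa / E_true) * (1.0 - np.exp(-E_true * t / eta))

    plateau = strain[-20:].mean()    # steady-state strain
    print(f"estimated tissue stiffness: {sigma_pa / plateau:.0f} Pa")  # ~120 Pa
    ```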

    Sensing the tissue architecture as cells change fate

    Critical to these embryonic cells’ actions is their collective physical state, which Campàs and his research group described in a previous paper as that of an active foam, similar in consistency to soap suds or beer froth, with cells clumped together by adhesion and tugging on each other. What the cells are mechanically probing, Campàs and his team found, is the collective state of this “living foam” – how stiff it is and how confined the assemblage is. “And right at the moment that cells differentiate and decide to change their fate, there is a change in the material properties of the tissue that they perceive.” According to him, at the moment the cells within the tissue decide on their fate, the tissue’s stiffness drops.

    Going forward

    What’s not yet proven in this study is the complex question of whether – and if so, how – the change in the stiffness in the embryonic environment drives the change in the cell state. “There is an interplay between the mechanical characteristics of the structures that cells collectively build, such as tissues or organs, and the decisions they make individually, as these depend on the mechanics cues that cells sense in the tissue. This interplay is at the core of how nature builds organisms.”

    The findings from this study might also have important implications for tissue engineering. Potential materials that mimic the foam-like characteristics of the embryonic tissue, as opposed to the widely used synthetic polymer or gel scaffolds, may allow researchers to create more robust and sophisticated synthetic tissues, organs and implants in the lab, with the appropriate geometries and mechanical characteristics for the desired functions.

    Source: Technische Universität Dresden

  • Good and bad feelings for brain stem serotonin

    Newswise — New insights into the opposing actions of serotonin-producing nerve fibres in mice could lead to drugs for treating addictions and major depression.

    Scientists in Japan have identified a nerve pathway involved in the processing of rewarding and distressing stimuli and situations in mice. 

    The new pathway, originating in a bundle of brain stem nerve fibres called the median raphe nucleus, acts in opposition to a previously identified reward/aversion pathway that originates in the nearby dorsal raphe nucleus. The findings, published by scientists at Hokkaido University and Kyoto University with their colleagues in the journal Nature Communications, could have implications for developing drug treatments for various mental disorders, including addictions and major depression.

    Previous studies had already revealed that activating serotonin-producing nerve fibres from the dorsal raphe nucleus in the brain stem of mice leads to the pleasurable feeling associated with reward. However, selective serotonin reuptake inhibitors (SSRIs), antidepressant drugs that increase serotonin levels in the brain, fail to produce clear feelings of reward or to treat the loss of the ability to feel pleasure associated with depression. This suggests that there are other serotonin-producing nerve pathways in the brain associated with feelings of reward and aversion.

    To further study the reward and aversion nerve pathways of the brain, Hokkaido University neuropharmacologist Yu Ohmura and Kyoto University pharmacologist Kazuki Nagayasu, together with colleagues at several universities in Japan, focused their attention on the median raphe nucleus. This region has not received as much research attention as its brain stem neighbour, the dorsal raphe nucleus, even though it also is a source of serotonergic nerve fibres.

    The scientists conducted a wide variety of tests to measure the activity of serotonin neurons in mice in response to stimulating or inhibiting the median raphe, using fluorescent proteins that detect the entry of calcium ions, a proxy for neuronal activation, in a cell-type-specific manner.

    They found that, for example, pinching a mouse’s tail—an unpleasant stimulus—increased calcium-dependent fluorescence in the serotonin neurons of the median raphe. Giving mice a treat such as sugar, on the other hand, reduced median raphe serotonin fluorescence. Also, directly stimulating or inhibiting the median raphe nucleus, using a genetic technique involving light, led to aversive or reward-seeking behaviours, such as avoiding or wanting to stay in a chamber—depending on the type of stimulus applied.
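
    Responses like these are conventionally summarized as a relative fluorescence change, ΔF/F, against a pre-stimulus baseline. Here is a generic sketch of that computation on a synthetic trace; it is illustrative only, not the study's analysis code:

    ```python
    import numpy as np

    def dff(trace, fs, baseline_s=10.0):
        """Delta-F-over-F using the pre-stimulus window as baseline."""
        f0 = np.median(trace[: int(baseline_s * fs)])
        return (trace - f0) / f0

    # Synthetic trace sampled at 20 Hz: flat baseline, then a rise after
    # a simulated tail pinch at t = 10 s
    fs = 20.0
    trace = np.concatenate([np.full(200, 100.0), np.full(200, 112.0)])
    trace += np.random.default_rng(1).normal(0.0, 1.0, trace.size)

    resp = dff(trace, fs)
    print(f"mean dF/F after stimulus: {resp[200:].mean():.1%}")  # ~ +12%
    ```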

    The team also conducted tests to discover where the switched-on serotonergic nerve fibres of the median raphe were sending signals, and found an important connection with the brain stem’s interpeduncular nucleus. They also identified serotonin receptors within this nucleus that were involved in the aversive properties associated with median raphe serotonergic activity.

    Further research is needed to fully elucidate this pathway and others related to rewarding and aversive feelings and behaviours. “These new insights could lead to a better understanding of the biological basis of mental disorders where aberrant processing of rewards and aversive information occur, such as in drug addiction and major depressive disorder,” says Ohmura.

    Source: Hokkaido University

  • New sensor uses MRI to detect light deep in the brain

    Newswise — CAMBRIDGE, MA — Using a specialized MRI sensor, MIT researchers have shown that they can detect light deep within tissues such as the brain.

    Imaging light in deep tissues is extremely difficult because as light travels into tissue, much of it is either absorbed or scattered. The MIT team overcame that obstacle by designing a sensor that converts light into a magnetic signal that can be detected by MRI (magnetic resonance imaging).

    This type of sensor could be used to map light emitted by optical fibers implanted in the brain, such as the fibers used to stimulate neurons during optogenetic experiments. With further development, it could also prove useful for monitoring patients who receive light-based therapies for cancer, the researchers say.

    “We can image the distribution of light in tissue, and that’s important because people who use light to stimulate tissue or to measure from tissue often don’t quite know where the light is going, where they’re stimulating, or where the light is coming from. Our tool can be used to address those unknowns,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

    Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Jacob Simon PhD ’21 and MIT postdoc Miriam Schwalm are the paper’s lead authors, and Johannes Morstein and Dirk Trauner of New York University are also authors of the paper.

    A light-sensitive probe

    Scientists have been using light to study living cells for hundreds of years, dating back to the late 1500s, when the light microscope was invented. This kind of microscopy allows researchers to peer inside cells and thin slices of tissue, but not deep inside an organism.

    “One of the persistent problems in using light, especially in the life sciences, is that it doesn’t do a very good job penetrating many materials,” Jasanoff says. “Biological materials absorb light and scatter light, and the combination of those things prevents us from using most types of optical imaging for anything that involves focusing in deep tissue.”

    To overcome that limitation, Jasanoff and his students decided to design a sensor that could transform light into a magnetic signal.

    “We wanted to create a magnetic sensor that responds to light locally, and therefore is not subject to absorbance or scattering. Then this light detector can be imaged using MRI,” he says.

    Jasanoff’s lab has previously developed MRI probes that can interact with a variety of molecules in the brain, including dopamine and calcium. When these probes bind to their targets, it affects the sensors’ magnetic interactions with the surrounding tissue, dimming or brightening the MRI signal.

    To make a light-sensitive MRI probe, the researchers decided to encase magnetic particles in a nanoparticle called a liposome. The liposomes used in this study are made from specialized light-sensitive lipids that Trauner had previously developed. When these lipids are exposed to a certain wavelength of light, the liposomes become more permeable to water, or “leaky.” This allows the magnetic particles inside to interact with water and generate a signal detectable by MRI.

    The particles, which the researchers called liposomal nanoparticle reporters (LisNR), can switch from permeable to impermeable depending on the type of light they’re exposed to. In this study, the researchers created particles that become leaky when exposed to ultraviolet light, and then become impermeable again when exposed to blue light. The researchers also showed that the particles could respond to other wavelengths of light.
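    To make that switching behaviour concrete, here is a toy sketch of the reported logic in Python. The wavelength cutoffs and signal labels are illustrative assumptions, since the article specifies only “ultraviolet” and “blue” light, not exact values.

    ```python
    # Toy model of a liposomal nanoparticle reporter (LisNR).
    # Wavelength cutoffs below are assumptions for illustration only.

    class LisNR:
        def __init__(self):
            self.permeable = False  # impermeable (baseline MRI signal) by default

        def illuminate(self, wavelength_nm: float) -> None:
            if wavelength_nm < 400:       # ultraviolet: lipids switch, liposome leaks
                self.permeable = True
            elif wavelength_nm < 500:     # blue: liposome reseals
                self.permeable = False
            # other wavelengths leave the state unchanged in this toy model

        def mri_signal(self) -> str:
            # water exchange with the enclosed magnetic particles alters the signal
            return "enhanced" if self.permeable else "baseline"

    probe = LisNR()
    probe.illuminate(365)       # UV pulse
    print(probe.mri_signal())   # -> "enhanced"
    probe.illuminate(470)       # blue pulse
    print(probe.mri_signal())   # -> "baseline"
    ```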

    “This paper shows a novel sensor to enable photon detection with MRI through the brain. This illuminating work introduces a new avenue to bridge photon- and proton-driven neuroimaging studies,” says Xin Yu, an assistant professor of radiology at Harvard Medical School, who was not involved in the study.

    Mapping light

    The researchers tested the sensors in the brains of rats — specifically, in a part of the brain called the striatum, which is involved in planning movement and responding to reward. After injecting the particles throughout the striatum, the researchers were able to map the distribution of light from an optical fiber implanted nearby.

    The fiber they used is similar to those used for optogenetic stimulation, so this kind of sensing could be useful to researchers who perform optogenetic experiments in the brain, Jasanoff says.

    “We don’t expect that everybody doing optogenetics will use this for every experiment — it’s more something that you would do once in a while, to see whether a paradigm that you’re using is really producing the profile of light that you think it should be,” Jasanoff says.

    In the future, this type of sensor could also be useful for monitoring patients receiving treatments that involve light, such as photodynamic therapy, which uses light from a laser or LED to kill cancer cells.

    The researchers are now working on similar probes that could be used to detect light emitted by luciferases, a family of glowing proteins that are often used in biological experiments. These proteins can be used to reveal whether a particular gene is activated or not, but currently they can only be imaged in superficial tissue or cells grown in a lab dish.

    Jasanoff also hopes to use the strategy used for the LisNR sensor to design MRI probes that can detect stimuli other than light, such as neurochemicals or other molecules found in the brain.

    “We think that the principle that we use to construct these sensors is quite broad and can be used for other purposes too,” he says.

    ###

    The research was funded by the National Institutes of Health, the G. Harold and Leyla Y. Mathers Foundation, a Friends of the McGovern Fellowship from the McGovern Institute for Brain Research, the MIT Neurobiological Engineering Training Program, and a Marie Curie Individual Fellowship from the European Commission.

    Massachusetts Institute of Technology (MIT)

  • Hunter-gatherer social ties spread pottery-making far and wide

    Hunter-gatherer social ties spread pottery-making far and wide


    Newswise — Analysis of more than 1,200 vessels from hunter-gatherer sites has shown that pottery-making techniques spread vast distances over a short period of time through social traditions being passed on.

    The team, which includes researchers from the University of York and the British Museum, analysed the remains of 1,226 pottery vessels from 156 hunter-gatherer sites across nine countries in Northern and Eastern Europe. They combined radiocarbon dating with data on the production and decoration of the ceramic vessels and with analysis of the remains of food found inside the pots.

    Their findings, published in the journal Nature Human Behaviour, suggest that pottery-making spread rapidly westwards from 5,900 BCE onwards and took only 300–400 years to advance over 3,000 km, equivalent to 250 km in a single generation.
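    As a quick sanity check of those rates (the 25–30-year generation length below is our assumption; the article does not state one):

    ```python
    # Back-of-the-envelope check of the spread rate quoted above.
    distance_km = 3000
    for years in (300, 400):
        km_per_year = distance_km / years
        print(f"{years} years: {km_per_year:.1f} km/yr, "
              f"~{25 * km_per_year:.0f}-{30 * km_per_year:.0f} km per generation")
    # 300 years: 10.0 km/yr, ~250-300 km per generation
    # 400 years: 7.5 km/yr, ~188-225 km per generation
    ```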

    Professor Oliver Craig, from the University of York’s Department of Archaeology, said: “Our analysis of the ways pots were designed and decorated as well as new radiocarbon dates suggests that knowledge of pottery spread through a process of cultural transmission.  

    “By this we mean that the activity spread by the exchange of ideas between groups of hunter-gatherers living nearby, rather than through migration of people or an expanding population, as we see for other key changes in human history such as the introduction of agriculture.”

    “That methods of pottery-making spread so far and so fast through the passing on of ideas is quite surprising. Specific knowledge may have been shared through marriages or at centres of aggregation, specific points in the landscape where groups of hunter-gatherers came together perhaps at certain times of the year.” 

    By studying traces of organic materials left in the pots, the team demonstrated that the pottery was used for cooking, so the ideas of pottery-making may have been spread through shared culinary traditions. 

    Carl Heron, from the British Museum, said: “We found evidence that the vessels were used for cooking a wide range of animals, fish and plants, and this variety suggests that the drivers for making the pottery were not in response to a particular need, such as detoxifying plants or processing fish, as has previously been suggested. 

    “We also found patterns suggesting that pottery use was transmitted along with knowledge of their manufacture and decoration. These can be seen as culinary traditions that were rapidly transmitted with the artefacts themselves.” 

    The world’s earliest pottery containers come from East Asia and may have spread rapidly westwards through Siberia, before being taken up by hunter-gatherer societies across Northern Europe, long before the arrival of farming.

    This research is funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme.

    University of York

  • Model analysis of atmospheric observations reveals methane leakage in North China

    Model analysis of atmospheric observations reveals methane leakage in North China


    1. Background

    Natural gas is a relatively clean-burning fossil fuel that causes less air pollution than coal and is widely used around the world. Recent studies have shown that natural gas leaks from production, supply-chain, and end-use facilities are a large source of atmospheric methane (CH4), and that this leakage budget is underestimated in many places by bottom-up inventories. CH4 is the second most important greenhouse gas (GHG) contributing to global warming after carbon dioxide (CO2), and its relatively short atmospheric lifetime makes reducing CH4 emissions a suitable target for rapid, achievable mitigation strategies under the Paris Agreement.

    Over the last decade, natural gas has become the fastest-growing fossil energy source in China due to the coal-to-gas government initiative that has been implemented to reduce air pollution and CO2 emissions. Natural gas consumption has increased dramatically from 108.5 billion standard cubic meters (bcm) (4% of primary energy consumption) in 2010 to a record level of 280 bcm (7.6% of primary energy consumption) in 2018. In addition, according to China’s energy plan, the share of primary energy from gas will keep increasing and is likely to reach 15% by 2030, while coal and oil consumption will decline. From 2010 to 2018, the length of gas supply pipelines in urban areas of China increased approximately three-fold from 298 to 842 thousand kilometers. However, CH4 leakage from those pipelines has not been actively reported, and there is limited publicly available data on upstream emissions and local distribution of natural gas emissions in China.

    2. Research Outline and Results

    In this study, we used nine years (2010–2018) of CH4 observations from the Greenhouse gases Observing SATellite “IBUKI” (GOSAT) and surface station data from the World Data Centre for Greenhouse Gases (WDCGG) to estimate CH4 emissions in different regions of China. GOSAT observes the column-averaged dry-air mole fractions of CH4 in the atmosphere, and the surface stations monitor CH4 concentrations near the surface. The observation data were used in simulations with the high-resolution inverse model NTFVAR (NIES-TM-FLEXPART-variational) to infer the surface flux of CH4 emissions. Inverse modelling optimizes prior flux estimates, constraining them until the simulated atmospheric concentrations agree acceptably with the observed ones.

    Figure 1 shows the model-estimated CH4 fluxes in four regions of China. The four regions, North China (NE), South China (SE), North-west China (NW), and the Qinghai-Tibetan Plateau (TP), vary with respect to climate, geographical features, types of agriculture, major economic activities, and CH4 emission sources. The model-estimated average CH4 emissions over the period 2010–2018 are 30.0±1.0 (average ± standard deviation) Tg CH4 yr⁻¹ from the SE region, 23.3±2.7 Tg CH4 yr⁻¹ from the NE region, 2.9±0.2 Tg CH4 yr⁻¹ from the NW region, and 1.7±0.1 Tg CH4 yr⁻¹ from the TP region. Emission trends varied across the regions over these nine years, with significant increases detected in the NE region and in China as a whole.

    We focused our analysis on the NE region, where natural gas production and consumption have increased dramatically and are likely one of the main contributors to the estimated increase in regional total CH4 emissions. The CH4 emissions from natural gas, including leakage from fuel extraction, processing, transport, and the end-use stage, were estimated using an approach that combined province-level emissions inventory data with published inverse model studies. The model-estimated total CH4 emissions and the estimated natural gas emissions both increased significantly during 2010–2018 (Figure 2). The natural gas lost through leakage constitutes a significant waste of energy and value: in 2018, for example, natural gas consumption in the NE region was 101.5 bcm, and the estimated total natural gas emissions were 3.2%–5.3% of regional consumption.
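    To put those percentages in perspective, a quick back-of-the-envelope calculation using only the figures stated above:

    ```python
    # Translate the quoted leakage rate into a volume for 2018.
    consumption_bcm = 101.5            # NE-region natural gas consumption in 2018
    leak_lo, leak_hi = 0.032, 0.053    # estimated emissions as a fraction of consumption
    print(f"estimated leaked volume: {consumption_bcm * leak_lo:.1f}"
          f"-{consumption_bcm * leak_hi:.1f} bcm in 2018")
    # estimated leaked volume: 3.2-5.4 bcm in 2018
    ```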

    Figure 3 shows the changes in estimated CH4 emissions from natural gas and the model-estimated total CH4 emissions for 2010–2018, compared to previous years, in the NE region. The year-over-year change in the model-estimated total CH4 emissions closely follows the changes in CH4 emissions from natural gas. In January 2016, a record cold wave hit the region, causing a sudden increase in natural gas use, and natural gas suppliers recorded an increase in natural gas loss (i.e., the difference between the amount of gas purchased and the amount of gas sold). The atmospheric observations captured these emission changes as well, as reflected in our inverse estimates (Figure 3). The analysis shows a strong correlation between trends in natural gas use and the increase in the atmospheric CH4 concentration over the NE region, indicating the ability of GOSAT to monitor variations in regional anthropogenic sources.

    3. Future Perspectives

    The findings of our study highlight that the increase in natural gas use threatens China’s carbon reduction efforts. The increase in CH4 leaks from natural gas production and the supply chain will adversely affect the interests of diverse stakeholders, despite the introduction of carbon reduction measures. Given that the large natural gas distribution pipelines span more than 900 thousand kilometers in China, natural gas leaks constitute a significant waste of energy and value. The year-over-year changes in regional emissions and trends were detected by satellite and surface observations in this study. In the future, additional observations using high-resolution satellites will help to more accurately quantify emissions and provide scientific directions for emission reduction measures. There is also a need to further detect and locate such leaks using advanced mobile platforms in order to effectively mitigate CH4 emissions in China and bring about economic, environmental, and health benefits.

    4. Data Availability

    GOSAT data used in this study are available from the GOSAT Data Archive Service https://data2.gosat.nies.go.jp/index_en.html

    In-situ methane observation data are archived on the WDCGG Global Network: https://gaw.kishou.go.jp/

    Emissions Database for Global Atmospheric Research (EDGAR) emission inventories are available at https://edgar.jrc.ec.europa.eu/

    Global Fire Assimilation System (GFAS) fire emissions data are available at https://www.ecmwf.int/en/forecasts/dataset/global-fire-assimilation-system

    Wetland emissions from the Vegetation Integrative SImulator for Trace gases (VISIT) model are available at https://www.nies.go.jp/doi/10.17595/20210521.001-e.html

    The NIES airborne and Japan-Russia Siberian Tall Tower Inland Observation Network (JR-STATION) data are available at https://db.cger.nies.go.jp/ged/en/index.html

    The Japanese 55-year Reanalysis (JRA-55) data from the Japan Meteorological Agency (JMA) are available at https://search.diasjp.net/en/dataset/JRA55

    5. Supplementary Information

    ○ Greenhouse gases Observing SATellite “IBUKI” (GOSAT)

    The Greenhouse Gases Observing Satellite “IBUKI” (GOSAT) is the world’s first spacecraft to monitor the concentrations of the two major GHGs CO2 and CH4 from space. NIES has promoted the GOSAT series projects for GHG observation from space, together with the Ministry of the Environment, Japan (MOE) and the Japan Aerospace Exploration Agency (JAXA). GOSAT (IBUKI) is the first satellite in the series and has been observing column-averaged concentrations of CO2 and CH4 for more than 13 years since its launch in 2009. The second satellite, GOSAT-2 (IBUKI-2) was launched in 2018 and started observing carbon monoxide in addition to CO2 and CH4. Furthermore, the third satellite, Global Observing SATellite for Greenhouse gases and Water cycle (GOSAT-GW) is under development and due for launch in Japanese fiscal year 2023.

    ○ Lifetime of methane in the atmosphere

    Methane is the second most important well-mixed GHG contributing to human-induced climate change after CO2. The lifetime of CH4 in the atmosphere refers to the time that CH4 stays in the air after being emitted from a variety of sources. CH4 is removed from the atmosphere mostly by chemical reactions. The atmospheric lifetime of CH4 is 10 ± 2 years, much shorter than that of CO2 (approximately 5 to 200 years) (IPCC, 2013).

    ○ Methane emission sources

    Methane is emitted from a variety of anthropogenic and natural sources. Approximately 60% of all CH4 emissions come from anthropogenic sources, such as agricultural activities, waste treatment, oil and natural gas systems, coal mining, stationary and mobile combustion, and certain industrial processes. Natural emissions include wetlands, freshwater bodies such as lakes and rivers, and geological sources such as terrestrial and marine seeps and volcanoes. Other smaller sources include ruminant wild animals, termites, hydrates and permafrost.

    ○ Underestimation of methane emissions from oil and gas using bottom-up inventories

    Methane can leak into the atmosphere from upstream and downstream natural gas operations (i.e., extraction and gathering, processing, transmission and storage, and distribution) and from end-use combustion. Atmospheric measurement studies have shown that a large amount of the CH4 emitted by oil and gas production is unaccounted for in bottom-up inventories. Using high-resolution satellite observations, Zhang et al. (2020) estimated a leakage equivalent to 3.7% (~60% higher than the national average leakage rate) of all the gas extracted from the largest oil-producing basin in the United States. Chan et al. (2020) reported eight-year estimates of CH4 emissions from oil and gas operations in western Canada and found that they were nearly twice those reported in inventories. Weller et al. (2020) used an advanced mobile leak detection (AMLD) platform combined with GIS information on utility pipelines to estimate CH4 leakage from the pipelines of local distribution systems in the United States. They found that the leakage from those pipelines was approximately five times greater than that reported in inventories compiled from self-reported utility leakage data.

    ○ High-resolution inverse model NIES-TM-FLEXPART-variational (NTFVAR)

    Inverse modeling is an important and essential method for estimating GHG emissions. It uses atmospheric observation data as constraints in atmospheric transport models to optimize bottom-up emission inventories (prior fluxes).

    The NIES-TM-FLEXPART-variational (NTFVAR) global inverse model was developed by Dr. Shamil Maksyutov’s group at NIES. NTFVAR couples a Eulerian three-dimensional transport model, the National Institute for Environmental Studies Transport Model (NIES-TM) v08.1i, with a Lagrangian model, FLEXPART v.8.0. The transport model is driven by JRA-55 meteorological data from JMA. The prior fluxes include gridded anthropogenic emissions from the EDGAR database, covering energy, agriculture, waste and other sectors; wetland emissions estimated by the VISIT model; biomass burning emissions estimated by GFAS; and climatological emissions from oceanic, geological, and termite sources. The inverse modeling problem is formulated and solved to find the optimal corrections to the prior fluxes that minimize the mismatches between observations and modelled concentrations. Variational optimization is applied to obtain flux corrections, scaled by prior uncertainty fields, at a resolution of 0.1° × 0.1° with bi-weekly time steps. The variational inversion scheme is combined with the high-resolution variant of the transport model and its adjoint described by Maksyutov et al. (2021).
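    For readers unfamiliar with variational inversion, the sketch below shows the standard quadratic cost function that such schemes minimize and solves a toy instance with an off-the-shelf optimizer. The dimensions, the random stand-in transport operator H, and the diagonal covariances are illustrative assumptions, not the actual NTFVAR implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy variational flux inversion: find flux corrections x that minimize
    #   J(x) = 0.5*(x - xb)^T B^-1 (x - xb) + 0.5*(H x - y)^T R^-1 (H x - y)
    # where H maps fluxes to concentrations (the transport model's role),
    # y are observations, B and R are prior and observation error covariances.
    rng = np.random.default_rng(0)
    n_flux, n_obs = 50, 20
    H = rng.normal(size=(n_obs, n_flux))   # stand-in for the transport operator
    xb = np.zeros(n_flux)                  # prior flux correction (none)
    y = rng.normal(size=n_obs)             # observed-minus-simulated concentrations
    B_inv = np.eye(n_flux) / 0.5**2        # prior uncertainty (diagonal, illustrative)
    R_inv = np.eye(n_obs) / 0.1**2         # observation uncertainty (diagonal)

    def cost_and_grad(x):
        dx, dy = x - xb, H @ x - y
        J = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
        grad = B_inv @ dx + H.T @ R_inv @ dy   # the H.T term is what the adjoint supplies
        return J, grad

    res = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B")
    print(f"posterior flux correction found, final cost J = {res.fun:.2f}")
    ```

    In the real system the matrix-vector products with H and H.T are carried out by running the coupled transport model and its adjoint rather than by explicit matrices.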

    References:

    Chan, E. et al. Eight-Year Estimates of Methane Emissions from Oil and Gas Operations in Western Canada Are Nearly Twice Those Reported in Inventories. Environmental Science & Technology 54, 14899-14909, doi:10.1021/acs.est.0c04117 (2020).

    IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T. F., Qin, D., et al. (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

    Maksyutov, S. et al. Technical note: A high-resolution inverse modelling technique for estimating surface CO2 fluxes based on the NIES-TM–FLEXPART coupled transport model and its adjoint. Atmospheric Chemistry and Physics 21, 1245–1266, doi:10.5194/acp-21-1245-2021 (2021).

    Weller, Z., Hamburg, S. & von Fischer, J. A National Estimate of Methane Leakage from Pipeline Mains in Natural Gas Local Distribution Systems. Environmental Science & Technology 54, 8958-8967, doi:10.1021/acs.est.0c00437 (2020).

    Zhang, Y. et al. Quantifying methane emissions from the largest oil-producing basin in the United States from space. Science Advances 6, doi:10.1126/sciadv.aaz5120 (2020).

    National Institute for Environmental Studies

  • Ultrafast and ultra-sensitive protein detection method allows for ultra-early disease diagnoses

    Ultrafast and ultra-sensitive protein detection method allows for ultra-early disease diagnoses


    Newswise — Osaka, Japan – Protein detection based on antigen–antibody reaction is vital in early diagnosis of a wide range of diseases. How to effectively detect proteins, however, has frequently bedeviled researchers. Osaka Metropolitan University scientists have discovered a new principle underlying light-induced acceleration of antigen–antibody reaction, allowing for simple, ultrafast, and highly sensitive detection of proteins. Their findings were published in Communications Biology.

    “The antigen–antibody reaction is a biochemical reaction that plays a crucial role in immunity, the body’s defense function,” explained lead researcher Professor Takuya Iida, Director of the Research Institute for Light-induced Acceleration System at Osaka Metropolitan University. Methods to analyze trace amounts of proteins based on antigen–antibody reaction enable diagnosis at an early stage of many diseases, including cancer, dementia, and microbial infections. However, such methods either have limited sensitivity or require complex and time-consuming processing to allow antigen–antibody reactions to occur.

    Aiming to accelerate antigen–antibody reactions, the researchers introduced target proteins and probe particles, modified with antibodies that selectively bind to the target proteins, into a channel as narrow as a human hair or an artery, and then irradiated the channel with infrared laser light for 3 minutes. This made it possible to carry out detection at a sensitivity approximately 100 times higher than that of conventional protein testing.

    The researchers achieved, for the first time, the rapid measurement of trace amounts of target proteins on the order of tens of attograms (ag = 10⁻¹⁸ g; one quintillionth of a gram) after only 3 minutes of laser irradiation.

    The study results demonstrate that rapid and highly sensitive detection can be achieved by condensing proteins, through the simple operation of confining them in a small space and irradiating them with a laser to accelerate the reaction. These findings will facilitate the detection of disease-related substances from a small amount of body fluid, such as a single drop of blood, and will assist in the discovery of novel disease markers, potentially leading to breakthroughs in the development of systems for ultra-early diagnosis of various diseases.

    “In an interdisciplinary collaboration beyond physics, chemistry, and biology, we uncovered a new principle underlying the control of antigen–antibody reaction by optical force, or light-induced force,” concluded Professor Iida. “I hope that the advantage of being able to measure trace markers with high sensitivity and speed by simple laser irradiation will aid in ultra-early diagnosis.”

     

    ###

    About OMU

    Osaka Metropolitan University is a new public university established by a merger between Osaka City University and Osaka Prefecture University in April 2022. For more science news, see https://www.omu.ac.jp/en/info/research-news/, and follow @OsakaMetUniv_en, or search #OMUScience.

    Osaka City University

  • New study models the transmission of foreshock waves towards Earth

    New study models the transmission of foreshock waves towards Earth


    Newswise — An international team of scientists, led by Lucile Turc, an Academy Research Fellow at the University of Helsinki, and supported by the International Space Science Institute in Bern, has spent three years studying the propagation of electromagnetic waves in near-Earth space. The team studied waves in the foreshock, the region where the solar wind collides with Earth’s magnetic field, and how these waves are transmitted to the other side of the shock. The results of the study are now published in Nature Physics.

    “How the waves would survive passing through the shock has remained a mystery since the waves were first discovered in the 1970s. No evidence of those waves has ever been found on the other side of the shock”, says Turc.

    The team used a cutting-edge computer model, Vlasiator, developed at the University of Helsinki by a group led by Professor Minna Palmroth, to recreate and understand the physical processes at play in the wave transmission. A careful analysis of the simulation revealed the presence of waves on the other side of the shock with properties almost identical to those in the foreshock.

    “Once it was known what and where to look for, clear signatures of the waves were found in satellite data, confirming the numerical results”, says Lucile Turc.

    The waves in the foreshock can enter the Earth’s magnetic field

    Around our planet is a magnetic bubble, the magnetosphere, which shields us from the solar wind, a stream of charged particles coming from the Sun. Electromagnetic waves, appearing as small oscillations of the Earth’s magnetic field, are frequently recorded by scientific observatories in space and on the ground. These waves can be caused by the impact of the changing solar wind, or they can originate outside the magnetosphere.

    The electromagnetic waves play an important role in creating adverse space weather around our planet: they can for example accelerate particles to high energies, which can then damage spacecraft electronics, and cause these particles to fall into the atmosphere.

    On the side of Earth facing the Sun, scientific observatories frequently record oscillations at the same period as those waves that form ahead of the Earth’s magnetosphere, singing a clear magnetic song in a region of space called the foreshock.

    This has led space scientists to think that there is a connection between the two, and that the waves in the foreshock can enter the Earth’s magnetosphere and travel all the way to the Earth’s surface. However, one major obstacle lies in their way: the waves must cross the shock before reaching the magnetosphere.

    “At first, we thought that the initial theory proposed in the 1970s was correct: the waves could cross the shock unchanged. But there was an inconsistency in the wave properties that this theory could not reconcile, so we investigated further”, says Turc.

    “Eventually, it became clear that things were much more complicated than it seemed. The waves we saw behind the shock were not the same as those in the foreshock, but new waves created at the shock by the periodic impact of foreshock waves.”

    When the solar wind flows through the shock, it is compressed and heated. The shock strength determines how much compression and heating take place. Turc and her colleagues showed that foreshock waves are able to tune the shock, making it alternately stronger or weaker as wave troughs or crests arrive at the shock. As a result, the solar wind behind the shock changes periodically and creates new waves, in concert with the foreshock waves.

    The numerical model also pinpointed that these waves could only be detected in a narrow region behind the shock, and that they could easily be hidden by the turbulence in this region. This likely explains why they had not been observed before.

    While the waves originating from the foreshock only play a limited role in space weather at Earth, they are of great importance to understand the fundamental physics of our universe.

    University of Helsinki

  • Shedding light on the origin of complex life forms

    Shedding light on the origin of complex life forms


    How did the complex organisms on Earth arise? This is one of the big open questions in biology. A collaboration between the working groups of Christa Schleper at the University of Vienna and Martin Pilhofer at ETH Zurich has come a step closer to the answer. The researchers succeeded in cultivating a special archaeon and characterizing it more precisely using microscopic methods. This member of the Asgard archaea exhibits unique cellular characteristics and may represent an evolutionary “missing link” to more complex life forms such as animals and plants. The study was recently published in the journal “Nature”.

    All life forms on Earth are divided into three major domains: eukaryotes, bacteria and archaea. Eukaryotes include the groups of animals, plants and fungi. Their cells are usually much larger and, at first glance, more complex than the cells of bacteria and archaea. The genetic material of eukaryotes, for example, is packaged in a cell nucleus, and the cells also contain a large number of other compartments. Cell shape and transport within the eukaryotic cell also rely on an extensive cytoskeleton. But how did the evolutionary leap to such complex eukaryotic cells come about? Most current models assume that archaea and bacteria played a central role in the evolution of eukaryotes. A eukaryotic primordial cell is believed to have evolved from a close symbiosis between archaea and bacteria about two billion years ago. In 2015, genomic studies of deep-sea environmental samples revealed the group of so-called “Asgard archaea”, which represent the closest relatives of eukaryotes in the tree of life. The first images of Asgard cells, from enrichment cultures, were published by a Japanese group in 2020.

    Asgard archaea cultivated from marine sediments

    Christa Schleper’s working group at the University of Vienna has now succeeded for the first time in cultivating a representative of this group in higher concentrations. It comes from marine sediments on the coast of Piran, Slovenia, but is also an inhabitant of Vienna, for example in the bank sediments of the Danube. Because of its growth to high cell densities, this representative can be studied particularly well. “It was very tricky and laborious to obtain this extremely sensitive organism in a stable culture in the laboratory,” reports Thiago Rodrigues-Oliveira, postdoc in the Archaea working group at the University of Vienna and one of the first authors of the study.

    Asgard archaea have a complex cell shape with an extensive cytoskeleton

    The Viennese group’s remarkable success in cultivating a highly enriched Asgard representative finally allowed a more detailed examination of the cells by microscopy. The ETH researchers in Martin Pilhofer’s group used a modern cryo-electron microscope to take pictures of shock-frozen cells. “This method enables a three-dimensional insight into the internal cellular structures,” explains Pilhofer. “The cells consist of round cell bodies with thin, sometimes very long cell extensions. These tentacle-like structures sometimes even seem to connect different cell bodies with each other,” says Florian Wollweber, who spent months tracking down the cells under the microscope. The cells also contain an extensive network of actin filaments, previously thought to be unique to eukaryotic cells. This suggests that extensive cytoskeletal structures arose in archaea before the appearance of the first eukaryotes, and it fuels evolutionary theories around this important and spectacular event in the history of life.

    Future insights through the new model organism

    “Our new organism, called ‘Lokiarchaeum ossiferum’, has great potential to provide further groundbreaking insights into the early evolution of eukaryotes,” comments microbiologist Christa Schleper. “It has taken six long years to obtain a stable and highly enriched culture, but now we can use this experience to perform many biochemical studies and to cultivate other Asgard archaea as well.” In addition, the scientists can now use the new imaging methods developed at ETH to investigate, for example, the close interactions between Asgard archaea and their bacterial partners. Basic cell biological processes such as cell division can also be studied in the future in order to shed light on the evolutionary origin of these mechanisms in eukaryotes.

    This text was published in a similar form by ETH Zurich.

    University of Vienna

  • Scientists use machine learning to get an unprecedented view of small molecules

    Scientists use machine learning to get an unprecedented view of small molecules


    Newswise — A new machine learning model will help scientists identify small molecules, with applications in medicine, drug discovery and environmental chemistry. Developed by researchers at Aalto University and the University of Luxembourg, the model was trained with data from dozens of laboratories to become one of the most accurate tools for identifying small molecules.

    Thousands of different small molecules, known as metabolites, transport energy and transmit cellular information throughout the human body. Because they are so small, metabolites are difficult to distinguish from each other in a blood sample analysis – but identifying these molecules is important to understand how exercise, nutrition, alcohol use and metabolic disorders affect wellbeing.

    Metabolites are normally identified by analysing their mass and retention time with a separation technique called liquid chromatography followed by mass spectrometry. This technique first separates metabolites by running the sample through a column, which results in different flow rates – or retention times – through the measurement device. Mass spectrometry is then used to fine-tune the identification process by sorting metabolites according to their mass. Researchers can also break metabolites into smaller pieces to analyse their composition using a technique called tandem mass spectrometry.

    ‘Even the best methods can’t identify more than 40% of the molecules in samples without making some additional assumptions about the candidate molecules,’ says Professor Juho Rousu of Aalto University.

    Now, Rousu’s group has developed a novel machine learning model to identify small molecules. It was recently published in Nature Machine Intelligence.

    ‘This new open-source model offers the whole research community an enriched view of small molecules. It will help research into methods to identify metabolic disorders, such as diabetes, or even cancer,’ says Rousu.

    The new approach elegantly sidesteps one of the challenges facing conventional methods. Because the retention times of molecules vary from lab to lab, data cannot be compared between labs. Eric Bach, a doctoral student at Aalto, came up with an alternative during his PhD research that solved the problem.

    ‘Our research shows that while absolute retention times may vary, the retention order is stable across measurements by different labs,’ Bach explains. ‘This allowed us to merge all publicly available data on metabolites for the first time ever and feed it into our machine learning model.’
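    To illustrate the point, here is a minimal sketch, with invented numbers rather than the study’s data or code, showing how absolute retention times can differ between labs while the retention order is preserved:

    ```python
    # Hypothetical retention times (minutes) for the same five metabolites
    # measured in two labs with different columns and flow conditions.
    lab_a = {"m1": 1.2, "m2": 2.8, "m3": 3.1, "m4": 5.0, "m5": 7.4}
    lab_b = {"m1": 0.9, "m2": 2.1, "m3": 2.6, "m4": 4.4, "m5": 6.0}

    def retention_order(times):
        # rank molecules by retention time; the order, not the value, is the feature
        return [m for m, _ in sorted(times.items(), key=lambda kv: kv[1])]

    print(retention_order(lab_a) == retention_order(lab_b))  # True: orders agree
    # Agreement between orders (e.g., measured with Kendall's tau) is what lets
    # data from many labs be pooled even though the absolute times disagree.
    ```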

    With the incorporation of data from dozens of laboratories around the globe, the machine learning model is accurate enough to distinguish between mirror image molecules, known as stereochemical variants. So far, identification tools have not been able to tell stereochemical variants apart, and the new capability is expected to open up new avenues in drug design and other fields.

    ‘The fact that using stereochemistry improved the identification performance is a revelation for all developers of metabolite identification methods,’ says Emma Schymanski, associate professor at the Luxembourg Centre for Systems Biomedicine (LCSB) of the University of Luxembourg. ‘This method could also be used to help identify and trace micropollutants in the environment or characterise new metabolites in plant cells.’

    Aalto University

  • London Underground polluted with metallic particles small enough to enter human bloodstream

    London Underground polluted with metallic particles small enough to enter human bloodstream


    Newswise — The London Underground is polluted with ultrafine metallic particles small enough to end up in the human bloodstream, according to University of Cambridge researchers. These particles are so small that they are likely being underestimated in surveys of pollution in the world’s oldest metro system.

    The researchers carried out a new type of pollution analysis, using magnetism to study dust samples from Underground ticket halls, platforms and operator cabins.

    The team found that the samples contained high levels of a type of iron oxide called maghemite. Since it takes time for iron to oxidise into maghemite, the results suggest that pollution particles are suspended for long periods, due to poor ventilation throughout the Underground, particularly on station platforms.

    Some of the particles are as small as five nanometres in diameter: small enough to be inhaled and end up in the bloodstream, but too small to be captured by typical methods of pollution monitoring. However, it is not clear whether these particles pose a health risk.

    Other studies have looked at overall pollution levels on the Underground and the associated health risks, but this is the first time that the size and type of particles has been analysed in detail. The researchers suggest that periodic removal of dust from Underground tunnels, as well as magnetic monitoring of pollution levels, could improve air quality throughout the network. Their results are reported in the journal Scientific Reports.

    The London Underground carries five million passengers per day. Multiple studies have shown that air pollution levels on the Underground are higher than those in London more broadly, and beyond the World Health Organization’s (WHO) defined limits. Earlier studies have also suggested that most of the particulate matter on the Underground is generated as the wheels, tracks and brakes grind against one another, throwing up tiny, iron-rich particles.

    “Since most of these air pollution particles are metallic, the Underground is an ideal place to test whether magnetism can be an effective way to monitor pollution,” said Professor Richard Harrison from Cambridge’s Department of Earth Sciences, the paper’s senior author. “Normally, we study magnetism as it relates to planets, but we decided to explore how those techniques could be applied to different areas, including air pollution.”

    Pollution levels are normally monitored using standard air filters, but these cannot capture ultrafine particles, and they do not detect what kinds of particles are contained within the particulate matter.

    “I started studying environmental magnetism as part of my PhD, looking at whether low-cost monitoring techniques could be used to characterise pollution levels and sources,” said lead author Hassan Sheikh from Cambridge’s Department of Earth Sciences. “The Underground is a well-defined micro-environment, so it’s an ideal place to do this type of study.”

    Working with colleagues from Cambridge’s Department of Materials Science and Metallurgy, Sheikh and Harrison analysed 39 dust samples from the London Underground, provided by Transport for London (TfL). The samples were collected in 2019 and 2021 from platforms, ticket halls, and train operator cabins on the Piccadilly, Northern, Central, Bakerloo, Victoria, District and Jubilee lines. The sampling included major stations such as King’s Cross St Pancras, Paddington, and Oxford Circus.

    The researchers used magnetic fingerprinting, 3D imaging and nanoscale microscopy to characterise the structure, size, shape, composition and magnetic properties of particles contained in the samples. Earlier studies have shown that 50% of the pollution particles in the Underground are iron-rich, but the Cambridge team were able to look in much closer detail. They found a high abundance of maghemite particles, ranging in diameter from five to 500 nanometres, and with an average diameter of 10 nanometres. Some particles formed larger clusters with diameters between 100 and 2,000 nanometres.

    “The abundance of these very fine particles was surprising,” said Sheikh. “The magnetic properties of iron oxides fundamentally change as the particle size changes. In addition, the size range where those changes happen is the same as where air pollution becomes a health risk.”

    While the researchers did not look at whether these maghemite particles pose a direct health risk, they say that their characterisation methods could be useful in future studies.

    “If you’re going to answer the question of whether these particles are bad for your health, you first need to know what the particles are made of and what their properties are,” said Sheikh.

    “Our techniques give a much more refined picture of pollution in the Underground,” said Harrison. “We can measure particles that are small enough to be inhaled and enter the bloodstream. Typical pollution monitoring doesn’t give you a good picture of the very small stuff.”

    The researchers say that due to poor ventilation in the Underground, iron-rich dust can be resuspended in the air when trains arrive at platforms, making the air quality on platforms worse than in ticket halls or in operator cabins.

    Given the magnetic nature of the resuspended dust, the researchers suggest that efficient removal measures might include magnetic filters in ventilation systems, cleaning of the tracks and tunnel walls, or screen doors between platforms and trains.

    The research was supported in part by the European Union, the Cambridge Trust and Selwyn College, Cambridge.

    University of Cambridge

  • Highest metal concentrations in US public water systems found among Hispanic/Latino and American Indian communities

    Highest metal concentrations in US public water systems found among Hispanic/Latino and American Indian communities


    Newswise — December 14, 2022 – Significantly higher arsenic and uranium concentrations in public drinking water have been linked to communities with higher proportions of Hispanic/Latino, American Indian/Alaskan Native, and non-Hispanic Black residents, according to a new study at Columbia University Mailman School of Public Health. Arsenic and uranium concentrations were higher for Hispanic/Latino and American Indian communities nationwide, while higher proportions of non-Hispanic Black residents were associated with higher arsenic and uranium only in the West and Midwest, the regions where water arsenic and uranium concentrations are highest.

    Until now, studies evaluating these associations were not possible because nationwide contaminant concentration estimates were not publicly available for the majority of public water systems. The findings are published online in the journal Nature Communications.

    In many U.S. communities, drinking water is a significant source of exposure to arsenic and uranium, which are major environmental exposures associated with cancer, cardiovascular disease and other adverse health outcomes. The EPA sets a maximum contaminant level (MCL) of 30 µg/L for uranium and 10 µg/L for arsenic. However, EPA’s non-enforceable maximum contaminant level goal for both is 0 µg/L because there is no known safe level of exposure to either.

    “Our findings are particularly relevant to public health because there is no safe level of exposure to inorganic arsenic and uranium,” noted Irene Martinez-Morata, MD, PhD candidate in Environmental Health Sciences at Columbia University Mailman School of Public Health and first author. “These findings support that inequalities in public water contaminant exposures are more severe in regions with more residents from communities of color relying on public drinking water and higher concentrations of specific contaminants in source water.”

    “All communities, regardless of racial/ethnic makeup, deserve access to clean, high quality drinking water,” said Anne Nigra, PhD, Assistant Professor of Environmental Health Sciences at Columbia University Mailman School of Public Health. “Our analysis indicates that this is not currently the case in the US. Even after accounting for socioeconomic status, communities of color have higher arsenic and uranium in their regulated public drinking water.”

    The researchers used county-level, population-weighted concentration estimates of arsenic and uranium concentrations in public water systems across the U.S. — estimates based on the most recent publicly available nationwide monitoring data gathered by the U.S. Environmental Protection Agency. Water metal concentrations were available for a total of 2,585 counties for arsenic and 1,174 counties for uranium. Parallel analyses were conducted for each of these racial and ethnic groups: non-Hispanic Black, American Indian/Alaskan Native, Hispanic/Latino, and non-Hispanic White.
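    As a rough illustration of a population-weighted, county-level estimate of the kind described above, here is a minimal sketch with invented numbers; only the EPA arsenic limit quoted earlier is taken from the article:

    ```python
    # Hypothetical example: three community water systems in one county, with
    # arsenic concentrations (ug/L) weighted by the population each system serves.
    systems = [
        {"population": 12_000, "arsenic_ugL": 2.0},
        {"population": 3_000,  "arsenic_ugL": 11.5},
        {"population": 5_000,  "arsenic_ugL": 6.0},
    ]

    total_pop = sum(s["population"] for s in systems)
    county_avg = sum(s["population"] * s["arsenic_ugL"] for s in systems) / total_pop
    print(f"county population-weighted arsenic: {county_avg:.1f} ug/L")  # 4.4 ug/L

    # Compare against the enforceable EPA limit quoted earlier in the article.
    ARSENIC_MCL_UGL = 10.0
    print("exceeds MCL" if county_avg > ARSENIC_MCL_UGL else "below MCL")
    ```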

    “The quality of your drinking water should not be related to the racial/ethnic makeup of your community,” remarks Martinez-Morata. “Our findings can advance environmental justice initiatives by informing federal regulatory action and financial and technical support to protect communities of color.”

    An interactive map of county-level CWS metal concentrations is also available at: https://msph.shinyapps.io/drinking-water-dashboard/

    Co-authors are Dustin Duncan, Maya Spaur, Kevin Patterson, Seth Prins, and Ana Navas-Acien, Columbia Mailman School; Benjamin C. Bostick, Columbia Climate School; Otakuye Conroy-Ben, Arizona State University; and Miranda Jones, Johns Hopkins University.

    The study was supported by the National Institute of Dental & Craniofacial Research (DP5OD031849) and the National Institute of Environmental Health Sciences (2T32ES007322, P30ES009089, P42ES033719), and by a fellowship from La Caixa Foundation (ID100010434).

    Columbia University Mailman School of Public Health

    Founded in 1922, the Columbia University Mailman School of Public Health pursues an agenda of research, education, and service to address the critical and complex public health issues affecting New Yorkers, the nation and the world. The Columbia Mailman School is the fourth largest recipient of NIH grants among schools of public health. Its nearly 300 multi-disciplinary faculty members work in more than 100 countries around the world, addressing such issues as preventing infectious and chronic diseases, environmental health, maternal and child health, health policy, climate change and health, and public health preparedness. It is a leader in public health education with more than 1,300 graduate students from 55 nations pursuing a variety of master’s and doctoral degree programs. The Columbia Mailman School is also home to numerous world-renowned research centers, including ICAP and the Center for Infection and Immunity. For more information, please visit www.mailman.columbia.edu.

     

     

    Columbia University, Mailman School of Public Health