ReportWire

  • Princeton University awards plasma physics graduate student Suying Jin a highly selective honorific fellowship

    Jin expressed deep appreciation on receiving the fellowship. “I feel truly honored, and I’m fortunate to be at an institution that lifts up its students in this way,” she said. “I am also deeply grateful for all the support, academic and otherwise, that has made this possible.”

    The Program in Plasma Physics is based at the Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) and is a graduate program within the Department of Astrophysical Sciences at Princeton University. Graduates of the program have shaped the field of plasma physics in recent decades, working in academia, national laboratories, industry and beyond.

    Spontaneously arising order

    Jin’s dissertation investigates the challenging question of how plasmas self-organize in the presence of magnetic fields. “You see it happening all the time, everywhere in the universe, where you have order spontaneously arising from turbulence or chaos,” she said. “I like to go after things that defy intuition, and much about the mechanism by which this self-organization occurs remains mysterious.”

    When her advisor, principal research physicist Ilya Dodin, offered Jin several thesis topics to choose from, “Suying fearlessly chose the most challenging project over low-hanging fruits,” Dodin said. “She felt that although immediate rewards were not to be expected, the results of that project would be more important in the long run. I have much respect for that attitude,” he said. “Suying is an outstanding researcher and a classic role model who strongly deserves a Princeton honorific fellowship.”

    Jin traces her passionate interest in plasma science to her preparation for a final exam at the University of California, Los Angeles (UCLA), where she graduated in physics with honors in 2018. “I was working my way through an electrodynamics textbook, and I came across this problem that introduced me to the whole idea of plasma,” she said. “It was my first time thinking about what would happen if you had a bunch of charged particles together and it seemed like anything would be possible in a medium like that.”

    Basic Science

    While her thesis topic “is basic science and not fusion focused,” she said, “ultimately, I think the fusion effort will benefit greatly from just fundamental plasma research. There’s a lot we still need to understand about plasmas, period.”

    Her dedication to learning extends to teaching, which she has pursued as a teaching assistant at the graduate and undergraduate levels. She’s taught in Dodin’s graduate class in plasma waves, where “she was very proactive and did a great job,” he recalls. She also helped teach an undergraduate course in fusion and fission that has expanded her interest in real-world problems.

    Her research has led to frequent peer-reviewed publications, including five papers as a first author and two as a co-author. In addition, she shares a patent disclosure with two PPPL physicists.

    Outside the classroom, Jin has been an active participant in plasma programs. She was a cofounder of Princeton Women in Plasma Physics (PWiPP), whose mission includes promoting “a supportive community for women and gender minorities in plasma physics at Princeton.” She has lectured at plasma physics workshops and been a panelist and discussion leader at a local conference for undergraduate women in physics.

    Tae Kwon Do

    When not deeply engaged in plasma physics, Jin pursues long-time hobbies including the Korean martial art Tae Kwon Do, in which she holds a black belt and has practiced for 15 years. She also enjoys cooking and playing the piano.

    Looking ahead, Jin says she would prefer a teaching job to a purely research position and sees herself “continuing down the path of academia.” “I’ve had such fantastic mentors from day one when I entered this field, and I would really like to work with students to pass that mentorship along.”

    The Program in Plasma Physics has graduated more than 300 students since it began in 1959.
    In an environment that, over the past few decades, has seen enormous changes in the fields of plasma physics and controlled fusion, the program has consistently focused on fundamentals in physics and mathematics and on intense exposure to contemporary experimental and theoretical research in plasma physics.

    Princeton Plasma Physics Laboratory

  • Zeroing in on a Fundamental Property of the Proton’s Internal Dynamics

    The Science

    Inside the proton are elementary particles called quarks. Quarks and protons have an intrinsic angular momentum called spin. Spin can point in different directions. When it is perpendicular to the proton’s momentum, it is called a transverse spin. Just like the proton carries an electric charge, it also has another fundamental charge called the tensor charge. The tensor charge is the net transverse spin of quarks in a proton with transverse spin. The only way to obtain the tensor charge from experimental data is using the theory of quantum chromodynamics (QCD) to extract the “transversity” function. This universal function encodes the difference between the number of quarks with their spin aligned and anti-aligned to the proton’s spin when it is in a transverse direction. Using state-of-the-art data science techniques, researchers recently made the most precise empirical determination of the tensor charge.
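The tensor charge described above has a compact standard definition. As a sketch in textbook QCD notation (these symbols are standard conventions, not quoted from the study itself), the tensor charge of a quark flavor q is the first moment of the transversity function h₁, and lattice QCD typically reports the isovector combination g_T:

```latex
% Tensor charge \delta q as the first moment of the transversity
% function h_1 (standard textbook definition; notation is assumed,
% not taken from the article):
\delta q \;=\; \int_0^1 \mathrm{d}x \,\bigl[\, h_1^{q}(x) - h_1^{\bar q}(x) \,\bigr],
\qquad
g_T \;=\; \delta u \;-\; \delta d .
```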

    The Impact

    Due to the phenomenon known as “confinement,” quarks are always bound in the proton or other hadrons (particles with multiple quarks). The challenge is to connect the theory of quark interactions (QCD) to experimental measurements of high-energy collisions involving hadrons. In this study, researchers used a complete collection of transverse-spin data from electron-positron, electron-proton, and proton-proton scattering in the first global analysis of its kind. They employed this data to make the most precise known empirical calculation of the tensor charge. Scientists need a precise and accurate determination of the proton’s tensor charge to understand the proton’s internal structure and the dynamics of QCD strong interactions. This information is also very important in searches for new physics.

    Summary

    Researchers from the Coordinated Theoretical Approach to Transverse Momentum Dependent Hadron Structure in QCD Topical Collaboration (TMD Collaboration), working in conjunction with the Thomas Jefferson National Accelerator Facility (Jefferson Lab) Angular Momentum Collaboration (JAM Collaboration), analyzed data from a wide range of experiments where protons and/or quarks were transversely polarized. This allowed for the most precise empirical determination of the proton’s tensor charge. The tensor charge is not only a fundamental property of the proton but also needed in searches for new physics. The results were then compared to computations of the proton’s tensor charge by lattice QCD, which simulates the proton’s structure on a supercomputer. After about a decade of results showing disagreement between empirical methods and lattice QCD for the proton’s tensor charge, researchers for the first time found agreement between the two.

    The empirical study was performed using QCD theory and state-of-the-art numerical methods. A crucial part of the analysis was the utilization of data from electron-positron, electron-proton, and proton-proton scattering. This opens a new frontier in QCD global analyses to simultaneously include all possible measurements, like those from the future Electron-Ion Collider and Jefferson Lab 12 GeV, to continue to increase the precision and accuracy of extracting the proton’s tensor charge.

    Funding

    This work was supported by the Department of Energy Office of Science, Nuclear Physics program under the Coordinated Theoretical Approach to Transverse Momentum Dependent Hadron Structure in QCD (TMD Topical Collaboration). This work was also supported in part by the Department of Energy and the National Science Foundation and the agencies’ Early Career Programs.


    Journal Link: Physical Review D, Aug-2022

    Department of Energy, Office of Science

  • How Argonne makes the power grid more reliable and resilient

    How Argonne makes the power grid more reliable and resilient

    Newswise — Through innovative methods for deeply understanding the complexities of the grid, the lab helps secure the nation’s energy future.

    The U.S. power grid is almost incomprehensibly large. Comprising nearly 12,000 power plants, 200,000 miles of high-voltage transmission lines, 60,000 substations and 3 million miles of power lines, it may well be the most massive and complex machine ever assembled. Households, businesses, governments and essential infrastructure — including water, telecommunications, food supply, health care and wastewater treatment — rely on the grid around the clock. The power it generates fuels the U.S. economy.

    All this complexity makes it critical to understand the vulnerabilities of the nation’s electric transmission and distribution systems and to protect the grid from an evolving set of human-caused and natural hazards. Those can include cyberattacks from foreign governments and terrorists as well as extreme weather events driven by climate change. Record-setting heat waves, unprecedented storms and flooding, historic droughts and wildfires all pose hazards to the grid.

    The U.S. Department of Energy’s (DOE) Argonne National Laboratory plays a vital role in maintaining and developing a stable and secure grid. At the nation’s first national lab, located in southwest suburban Chicago, scientists and engineers bring to bear collective expertise in economics, threat assessment and mitigation, system vulnerability analysis, critical infrastructure interdependency modeling, proactive cybersecurity defense and emergency readiness and response support. The lab also leverages cutting-edge high performance computing hardware, mathematical software technologies, and artificial intelligence and machine learning resources.

    “What sets Argonne apart is that we are very good at looking at all these problems from a multidisciplinary perspective,” says Mark Petri, head of the lab’s Electric Power Grid Program, who leads security and resilience activities. Petri also serves as technical team lead for the Markets, Policies & Regulations pillar of DOE’s Grid Modernization Initiative. “We bring together engineers, infrastructure analysts, computer scientists and modelers, artificial intelligence experts, economists, battery researchers and others in a focused effort to tackle these critical national challenges. There are no research silos here.”

    Argonne also collaborates with local, state, regional, tribal and territorial stakeholders, as well as academia, utilities and other national laboratories. This helps Argonne develop and deploy innovative solutions and advanced technologies that enhance the grid’s ability to withstand and recover from threats. Argonne is a key contributor to the Grid Modernization Laboratory Consortium, a strategic partnership between DOE and the national labs to bring together leading experts, technologies and resources to collaborate on the goal of modernizing the nation’s grid.

    Specialized models and training help design and defend an evolving grid

    For more than two decades, Argonne has pioneered the analysis of grid infrastructure. That includes identifying natural and man-made external threats to the system — everything from hail to hackers — and homing in precisely on system vulnerabilities. “If I have flooding, high winds, ice — what are the things that are likely to break on the system?” Petri asks. “Are transmission towers going to go out? Are substations going to be under water? Am I going to lose power generation? Knowing the weak links in the chain is key.”

    Researchers are also interested in deeply examining the complex interdependencies that exist between electricity infrastructure and other energy systems such as natural gas. Understanding the interconnections, the ways the systems operate in concert and how disruption in one sector has the potential to cause cascading failures across the entire complex, allows researchers to anticipate potential disruptions, manage impacts and develop adaptation measures for the future.

    Argonne scientists have developed specialized computer modeling tools to enable decision makers to make informed, data-backed choices when proactively hardening the grid or responding to threats in real time. For instance, they developed one of the highest-resolution climate models covering North America, which projects the impacts of climate change 50 years into the future. While most climate modeling is done at the scale of 100-kilometer grid blocks on a map, Argonne’s model behind its Climate Risk and Resilience Portal, driven by some of the nation’s most powerful supercomputers, zooms in to the level of 12 kilometers. (Argonne’s next climate models will have a resolution closer to four kilometers, which approaches the size of a large urban neighborhood or small rural town.)
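The jump from 100-kilometer to 12-kilometer grid blocks can be put in rough numbers. The back-of-the-envelope sketch below is derived only from the resolutions quoted above; the cell counts are illustrative arithmetic, not figures from Argonne's model:

```python
# Roughly how many fine-resolution grid cells cover the same area
# as one coarse-resolution cell on a square grid (illustration only).

def cells_per_coarse_cell(coarse_km: float, fine_km: float) -> float:
    """Number of fine cells that tile one coarse cell of a square grid."""
    return (coarse_km / fine_km) ** 2

print(cells_per_coarse_cell(100, 12))  # ~69.4: each 100 km block splits into ~69 cells at 12 km
print(cells_per_coarse_cell(100, 4))   # 625.0: the planned 4 km resolution is finer still
```

The quadratic growth in cell count is one reason each step up in resolution demands substantially more supercomputing power.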

    “Developing the hazard and climate risk models that leverage the latest in the science and the leadership-class computational resources at Argonne and DOE has enabled us to work with a multitude of private and public sector utilities,” said Rao Kotamarthi, science director of the Center for Climate Resilience and Decision Science and a senior scientist in Argonne’s Environmental Science division.

    Kotamarthi explained that the breakthrough offers more actionable hyperlocal information for leaders thinking through climate resiliency planning. Companies including AT&T and ComEd, as well as government agencies like the New York Power Authority, already see the model’s value. Looking to improve the resilience of their grid-level infrastructure and keep critical services up and running, they can see which pieces of valuable equipment sit in likely future climate-related danger zones. This helps them to identify locations that may need to be stabilized or relocated altogether.

    Argonne has also developed several other leading modeling tools, including the Hurricane Electric Assessment Damage Outage, which forecasts likely power outages after a storm. The EPfast tool examines power outage impacts on large electric grid systems. The Restore tool provides insights into repair times for outages at critical infrastructure facilities. And the Electric Grid Resilience Improvement Program models power system restoration after a major blackout.

    Moreover, to help system operators respond more quickly to grid failures, limit impacts on customers and speed recovery, Argonne supports operator training for major grid disruptions. Stakeholders responsible for resilience are put through readiness exercises that replicate real-world threat, response and recovery scenarios — hurricanes, blizzards, earthquakes, cyberattacks — and hone their in-the-moment decision-making skills.

    New tools predict outcomes from emergent grid resources

    Adding yet another layer of complexity to the grid, distributed energy resources (DERs) like rooftop solar panels and generators have emerged as significant power generation sources. DERs contribute to a power system’s overall capacity, but operators must assess their impact and forecast their potential, especially during extreme weather events. That’s why Argonne created TDcoSim, a cutting-edge transmission and distribution co-simulation software tool that enables high-fidelity modeling of DERs. It’s the first model capable of simulating both transmission (the high-voltage network used to transfer power long distances) and distribution (the localized low-voltage network used by the utilities to deliver power to consumers).

    “This is a totally new paradigm in grid modeling. Nobody has done this before,” says Vladimir Koritarov, director of the lab’s Center for Energy, Environmental and Economic Systems Analysis. “At Argonne, we specialize in developing these kinds of new, advanced grid models, algorithms, optimization methods and approaches that are more efficient, faster and more accurate than previously available ones.”

    Among those models is the Argonne Low-Carbon Electricity Analysis Framework, known as A-LEAF, an integrated national-scale simulation framework for power system operations and planning. It allows operators to evaluate different pathways to decarbonization of electric grids. A related Argonne-developed interactive tool called the Geospatial Energy Mapper helps users identify sites across the country best suited for renewable energy infrastructure projects.

    As the U.S. aims to meet a goal of net-zero carbon emissions by 2050, the grid’s energy mix will likely include far more renewables than today. But sources such as solar and wind are variable in their production, and output may be reduced in extreme weather. Adapting to this variability interests Argonne energy systems engineer Neal Mann. At a time when long-term planning decisions are being made about which energy infrastructure technologies are invested in and built, and which will be retired, Mann focuses on the role nuclear power might play in the future grid. “If we rely too much on weather-driven generation, do we end up compromising reliability under stressed climate-related conditions?” he asks. “In those cases, having nuclear and other so-called dispatchable technologies available could be the difference between widespread outages or not.”

    Grid-level energy storage is focus of materials and manufacturing R&D

    To compensate for the uncertainty of variable renewables and to capture excess generation, researchers across Argonne are focused on low-cost, high-efficiency energy storage. Those efforts include research into various novel battery technologies such as advanced sodium-ion cathodes and new flow cell chemistries; chemical and thermal storage; and pumped storage hydropower, a common type of hydroelectric energy storage that can provide power even during extended lulls in solar and wind generation.

    One project involves the development of a model called “EverGrid,” based on the R&D 100 Award-winning EverBatt model. The free-to-use model will help determine the impacts of stationary energy storage technologies such as flow batteries and advanced lead-acid batteries at end-of-life, including recycling. The model will help researchers make better decisions during the technology development process as well as help find hot spots in processing that can lead to optimization and scale-up.

    “In order to reduce greenhouse gas emissions and hit U.S. climate goals, we’re going to be increasingly relying on renewable energy, which is not a constant source of energy,” says Chris Heckle, director of the Materials Manufacturing Innovation Center at Argonne. “We need to develop grid-level energy storage solutions, which will need to be large in scale. That will involve manufacturing challenges, transportation challenges and systems challenges, all of which Argonne is well positioned to meet.”

    For Petri, the growing complexity of the grid and the evolving threats against it make Argonne’s interdisciplinary approach more necessary than ever to help secure the nation’s energy future.

    “Our ability to understand how the grid’s complex systems behave, how they might be disrupted, and how operators can improve response is vitally important,” he says. “It’s important to people’s lives, it’s important to our economy, it’s important to our national security. And here at Argonne, we are right in the middle of improving these systems from a reliability and resilience perspective.”

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

    The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Argonne National Laboratory

  • Finnish study shows those at risk were less likely to get vaccinated.

    Newswise — A large-scale registry study in Finland has identified several factors associated with uptake of the first dose of COVID-19 vaccination. In particular, persons with low or no labor income and persons with mental health or substance abuse issues were less likely to vaccinate.

    The study, carried out in collaboration between the University of Helsinki and the Finnish Institute of Health and Welfare, tested the association of nearly 3000 health, demographic and socio-economic variables with the uptake of the first COVID-19 vaccination dose across the entire Finnish population. 

    This work, just published in Nature Human Behaviour, is the largest study to date on this topic.

    The factors most strongly associated with reduced likelihood of being vaccinated were lack of labor income in the year preceding the pandemic, a mother tongue other than Finnish or Swedish, and having unvaccinated close relatives, especially the mother. Among health-related variables, factors related to mental health and substance abuse problems were associated with reduced vaccination uptake.

    “Lack of labor income can be due to unemployment, sickness or retirement. Furthermore, among individuals with labor income, we saw that low-income earners were the least likely to vaccinate,” explains Tuomo Hartonen, Postdoctoral Researcher at the Institute for Molecular Medicine Finland FIMM, University of Helsinki.

    The study was based on the FinRegistry data. Researchers analysed population-wide national health and population register data from the pre-pandemic period and compared these with the vaccination status data. The analyses were limited to people aged 30-80 years.

    “A particular strength of our study is that it is based on registers covering the entire Finnish population. This way we can avoid all selection bias, which is a major challenge of survey studies”, Postdoctoral Researcher Bradley Jermy from FIMM says.

    The researchers stress that their results describe the association between the studied variables and vaccination uptake at the population level, but do not allow conclusions to be drawn about causal relationships. Furthermore, the generalizability of the findings outside Finland requires further studies. However, it is clear from the results that in Finland, vaccination uptake was lowest among those who are already in a vulnerable position.

    Researchers created a machine learning-based model to predict vaccination uptake

    In addition to studying single predictors, the research team constructed a machine learning-based model to predict vaccination uptake. This prediction model allowed the researchers to group individuals according to their likelihood of receiving the COVID-19 vaccine.

    Approximately 90% of the total study population received at least one dose of COVID-19 vaccination. In contrast, the group with the lowest probability of being vaccinated based on the model had a vaccination rate of less than 19%.
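The grouping step described above, ranking individuals by a model's predicted probability and then reading off the observed uptake in each group, can be sketched in a few lines. The scores and data below are invented for illustration; the study itself used nationwide register data and a trained machine-learning model:

```python
import random

def uptake_by_risk_group(records, n_groups=5):
    """records: (predicted_probability, vaccinated 0/1) pairs.
    Returns observed uptake from the lowest- to highest-probability group."""
    ranked = sorted(records, key=lambda r: r[0])
    size = len(ranked) // n_groups
    rates = []
    for i in range(n_groups):
        # Last group absorbs any remainder so every record is counted.
        chunk = ranked[i * size:] if i == n_groups - 1 else ranked[i * size:(i + 1) * size]
        rates.append(sum(v for _, v in chunk) / len(chunk))
    return rates

# Toy data: vaccination status loosely tracks the (hypothetical) model score.
random.seed(0)
records = [(p := random.random(), int(random.random() < p)) for _ in range(1000)]
print(uptake_by_risk_group(records))  # uptake rises from the lowest- to the highest-score group
```

Stratifying observed outcomes by predicted probability like this is what lets a study report figures such as "less than 19% uptake in the lowest-probability group" against a 90% population average.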

    “Our research has created a framework for using machine learning and statistical approaches to identify those groups that are at higher risk of not vaccinating”, says the corresponding author of the study, Associate Professor Andrea Ganna from FIMM.

     “These results and the predictive model could be used in the future, for example in designing vaccination campaigns”, says the Principal Investigator of the FinRegistry study, Research Professor Markus Perola from THL.

    “This study is a great example of the possibilities that the FinRegistry study creates for investigating highly topical issues in a short timeframe. The collaboration between THL’s genetic and registry researchers and FIMM scientists will help to understand the many pathways that lead to susceptibility to different diseases,” Perola continues.

    The study is part of the FinRegistry project, a joint research project between the Finnish Institute for Health and Welfare (THL) and the Institute for Molecular Medicine Finland (FIMM) at the University of Helsinki.

    University of Helsinki

  • Substance use disorders do not increase the likelihood of COVID-19 deaths

    Newswise — BOSTON – New research from Boston Medical Center found that substance use disorders (SUD) do not increase the likelihood of dying from COVID-19. Published in Substance Abuse: Research and Treatment, the study showed that the increased risk for severe COVID-19 previously observed in people with SUD may be the result of co-occurring medical conditions.

    Multiple large cohort studies from early in the pandemic have shown higher rates of hospitalization, intubation, and death from COVID-19 in those with SUD, while other studies found no association between SUD and COVID-19-related mortality, or mixed results depending on substance use pattern. Given these conflicting data, the Centers for Disease Control and Prevention has classified SUD as a condition suggestive of higher risk for severe COVID-19. The goal of this study was to assess the association between SUD and inpatient COVID-19-related mortality.

    “BMC is known for excellent clinical care and innovative research related to substance use disorder. Since the early days of the pandemic, BMC has also been a leader in the treatment of individuals with COVID-19, including persons with complex medical and social needs,” said first author Angela McLaughlin, MD, MPH, an infectious disease fellow at Boston Medical Center. “These findings, showing a similar likelihood of COVID-19-related complications in hospitalized patients with and without SUD, help expand knowledge of the infectious complications of SUD.”

    As BMC sees a high proportion of patients who use substances, it was an apt location for the study: almost 14% of the study population had SUD, exceeding the national average of 10.8% in people ages 18 or older. Researchers reviewed the medical records of 353 adults without SUD and 56 adults with SUD admitted to Boston Medical Center early in the COVID-19 pandemic, and compared mortality, clinical complications, and resource utilization between individuals with and without substance use disorders.

    “Early in the pandemic, BMC developed protocols to closely monitor and quickly manage COVID-19-related complications in all hospitalized patients,” said senior author Sabrina Assoumou, MD, MPH, an infectious disease doctor at Boston Medical Center and Assistant Professor of Medicine at Boston University Chobanian & Avedisian School of Medicine. “The current findings suggest that such an approach might have benefited many patients, including individuals with substance use disorders.” 

    In this retrospective cohort study of patients admitted to a safety net hospital during the early phase of the COVID-19 pandemic, SUD was not associated with the primary outcome of COVID-19-associated inpatient mortality. The secondary analysis showed that those with and without SUD had similar COVID-19-related clinical complications, including secondary infections, renal failure requiring dialysis, acute liver injury, venous thromboembolism, cardiac complications, and the composite “any complications.” Of note, some clinical outcomes such as stroke were very uncommon overall. Likewise, there was no difference in resource utilization secondary outcomes between the two groups. In contrast to other studies, this study found similar likelihoods of mechanical ventilation and ICU admission in patients with and without SUD. Although patients with SUD presented to the hospital earlier in their disease course, their total hospital length of stay was ultimately similar to patients without SUD. Insights such as these into the clinical complications and resource utilization patterns of patients with SUD and COVID-19 can help clinicians anticipate the trajectory of infection and healthcare needs in this vulnerable group.

    There were some notable limitations to the study. The results are from a single site, which might limit generalizability of the findings despite the racial and ethnic diversity of the BMC patient population. Second, the data presented are from the earliest phase of COVID-19 in the United States, so trends may have differed with subsequent waves and as COVID-19 management strategies have evolved over time. Third, there were no specific controls for socioeconomic factors like medical insurance status or income level, as over 75% of the BMC patient population has public payer insurance (Medicare, Medicaid, or Children’s Health Insurance Program) or no insurance. Lastly, differences in COVID-19 outcomes between current versus past SUD could not be detected – this area would benefit from further research.

    In conclusion, in this study of hospitalized individuals at an urban safety net hospital with a diverse patient population in the early days of the COVID-19 pandemic, inpatient mortality and morbidity between patients with and without SUD were similar. The findings provide a detailed evaluation of outcomes in a unique patient population that has been disproportionately impacted by COVID-19 and may provide beneficial insights for similar settings across the country. These results point away from SUD as an independent risk factor for severe COVID-19 and further suggest a focus on medical comorbidities to mitigate the effects of COVID-19. Additional studies are needed to further evaluate for differential outcomes in this high-risk population, particularly in an era of newer COVID-19-directed therapies.

    ###

    About Boston Medical Center

    Boston Medical Center is a leading academic medical center with a deep commitment to health equity and a proud history of serving all who come to us for care. BMC provides high-quality healthcare and wrap around support that treats the whole person, extending beyond our physical campus into our vibrant and diverse communities. BMC is advancing medicine, while training the next generation of healthcare providers and researchers as the primary teaching affiliate of Boston University Chobanian & Avedisian School of Medicine. BMC is a founding member of Boston Medical Center Health System, which supports patients and health plan members through a value based, coordinated continuum of care.

    [ad_2]

    Boston Medical Center

    Source link

  • New Braintrust Seeks to Launch Era of North American Regional Competitiveness

    New Braintrust Seeks to Launch Era of North American Regional Competitiveness

    [ad_1]

    Newswise — Given the U.S.-China trade conflict and concerns over trade disruptions caused by Russia’s invasion of Ukraine, regionalizing supply chains is at the center of the discussion in North America. Now, a new working group spearheaded by the University of California San Diego is using this opportunity to propose policy recommendations for relocating global production chains to North America where it is economically advantageous.

    The working group is a partnership between the Center for U.S.-Mexican Studies (USMEX) at UC San Diego’s School of Global Policy and Strategy, the George W. Bush Institute, Canada’s Future Borders Coalition and the Mexican Council on Foreign Relations.

    “U.S. and China decoupling has prompted renewed interest in integrated North American trade and investment as well as considerations of a broader economic community that could include Central American nations,” said Caroline Freund, dean of the School of Global Policy and Strategy and working group member. “Our group is poised to propose policy approaches to ensure that the current opportunities strengthen North American economic integration, boosting the productivity, prosperity and competitiveness of the U.S., Mexico, Canada and neighboring countries.”

    The group hopes U.S. economic leadership can launch a new era of North American competitiveness. They cite President Joe Biden’s two signature legislative accomplishments, the CHIPS and Science Act (CHIPS Act) and the Inflation Reduction Act (IRA), which are aimed at strengthening the U.S. industrial base, particularly regarding the manufacturing of semiconductors, electric vehicles and products related to clean energy and the decarbonization of the U.S. economy.

    The consensus in Washington, D.C., that China represents a strategic rival to the U.S. also calls for exploration of stronger supply chains in North America, according to Rafael Fernández de Castro, director of the Center for U.S.-Mexican Studies and member of the group’s steering committee.

    “These regional opportunities are rare events in a century — North America cannot waste this opportunity,” Fernández de Castro said. “Our working group is developing a road map so that nearshoring becomes a reality for the region.”

    The timing is bolstered by North American alliances. Both Canada and Mexico have proved their worth as essential partners for U.S. supply chains because of their geographical location as neighboring countries, reliability as partners, complementary economic strengths and the framework provided by the United States-Mexico-Canada Agreement (USMCA).

    Although Central America has a narrower industrial base, it also presents cost and access advantages that make it a strong potential link in North American supply chains.  

    Members of the working group have backgrounds in government, academia, non-governmental organizations and the private sector. They include former Deputy Prime Minister of Canada Anne McLellan and former Undersecretary of Foreign Trade in Mexico Juan Carlos Baker, as well as individuals from the Mexican firm Deacero and the Harvard Kennedy School.

    “We have assembled a fantastic brain trust led by three women with very distinguished careers in public service and think tanks in Canada, the U.S. and Mexico to chair the working group: Louise Blais of the Business Council in Canada, Luz María de la Mora of the Atlantic Council and Shannon K. O’Neil of the Council on Foreign Relations,” said Cecilia Farfán-Méndez, head of research at the Center for U.S.-Mexican Studies and steering committee member. “Under their leadership, we are convinced the group will produce clear, implementable recommendations for the benefit of the North American region.”

    The working group will meet virtually during 2023 and will issue a series of policy recommendations in early 2024 — a key year for North America, since both Mexico and the U.S. will hold presidential elections.

    For information on the working group, go to this website.

    [ad_2]

    University of California San Diego

    Source link

  • ORNL malware ‘vaccine’ generator licensed for Evasive.ai platform

    ORNL malware ‘vaccine’ generator licensed for Evasive.ai platform

    [ad_1]

    Newswise — Access to artificial intelligence and machine learning is rapidly changing technology and product development, leading to more advanced, efficient and personalized applications by leveraging a massive amount of data.

    However, the same abilities also are in the hands of bad actors, who use AI to create malware that evades detection by the algorithms widely employed by network security tools. Government agencies, banking institutions, critical infrastructure, and the world’s largest companies and their most used products are increasingly under threat from malware that can evade anti-virus systems, hijack networks, halt operations and expose sensitive and personal information.

    A technology developed at the Department of Energy’s Oak Ridge National Laboratory and used by the U.S. Naval Information Warfare Systems Command, or NAVWAR, to test the capabilities of commercial security tools has been licensed to cybersecurity firm Penguin Mustache to create its Evasive.ai platform. The company was founded by the technology’s creator, former ORNL scientist Jared M. Smith, and his business partner, entrepreneur Brandon Bruce.

    “One of ORNL’s core missions is to advance the science behind national security,” said Susan Hubbard, ORNL’s deputy for science and technology. “This technology is the result of our deep AI expertise applied to a big challenge — protecting the nation’s cyber- and economic security.”

    Smith, who worked in ORNL’s Cyber Resilience and Intelligence Division for six years, created the technology — the adversarial malware input generator, or AMIGO — at the request of the Department of Defense. AMIGO was created as the evaluation tool for a challenge issued by NAVWAR for AI applications that autonomously detect and quarantine cybersecurity threats. NAVWAR is an operations unit within the Navy that focuses on secure communications and networks.

    “ORNL’s Cyber Resilience and Intelligence Division is a world leader in cybersecurity technology,” said Moe Khaleel, associate laboratory director for the lab’s National Security Sciences Directorate. “Moving AMIGO into the marketplace will help protect our nation’s critical infrastructure from attack.”

    “We put AMIGO to the test in a realistic environment. It’s been through the wringer and has been validated at a high technical readiness level,” Smith said. “The core technology is designed to build evasive malware, like a virus, that can bypass an existing detection technology.”

    Drawing on more than 35 million malware samples — some publicly available and others never before seen — AMIGO generates optimally evasive malware in tandem with the training information needed for a security system to detect it in the future.

    Smith likens the process to vaccine development. “It’s as if we generated a million virus variants and a million vaccines to protect against them — we can collapse that into one vaccine and inoculate everyone. They’re protected against the threat, but also all the natural evolutions of the threat going forward.”

    Luke Koch, who in 2019 worked on the AMIGO development team through the DOE Office of Science’s Science Undergraduate Laboratory Internship, or SULI, program, is now a doctoral student at the Bredesen Center for Interdisciplinary Research and Graduate Education, a collaboration between ORNL and the University of Tennessee, as well as a graduate research assistant in ORNL’s Cybersecurity Research Group. Under Smith’s direction, Koch wrote the binary instrumentation code used in AMIGO.

    “Cybersecurity commercialization is important because our adversaries are always probing for weaknesses throughout the supply chain,” Koch said. “One single flaw is all it takes to invalidate a clever and expensive defense.”

    Amid a growing public understanding of the power of AI, the team is eager to see AMIGO integrated into Evasive.ai and implemented by national security agencies to protect government assets and infrastructure.

    “Bad actors are already using artificial intelligence to advance their attacks,” Bruce said. “As open AI tools improve, attempts to penetrate security systems will increase in volume and sophistication.”

    Additionally, long-term use of the Evasive.ai platform could inform a more complete understanding of the mechanisms that contribute to adversarial samples. This insight will make the next generation of machine learning defenses more robust.

    And what does any of this have to do with penguins? The company’s playful name is a riff on the problem of a small mutation enabling a virus to evade existing defenses — a penguin disguised with a mustache.

    ORNL commercialization manager Andreana Leskovjan negotiated the terms of the license. For more information about ORNL’s intellectual property in information technology and communications, email ORNL Partnerships or call 865-574-1051. To connect with the Evasive.ai team, complete the online form on the Evasive.ai website.

    The Bredesen Center program is part of the University of Tennessee Oak Ridge Innovation Institute.

    UT-Battelle manages ORNL for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.

    [ad_2]

    Oak Ridge National Laboratory

    Source link

  • Lack of canine COVID-19 data fuels persisting concerns over dog-human interactions

    Lack of canine COVID-19 data fuels persisting concerns over dog-human interactions

    [ad_1]

    Newswise — WEST LAFAYETTE, Ind. — Early COVID-19 pandemic suspicions about dogs’ resistance to the disease have given way to a long-haul clinical data gap as new variants of the virus have emerged.

    “It is not confirmed that the virus can be transmitted from one dog to another dog or from dogs to humans,” said veterinarian Mohamed Kamel, a postdoctoral fellow at Purdue University.

    During the pandemic’s early days, dogs seemed resistant to the coronavirus, showing little evidence of infection or transmission, said Mohit Verma, assistant professor of agricultural and biological engineering and Purdue’s Weldon School of Biomedical Engineering. “As the virus evolved, or maybe the surveillance technology advanced, there seem to be more instances of potentially asymptomatic dogs.”

    These are among the findings that Kamel, Verma and two co-authors summarized in a research literature review, “Interactions Between Humans and Dogs in the COVID-19 Pandemic.” The summary, with recent updates and future perspectives, recently appeared in a special issue of the journal Animals on the susceptibility of animals to SARS-CoV-2.

    Additional co-authors are Rachel Munds, a research scientist at Krishi Inc. and a Purdue visiting scholar in the Department of Agricultural and Biological Engineering, and Amr El-Sayed of Egypt’s Cairo University.

    Last June the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service announced it was committing up to $24 million for research related to SARS-CoV-2. The funding, provided by the American Rescue Plan Act, focuses on the One Health concept, which recognizes the link between the health of people, animals and the environment.

    The SARS-CoV-2 virus that originated in Wuhan, China, in 2019 had infected more than 600 million people globally and claimed more than 6.5 million lives by October 2022.

    “COVID-19 has become one of the most important economic, health and humanitarian problems of the 21st century,” the co-authors wrote in the Animals article. Studies have documented the movement of the SARS-CoV-2 virus through various animal species. And about 75 percent of infectious diseases in humans start in animals.

    “This spread raises concerns about the possibility of pet animals serving as reservoirs for the virus,” the co-authors wrote.

    More than two dozen animal species have been infected by SARS-CoV-2 virus, ranging from cats, dogs and rabbits to deer, cattle and gorillas. More than 470 million dogs were owned worldwide before the COVID-19 outbreak. Their susceptibility to the virus remains poorly understood because they are infrequently tested, said Kamel, who is also a faculty member at Cairo University.

    “Compared to cats or other animals, the susceptibility is less,” Kamel said. He cautioned, however, that the susceptibility of dogs to the new variants may have changed to a lesser or greater extent.

    “There are a lot of variants. It’s not only one virus,” Kamel said. “The infections differ from the old variant to the new variant.”

    Dogs’ apparent resistance to COVID-19 could result from their generally low levels of angiotensin-converting enzyme 2 (ACE2), the virus’s target receptor, in their lung cells, along with related mutations.

    “ACE2 is the main part of the virus attachment found on the cells,” Kamel noted.

    The Animals journal article also discusses how the spread of an epidemic can be tracked, predicted and contained through a combination of geographic information systems, molecular biology and even detection dogs. Because of their heightened sense of smell, dogs can be trained to detect a wide range of human diseases, Kamel said. Using dogs to detect COVID-19, as reported in the journal article, is faster and less expensive than other methods when large crowds must be screened.

    Verma’s startup, Krishi Inc., is already developing innovative paper-based, rapid-result tests for bovine respiratory disease, antimicrobial resistance and COVID-19. The testing system uses a method called loop-mediated isothermal amplification (LAMP) and is under development in Verma’s lab for produce safety applications. Adapting LAMP for animal testing of SARS-CoV-2 may come next.

    Krishi Inc. received an initial investment from Ag-Celerator. Created in 2015, Ag-Celerator is a $2 million innovation fund designed to provide critical startup support for Purdue innovators who bring Purdue’s patented intellectual property or “know-how” technologies to market. The fund is operated by Purdue Ventures with assistance from the Purdue University College of Agriculture, the Purdue Research Foundation Office of Technology Commercialization and the agriculture industry.

    The Animals journal article cites multiple studies from Purdue and elsewhere validating the usefulness of LAMP testing. Krishi’s focus thus far has been developing a test for antimicrobial resistance in animals, but the LAMP assay has broader potential, Verma said.

    “If we want to do widespread surveillance, can we make our test versatile for any species? LAMP is portable,” Verma said. “Because it can be done in a simple manner and provide results without a lab setup, we can potentially do this on a wider scale and make it cost-effective.”

    Currently available commercial at-home coronavirus tests for humans can also be used on dogs and cats. However, these tests may not be sensitive enough to detect the lower viral loads in animals.

    “They’re not validated for animals, so we don’t know how well they would work. That’s the gap we’re hoping to bridge with the test that we are developing – better tools of surveillance,” Verma said.

    [ad_2]

    Purdue University

    Source link

  • AI improves detail, estimate of urban air pollution

    AI improves detail, estimate of urban air pollution

    [ad_1]

    Newswise — ITHACA, N.Y. – Using artificial intelligence, Cornell University engineers have simplified and reinforced models that accurately calculate the fine particulate matter (PM2.5) – the soot, dust and exhaust emitted by trucks and cars that get into human lungs – contained in urban air pollution. 

    Now, city planners and government health officials can obtain a more precise accounting of the well-being of urban dwellers and the air they breathe, thanks to new research published in December 2022 in the journal Transportation Research Part D.

    “Infrastructure determines our living environment, our exposure,” said senior author Oliver Gao, the Howard Simpson Professor of Civil and Environmental Engineering in the College of Engineering at Cornell University. “Air pollution impact due to transportation – put out as exhaust from the cars and trucks that drive on our streets – is very complicated. Our infrastructure, transportation and energy policies are going to impact air pollution and hence public health.”

    Previous methods to gauge air pollution were cumbersome and reliant on extraordinary amounts of data points. “Older models to calculate particulate matter were computationally and mechanically consuming and complex,” said Gao, a faculty fellow at the Cornell Atkinson Center for Sustainability. “But if you develop an easily accessible data model, with the help of artificial intelligence filling in some of the blanks, you can have an accurate model at a local scale.”

    Lead author Salil Desai and visiting scientist Mohammad Tayarani, together with Gao, published “Developing Machine Learning Models for Hyperlocal Traffic Related Particulate Matter Concentration Mapping,” to offer a leaner, less data-intensive method for making accurate models.

    Ambient air pollution is a leading cause of premature death around the world. Globally, more than 4.2 million annual fatalities – in the form of cardiovascular disease, ischemic heart disease, stroke and lung cancer – were attributed to air pollution in 2015, according to a Lancet study cited in the Cornell research.

    In this work, the group developed four machine learning models for traffic-related particulate matter concentrations using data gathered in New York City’s five boroughs, which have a combined population of 8.2 million people and 55 million daily vehicle miles traveled.

    The models use just a few inputs, such as traffic data, topology and meteorology, in an AI algorithm that learns simulations for a wide range of traffic-related air-pollution concentration scenarios.

    Their best performing model was the Convolutional Long Short-term Memory, or ConvLSTM, which trained the algorithm to predict many spatially correlated observations.

    “Our data-driven approach – mainly based on vehicle emission data – requires considerably fewer modeling steps,” Desai said. Instead of focusing on stationary locations, the method provides a high-resolution estimation of the city street pollution surface. Higher resolution can help transportation and epidemiology studies assess health, environmental justice and air quality impacts.
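    The paper’s ConvLSTM is far more elaborate, but the “few inputs” idea can be illustrated with a toy regression. This sketch fits synthetic PM2.5 readings from two assumed inputs, traffic volume and wind speed; every number here is made up for illustration and is not from the Cornell study.

    ```python
    import numpy as np

    # Toy illustration (not the paper's ConvLSTM): fit a linear model that
    # maps two simple inputs -- traffic volume and wind speed -- to a PM2.5
    # concentration, mimicking a "few inputs" data-driven approach.
    rng = np.random.default_rng(0)

    n = 200
    traffic = rng.uniform(100, 1000, n)   # vehicles/hour (synthetic)
    wind = rng.uniform(0.5, 10.0, n)      # m/s (synthetic)

    # Synthetic "ground truth": pollution rises with traffic, falls with wind.
    pm25 = 0.02 * traffic - 1.5 * wind + 5.0 + rng.normal(0, 0.5, n)

    # Least-squares fit: pm25 ~ a*traffic + b*wind + c
    X = np.column_stack([traffic, wind, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
    a, b, c = coef
    print(f"a={a:.3f}, b={b:.3f}, c={c:.2f}")  # recovers roughly 0.02, -1.5, 5.0
    ```

    A real hyperlocal model replaces this single linear map with a spatiotemporal network that predicts an entire pollution grid, but the workflow (assemble a small set of inputs, fit against observed concentrations) is the same.
    
    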

    Funding for this research came from the U.S. Department of Transportation’s University Transportation Centers Program and Cornell Atkinson.

    [ad_2]

    Cornell University

    Source link

  • Researchers Create Smaller, Cheaper Flow Batteries for Clean Energy

    Researchers Create Smaller, Cheaper Flow Batteries for Clean Energy

    [ad_1]

    Newswise — Clean energy is the leading solution for climate change. But solar and wind power are inconsistent at producing enough energy for a reliable power grid. Alternatively, lithium-ion batteries can store energy but are a limited resource.

    “The advantage of a coal power plant is it’s very steady,” said Nian Liu, an assistant professor at the Georgia Institute of Technology. “If the power source fluctuates like it does with clean energy, it makes it more difficult to manage, so how can we use an energy storage device or system to smooth out these fluctuations?”

    Flow batteries offer a solution. In this rechargeable battery, electrolytes flow through electrochemical cells from storage tanks. Existing flow battery technologies cost more than $200 per kilowatt-hour and are too expensive for practical application, but Liu’s lab in the School of Chemical and Biomolecular Engineering (ChBE) developed a more compact flow battery cell configuration that reduces the size of the cell by 75%, and correspondingly reduces the size and cost of the entire flow battery. The work could revolutionize how everything from major commercial buildings to residential homes is powered.

    The all-Georgia Tech research team published their findings in the paper, “A Sub-Millimeter Bundled Microtubular Flow Battery Cell With Ultra-high Volumetric Power Density,” in Proceedings of the National Academy of Sciences.

     

    Finding the Flow

    Flow batteries get their name from the flow cell where electron exchange happens. Their conventional design, the planar cell, requires bulky flow distributors and gaskets, increasing size and cost but decreasing overall performance. The cell itself is also expensive. To reduce footprint and cost, the researchers focused on improving the flow cell’s volumetric power density (W/L-of-cell).

    They turned to a configuration commonly used in chemical separation — the sub-millimeter, bundled microtubular (SBMT) membrane — made of fiber-shaped filter membranes known as hollow fibers. This space-saving design can mitigate pressure across the membranes that ions pass through, without needing additional support infrastructure.

    “We were interested in the effect of the battery separator geometry on the performance of flow batteries,” said Ryan Lively, a professor in ChBE. “We were aware of the advantages that hollow fibers imparted on separation membranes and set out to realize those same advantages in the battery field.”

    Applying this concept, the researchers developed an SBMT cell that reduces membrane-to-membrane distance by almost 100 times. The microtubular membrane in the design simultaneously works as an electrolyte distributor, without the need for large supporting materials. The bundled microtubes create a shorter distance between electrodes and membranes, increasing the volumetric power density. This bundling design is the key discovery for maximizing flow batteries’ potential.
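    A back-of-envelope check shows why shrinking the cell matters for the W/L metric. The figures below are assumed for illustration, not taken from the paper: at fixed cell power, cutting cell volume by 75% quadruples volumetric power density.

    ```python
    # Volumetric power density = cell power / cell volume (W per liter).
    # Illustrative numbers only; not from the PNAS paper.
    power_w = 10.0    # assumed cell power, watts
    volume_l = 0.5    # assumed planar-cell volume, liters

    planar_density = power_w / volume_l           # 20.0 W/L

    # SBMT design: same power from a cell 75% smaller.
    sbmt_density = power_w / (volume_l * 0.25)    # 80.0 W/L

    print(planar_density, sbmt_density)  # 20.0 80.0
    ```
    
    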

     

    Powering the Battery

    To validate their new battery configuration, the researchers used four different chemistries: vanadium, zinc-bromide, quinone-bromide and zinc-iodide. Although all four chemistries are functional, two were most promising. Vanadium was the most mature chemistry but is less accessible, and its reduced form is unstable in air. They found zinc-iodide was the most energy-dense option, making it the most effective for residential units. Zinc-iodide offered many advantages even compared to lithium: it has fewer supply chain issues, and it can be turned into zinc oxide and dissolved in acid, making it much easier to recycle.

    This unique flow battery geometry proved electrochemically more powerful than conventional planar cells.

    “The superior performance of the SBMT was also demonstrated by finite element analysis,” said Xing Xie, an assistant professor in the School of Civil and Environmental Engineering. “This simulation method will also be applied in our future study for cell performance optimization and scaling up.”

    With zinc-iodide chemistry, the battery could run for more than 220 hours, or more than 2,500 cycles, at off-peak conditions. It could also potentially reduce the cost from $800 to less than $200 per kilowatt-hour by using recycled electrolyte.
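    Those two figures imply a simple per-cycle capital cost. A rough sketch, ignoring efficiency losses, maintenance and financing (which the article does not quantify):

    ```python
    # Spread the up-front storage cost over the battery's cycle life.
    capital_per_kwh = 200.0   # $/kWh of capacity (the article's target cost)
    cycle_life = 2500         # full cycles (the article's reported figure)

    cost_per_kwh_cycled = capital_per_kwh / cycle_life
    print(f"${cost_per_kwh_cycled:.3f} per kWh delivered")  # $0.080 per kWh delivered
    ```

    By the same arithmetic, the $800/kWh starting point works out to roughly $0.32 per kWh cycled, which is why the fourfold cost reduction matters for practical deployment.
    
    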

     

    Building the Future of Energy

    The researchers are already working on commercialization, focusing on developing batteries with different chemistries such as vanadium and scaling up their size. Scaling will require an automated process to manufacture a hollow fiber module, which is now done manually, fiber by fiber. They eventually hope to deploy the battery in Georgia Tech’s 1.4-megawatt microgrid in Tech Square, a project that tests microgrid integration into the power grid and offers a living laboratory for professors and students.

    The SBMT cells could also be applied to different energy storage systems like electrolysis and fuel cells. The technology could even be strengthened with advanced materials and different chemistry in various applications.

    “This innovation is very application driven,” Liu said. “We have the need to reach carbon neutrality by increasing the percentage of renewable energy in our energy generation, and right now, it’s less than 15% in the U.S. Our research could change this.”

    Yutong Wu, Fengyi Zhang, Ting Wang, Po-Wei Huang, Alexandros Filippas, Haochen Yang, Yanghang Huang, Chao Wang, Huitian Liu, Xing Xie, Ryan P. Lively, Nian Liu, “A Submillimeter Bundled Microtubular Flow Battery Cell with Ultrahigh Volumetric Power Density.” PNAS (2023).

    DOI: https://doi.org/10.1073/pnas.2213528120

     

    ###

     

    The Georgia Institute of Technology, or Georgia Tech, is one of the top public research universities in the U.S., developing leaders who advance technology and improve the human condition. The Institute offers business, computing, design, engineering, liberal arts, and sciences degrees. Its more than 46,000 students, representing 50 states and more than 150 countries, study at the main campus in Atlanta, at campuses in France and China, and through distance and online learning. As a leading technological university, Georgia Tech is an engine of economic development for Georgia, the Southeast, and the nation, conducting more than $1 billion in research annually for government, industry, and society.

    [ad_2]

    Georgia Institute of Technology

    Source link

  • Electricity harvesting from evaporation, raindrops and moisture inspired by nature

    Electricity harvesting from evaporation, raindrops and moisture inspired by nature

    [ad_1]

    Newswise — Raindrops, evaporating water, and even moisture in the air are all potential sources of decentralized clean electricity generation, but many of the technologies that take advantage of this ambient and vast source of energy — many of which are inspired by the electricity harvesting techniques of plants and animals — remain at the lab-bench stage. A group of researchers and engineers has put together a survey of the opportunities and challenges this very young field faces.

    Their review paper was published in the journal Nano Research Energy on November 30, 2022.

    Enormous hydroelectric dams are perhaps the first thing one thinks of when considering sustainable electricity generation, or possibly large tidal barrages. If one is very familiar with the state of play in clean energy development, one might also be aware of wave-energy converters on the sea surface or seabed that convert the energy from high-intensity waves into usable electricity.

    All of these options depend upon heavy, bulky and, above all, centralized forms of harvesting the energy contained in water. Yet there are myriad other potential technology pathways that can harvest electricity from water in a much more decentralized fashion, taking advantage of water’s ubiquitous presence almost everywhere on Earth. These would produce usable electricity from processes of evaporation, condensation, rainfall and moisture, and even from minute flows of water at the scale of a droplet falling off a leaf or the very tiniest of waves.

    Proposed technologies along these lines take advantage of various physical phenomena, including the piezoelectric effect (whereby electric charge accumulates in response to the application of stress or pressure), triboelectricity (in which certain materials become electrically charged after they are separated from a different material with which they had been in contact), thermoelectricity (the conversion of heat to electricity and vice versa), and the hydrovoltaic effect (in which electricity is generated via interaction between water and nanomaterials).

    “Water is everywhere. It is ambiently available like no other entity. So all this clean energy is just sitting there, unused and waiting for us to take advantage of it,” said Zuankai Wang, paper author of the review and researcher with the Department of Mechanical Engineering at the City University of Hong Kong. “It makes sense for us to tap into this vast reservoir of energy not just for bulk electricity production, but for a range of applications such as sensors and wearable devices where a micro-scale of energy harvesting is much more appropriate to the use it is being put to.”

    Much of the work on such distributed water-energy technologies, however, remains very much in its infancy. Many of these lab-bench concepts suffer from poor durability, poor scalability and, worst of all, low energy-conversion efficiency: for all the effort put into harvesting energy from such processes, not much electricity is squeezed out.

    Generators driven by water vapor in the air, for example, use materials that so far exhibit poor capacity for water adsorption (adhesion to the surface), resulting in incomplete interaction between the water and the material and low electrical output, which declines even further in harsh environments.

    “And yet the rest of nature has figured out thousands of different ways to do exactly this,” added Wang. “Evolution has basically perfected the process of extracting energy from ambient hydrologic processes in ways that are extremely efficient.”

    The lotus leaf, for example, has an extremely hydrophobic micro- and nanoscale structure that allows droplets of water to roll across its surface with extremely low resistance — essentially on a cushion of air. This phenomenon has inspired engineers to study textured superhydrophobic surfaces. The asymmetric 3D ratchets of the Araucaria leaf cause liquids with varying surface tensions to flow in different directions. And the ability of Nepenthes, the group of carnivorous plants also known as pitcher plants, to direct liquid through its surface structure inspired the authors of the review paper to develop a ‘slippery liquid-infused porous surface’ (SLIPS) system that repels liquid extremely efficiently. A water-energy generator with durable SLIPS allows constant electrical output from droplets in harsh environments with high humidity, high salt concentrations, and even ultralow temperatures.

    And it’s not just plants. As water-driven electricity generators are well suited for harvesting energy from human motion due to their deformability and compact size, another group of researchers inspired by electric eel membranes developed artificial electric organs making use of hydrogel arrays (highly absorbent polymers that do not dissolve in water) that work as analogues of the eel membrane components.

    Despite the explosion in development of such bio-inspired engineering, or ‘bionics’, for water-energy harvesting, the current generation of water-driven electricity generators remains largely ad hoc. The researchers felt that a comprehensive review of the field was urgently needed to place it on a firmer theoretical foundation and identify research gaps in order to better guide design of systems and development of novel materials.

    The review covers the main mechanisms of electricity production for bio-inspired water-driven generators. It also offers a tour d’horizon of the various bio-inspired devices that have been developed, specifically evaporation, moisture, rainwater, and wave and flow-driven generators, covering three use cases: sensors, wearable electricity generators, and self-powered electronics.

    The researchers concluded that the underlying mechanisms of water-driven electricity generation remain undertheorized, in particular charge transport and transfer, as well as energy conversion. Most notably, there is no general theory of charge transfer at the interface of solid materials and water, and the proposed mechanisms remain hotly debated.

    In addition, liquid residues on solid surfaces can significantly reduce electrical output, so avoiding or reducing such residues is one of the most vital avenues of research for the field. Most efforts have focused on textural microstructures that produce a superhydrophobic surface, achieving incomplete contact between liquid and solid. While this reduces water residue as desired, it also inevitably limits the solid-liquid contact area, reducing charge induction and thus lowering electrical output, much as the residue itself does.

    In other areas, improving the ability to absorb water from the environment will be key to improving electricity generation. The researchers recommended that a greater focus be applied to the study of organisms that have evolved over a long period of time in extremely arid areas, such as deserts.

    Finally, the authors noted that much of the design of bio-inspired water-driven electricity generators remains at the lab-bench stage, with such devices confronting only a fairly mild experimental setting rather than the rough and tumble of real-world conditions.

    Even in the laboratory, these technologies survive only a few days or at most a few months. That compares poorly with the roughly 25-year lifespan of a solar panel or the half-century or longer of a nuclear plant or hydroelectric dam. There may be use cases, perhaps in medical applications, where a short lifespan poses few problems or is even desirable, but for wider adoption of the technology, such unsatisfactory lifespans will need to be overcome.

     


     

    About Nano Research Energy 

    Nano Research Energy is an international, open-access, interdisciplinary journal launched by Tsinghua University Press. It publishes research on cutting-edge advanced nanomaterials and nanotechnology for energy, and is dedicated to exploring energy-related research that makes use of nanomaterials and nanotechnology, including but not limited to energy generation, conversion, storage, conservation and clean energy. Nano Research Energy publishes four types of manuscripts in open-access form: Communications, Research Articles, Reviews, and Perspectives.

     

    About SciOpen 

    SciOpen is a professional open-access platform for the discovery of scientific and technical content published by Tsinghua University Press and its publishing partners, providing the scholarly publishing community with innovative technology and market-leading capabilities. SciOpen provides end-to-end services across manuscript submission, peer review, content hosting, analytics, identity management and expert advice to support each journal’s development, offering a range of options across functions such as journal layout, production services, editorial services, marketing and promotion, and online functionality. By digitalizing the publishing process, SciOpen widens the reach, deepens the impact, and accelerates the exchange of ideas.

    Tsinghua University Press

  • Study reveals average age at conception for men versus women over past 250,000 years

    Study reveals average age at conception for men versus women over past 250,000 years


    Newswise — BLOOMINGTON, Ind. — The length of a specific generation can tell us a lot about the biology and social organization of humans. Now, researchers at Indiana University can determine the average age that women and men had children throughout human evolutionary history with a new method they developed using DNA mutations.

    The researchers said this work can help us understand the environmental challenges experienced by our ancestors and may also help us in predicting the effects of future environmental change on human societies.

    “Through our research on modern humans, we noticed that we could predict the age at which people had children from the types of DNA mutations they left to their children,” said study co-author Matthew Hahn, Distinguished Professor of biology in the College of Arts and Sciences and of computer science in the Luddy School of Informatics, Computing and Engineering at IU Bloomington. “We then applied this model to our human ancestors to determine what age our ancestors procreated.”

    According to the study, published today in Science Advances and co-authored by IU post-doctoral researcher Richard Wang, the average age that humans had children throughout the past 250,000 years is 26.9. Furthermore, fathers were consistently older, at 30.7 years on average, than mothers, at 23.2 years on average, but the age gap has shrunk in the past 5,000 years, with the study’s most recent estimates of maternal age averaging 26.4 years. The shrinking gap seems to largely be due to mothers having children at older ages.

    Other than the recent uptick in maternal age at childbirth, the researchers found that parental age has not increased steadily over time and may have dipped around 10,000 years ago because of population growth coinciding with the rise of civilization.

    “These mutations from the past accumulate with every generation and exist in humans today,” Wang said. “We can now identify these mutations, see how they differ between male and female parents, and how they change as a function of parental age.”

    Children’s DNA inherited from their parents contains roughly 25 to 75 new mutations, which allows scientists to compare the parents and offspring, and then to classify the kind of mutation that occurred. When looking at mutations in thousands of children, IU researchers noticed a pattern: The kinds of mutations that children get depend on the ages of the mother and the father.

    Previous genetic approaches to estimating historical generation times relied on the compounding effects of either recombination or mutation, measured from the divergence of modern human DNA sequences from ancient samples. But the results were averaged across both males and females and across the past 40,000 to 45,000 years.

    Hahn, Wang and their co-authors built a model that uses de novo mutations — a genetic alteration that is present for the first time in one family member as a result of a variant or mutation in a germ cell of one of the parents or that arises in the fertilized egg during early embryogenesis — to separately estimate the male and female generation times at many different points throughout the past 250,000 years.
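    The estimation idea can be illustrated with a toy sketch. Everything below is hypothetical (synthetic data, a simple linear model), not the authors' actual method: the premise is only that if each mutation type's share of a child's de novo mutations drifts roughly linearly with parental age, an ordinary least-squares fit can invert that relationship to predict age from a mutation spectrum.

    ```python
    import numpy as np

    # Illustrative sketch, NOT the study's model: predict parental age from the
    # fractions of each de novo mutation type, assuming those fractions shift
    # linearly with age.
    rng = np.random.default_rng(0)

    n_trios, n_types = 200, 6            # hypothetical trio count and mutation classes
    true_ages = rng.uniform(18, 45, n_trios)

    # Assumed generative model: each type's expected fraction drifts with age.
    base = rng.dirichlet(np.ones(n_types))          # baseline spectrum
    slope = rng.normal(0.0, 0.002, n_types)         # per-type drift per year
    spectra = base + np.outer(true_ages - 30.0, slope)
    spectra += rng.normal(0.0, 0.01, spectra.shape)  # sampling noise

    # Fit age ~ spectrum by ordinary least squares (with an intercept column).
    X = np.column_stack([spectra, np.ones(n_trios)])
    coef, *_ = np.linalg.lstsq(X, true_ages, rcond=None)
    predicted = X @ coef

    r = np.corrcoef(true_ages, predicted)[0, 1]
    print(f"correlation between true and predicted parental age: {r:.2f}")
    ```

    The real analysis is far richer (separate maternal and paternal signals, many mutation classes, population-level averaging over time), but the core inversion from mutation spectrum to age follows this shape.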

    The researchers were not originally seeking to understand the relationship between gender and age at conception over time; they were conducting a broader investigation into the number of mutations passed from parents to children. They only noticed the age-based mutation patterns while trying to understand differences and similarities between these patterns in humans and other mammals, such as cats, bears and macaques.

    “The story of human history is pieced together from a diverse set of sources: written records, archaeological findings, fossils, etc.,” Wang said. “Our genomes, the DNA found in every one of our cells, offer a kind of manuscript of human evolutionary history. The findings from our genetic analysis confirm some things we knew from other sources (such as the recent rise in parental age), but also offer a richer understanding of the demography of ancient humans. These findings contribute to a better understanding of our shared history.”

    Additional contributors to this research were Samer I. Al-Saffar, a graduate student at IU at the time of the study, and Jeffrey Rogers of the Baylor College of Medicine.

    Indiana University

  • Fathoming the hidden heatwaves that threaten coral reefs

    Fathoming the hidden heatwaves that threaten coral reefs


    Newswise —   In April to May 2019, the coral reefs near the French Polynesian island of Moorea in the central South Pacific Ocean suffered severe and prolonged thermal bleaching. The catastrophe occurred despite the absence of El Niño conditions that year, intriguing ocean scientists around the world.

    An international research team led by Prof. Alex WYATT of the Department of Ocean Science at The Hong Kong University of Science and Technology has investigated this surprising and paradoxical coral bleaching episode. The unexpected event was related to the passage of anti-cyclonic eddies that elevated sea levels and concentrated hot water over the reef, leading to an underwater marine heatwave that was largely hidden from view at the surface. The findings have recently been published in Nature Communications.

    Most studies of coral bleaching patterns rely on sea-surface measurements of water temperature, which cannot capture the full picture of the threat ocean heating poses to marine ecosystems, including tropical coral reefs. These surface measurements, conducted over broad areas with satellites, are valuable, yet they cannot detect heating below the surface that influences communities living deeper than the shallowest few metres of the ocean.

        Prof. Wyatt and colleagues analyzed data collected at Moorea over 15 years from 2005 to 2019, taking advantage of a rare combination of remotely sensed sea-surface temperatures and high-resolution, long-term in-situ temperatures and sea level anomalies. Results showed that the passage of anti-cyclonic eddies in the open ocean past the island raised sea levels and pushed internal waves down into deeper water. Internal waves travel along the interface between the warm surface layer of the ocean and cooler layers below, and, in a previous study also led by Prof. Wyatt, have been shown to provide frequent cooling of coral reef habitats. The present research shows that, as a result of the anti-cyclones, internal wave cooling was shut down in early 2019, as well as during some earlier heatwaves.  This led to unexpected heating over the reef, which in turn caused large-scale coral bleaching and subsequent mortality. Unfortunately for local reef biodiversity, the extensive coral death in 2019 has offset the recovery of coral communities that had been occurring around Moorea for the last decade.

        A notable observation, in contrast to the 2019 heatwave, was that the reefs in Moorea did not undergo significant bleaching mortality in 2016, despite the prevailing super El Niño that brought warm conditions and decimated many shallow reefs worldwide. The new research demonstrates the importance of collecting temperature data across the range of depths that coral reefs occupy because the capacity to predict coral bleaching can be lost with a focus only on surface conditions. Sea-surface temperature data would predict moderate bleaching in both 2016 and 2019 at Moorea. However, direct observations showed that there was only ecologically insignificant bleaching in 2016, with heating that was short in duration and restricted to shallow depths. The severe and prolonged marine heatwave in 2019 would have been overlooked if researchers only had access to sea-surface temperature data, and the resulting catastrophic coral bleaching may have been incorrectly ascribed to causes other than heating.

        “The present study highlights the need to consider environmental dynamics across depths relevant to threatened ecosystems, including those due to the passage of underwater ocean weather events.  This kind of analysis depends on long-term, in situ data measured across ocean depths, but such data is generally lacking,” Prof. Wyatt said.  

        “Our paper provides a valuable mechanistic example for assessing the future of coastal ecosystems in the context of changing ocean dynamics and climates.”

    This HKUST-led research was conducted in collaboration with a team of scientists from Scripps Institution of Oceanography at the University of California San Diego, the University of California Santa Barbara, California State University, Northridge, and Florida State University. The data underlying this study were made possible by coupled long-term physical and ecological observations conducted at the Moorea Coral Reef Long-Term Ecological Research (LTER) site. The long-term analyses conducted here, and the concurrent monitoring of physical conditions and biological dynamics across the full range of depths of island and coastal marine communities, are a model for future research that aims to protect vulnerable living resources in the ocean.

    Hong Kong University of Science and Technology

  • Vaccine and prior SARS-CoV-2 infection confer long-lasting protection against omicron BA.5

    Vaccine and prior SARS-CoV-2 infection confer long-lasting protection against omicron BA.5


    Newswise — A new study published today in the scientific journal Lancet Infectious Diseases* shows that the protection conferred by hybrid immunity against the SARS-CoV-2 subvariant omicron BA.5, acquired when vaccinated people are infected, lasts for at least eight months after the first infection. The study was led by Luís Graça, group leader at the Instituto de Medicina Molecular João Lobo Antunes (iMM, Lisbon) and full professor at the Medical School of the University of Lisbon, and Manuel Carmo Gomes, associate professor with aggregation at the Faculty of Sciences of the University of Lisbon (Ciências ULisboa), both members of the Direção Geral de Saúde (DGS) Technical Committee for Vaccination against COVID-19 (CTVC).

    This study follows the results published in September by the same researchers in the New England Journal of Medicine** where they showed, by studying the widely vaccinated Portuguese population, that infection by the first omicron subvariants of SARS-CoV-2, circulating in January and February 2022, conferred considerable protection against the omicron BA.5 subvariant circulating in Portugal since June and which remains the predominant variant in many countries. However, the stability of the protection conferred by the so-called hybrid immunity, the immunity conferred by the combination of vaccination and infection, was not yet known. 

    “In September, we had observed that infection by the first omicron subvariants conferred protection against the BA.5 subvariant about four times higher than that of vaccinated people who were never infected, showing the importance of hybrid immunity for protection against new infections. Now, we show that this protection, conferred by vaccination together with previous infection, is stable and maintained until at least eight months after the first infection”, explains Luís Graça, co-leader of the study.

    As in the previous study, the researchers used the national COVID-19 case registry through September 2022, which is especially comprehensive because registering every case of SARS-CoV-2 infection was, at the time, a legal requirement for access to sick leave during mandatory isolation. “We used the national COVID-19 case registry to obtain information on all SARS-CoV-2 infections in the population over 12 years old residing in Portugal. These data from the Portuguese population allow us to draw conclusions about hybrid immunity because vaccination had already covered 98% of this population by the end of 2021. The virus variant of each infection was determined from the date of infection and the dominant variant at that time”, explains Manuel Carmo Gomes, co-leader of the study.

    About the calculations performed with these data, João Malato, first author of the study, explains: “With these data, we calculated the relative risk of reinfection over time in people vaccinated with previous infections by the first omicron subvariants of SARS-CoV-2, allowing us to conclude on the level of protection against reinfection. We found that protection remains high 8 months after contact with the virus.” 

    “The protection afforded by hybrid immunity is initially about 90%, reducing after 5 months to about 70%, and showing a tendency to stabilize at a value of around 65% after 8 months, compared to the protection in vaccinated persons that were never infected by the virus. These results show that hybrid immunity conferred by infection with previous subvariants of SARS-CoV-2 in vaccinated people is quite stable”, adds Luís Graça about the protection conferred by hybrid immunity.
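    The arithmetic behind these percentages can be sketched as follows. The reinfection rates below are illustrative placeholders, not the study's data; only the relationship (protection equals one minus the relative risk of reinfection in the hybrid-immune group versus the vaccinated-but-never-infected group) reflects the comparison described above.

    ```python
    # Illustrative sketch with hypothetical rates, not the study's raw data.
    def protection(rate_hybrid: float, rate_vaccinated_only: float) -> float:
        """Protection = 1 - relative risk of reinfection, as a fraction."""
        return 1.0 - rate_hybrid / rate_vaccinated_only

    # Hypothetical reinfection rates per 1,000 person-months at risk.
    print(f"{protection(1.0, 10.0):.0%}")   # shortly after infection
    print(f"{protection(3.0, 10.0):.0%}")   # around five months
    print(f"{protection(3.5, 10.0):.0%}")   # stabilizing after eight months
    ```

    With these placeholder rates the three calls give 90%, 70% and 65%, mirroring the trajectory quoted above.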

    This study shows that infection by previous subvariants of the SARS-CoV-2 virus, which causes COVID-19, has the ability to confer additional protection compared to the protection conferred by vaccination alone, and that this protection is stable.

     

    This work was developed at the Instituto de Medicina Molecular João Lobo Antunes (iMM, Lisboa) and the Direção Geral de Saúde, in collaboration with researchers from the Centro de Estatística e Aplicações da Universidade de Lisboa, the Faculdade de Ciências da Universidade de Lisboa and Los Alamos National Laboratory (USA). This work was funded by the European Union’s Horizon 2020 research and innovation programme, the Fundação para a Ciência e a Tecnologia (FCT, Portugal) and the National Institutes of Health.

     

    * João Malato, Ruy M Ribeiro, Eugénia Fernandes, Pedro P Leite, Pedro Casaca, Carlos Antunes, Válter R Fonseca, Manuel Carmo Gomes, Luís Graça. (2022) Stability of hybrid vs. vaccine immunity against BA.5 infection over 8 months. Lancet Infectious Diseases. 

    ** João Malato, Ruy M Ribeiro, Pedro P Leite, Pedro Casaca, Eugénia Fernandes, Carlos Antunes, Válter R Fonseca, Manuel C Gomes, Luís Graça. (2022) Risk of BA.5 Infection among Persons Exposed to Previous SARS-CoV-2 Variants. New England Journal of Medicine. 387(10):953-954. doi: 10.1056/NEJMc2209479.

    Instituto de Medicina Molecular

  • Lithium-sulfur batteries are one step closer to powering the future

    Lithium-sulfur batteries are one step closer to powering the future


    Newswise — With a new design, lithium-sulfur batteries could reach their full potential.

    Batteries are everywhere in daily life, from cell phones and smart watches to the increasing number of electric vehicles. Most of these devices use well-known lithium-ion battery technology. And while lithium-ion batteries have come a long way since they were first introduced, they have some familiar drawbacks as well, such as short lifetimes, overheating and supply chain challenges for certain raw materials.

    Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are researching solutions to these issues by testing new materials in battery construction. One such material is sulfur. Sulfur is extremely abundant and cost-effective, and sulfur-based batteries can hold more energy than traditional lithium-ion batteries.

    In a new study, researchers advanced sulfur-based battery research by creating a layer within the battery that adds energy storage capacity while nearly eliminating a traditional problem with sulfur batteries that caused corrosion.

    “These results demonstrate that a redox-active interlayer could have a huge impact on Li-S battery development. We’re one step closer to seeing this technology in our everyday lives.” — Wenqian Xu, a beamline scientist at APS

    A promising battery design pairs a sulfur-containing positive electrode (cathode) with a lithium metal negative electrode (anode). In between those components is the electrolyte, or the substance that allows ions to pass between the two ends of the battery.

    Early lithium-sulfur (Li-S) batteries did not perform well because sulfur species (polysulfides) dissolved into the electrolyte, causing its corrosion. This polysulfide shuttling effect negatively impacts battery life and lowers the number of times the battery can be recharged.

    To prevent this polysulfide shuttling, previous researchers tried placing a redox-inactive interlayer between the cathode and anode. The term ​“redox-inactive” means the material does not undergo reactions like those in an electrode. But this protective interlayer is heavy and dense, reducing energy storage capacity per unit weight for the battery. It also does not adequately reduce shuttling. This has proved a major barrier to the commercialization of Li-S batteries.

    To address this, researchers developed and tested a porous sulfur-containing interlayer. Tests in the laboratory showed initial capacity about three times higher in Li-S cells with this active, as opposed to inactive, interlayer. More impressively, the cells with the active interlayer maintained high capacity over 700 charge-discharge cycles.

    “Previous experiments with cells having the redox-inactive layer only suppressed the shuttling, but in doing so, they sacrificed the energy for a given cell weight because the layer added extra weight,” said Guiliang Xu, an Argonne chemist and co-author of the paper. ​“By contrast, our redox-active layer adds to energy storage capacity and suppresses the shuttle effect.”

    To further study the redox-active layer, the team conducted experiments at the 17-BM beamline of Argonne’s Advanced Photon Source (APS), a DOE Office of Science user facility. The data gathered from exposing cells with this layer to X-ray beams allowed the team to ascertain the interlayer’s benefits.

    The data confirmed that a redox-active interlayer can reduce shuttling, reduce detrimental reactions within the battery and increase the battery’s capacity to hold more charge and last for more cycles. ​“These results demonstrate that a redox-active interlayer could have a huge impact on Li-S battery development,” said Wenqian Xu, a beamline scientist at APS. ​“We’re one step closer to seeing this technology in our everyday lives.”

    Going forward, the team wants to evaluate the growth potential of the redox-active interlayer technology. ​“We want to try to make it much thinner, much lighter,” Guiliang Xu said.

    A paper based on the research appeared in the Aug. 8 issue of Nature Communications. Khalil Amine, Tianyi Li, Xiang Liu, Guiliang Xu, Wenqian Xu, Chen Zhao and Xiao-Bing Zuo contributed to the paper.

    This research was sponsored by the DOE’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office Battery Materials Research Program and the National Research Foundation of Korea.

    About the Advanced Photon Source

    The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

    This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

    The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Argonne National Laboratory

  • How climate change impacts the Indian Ocean dipole, leading to severe droughts and floods

    How climate change impacts the Indian Ocean dipole, leading to severe droughts and floods


    Newswise — PROVIDENCE, R.I. [Brown University] — With a new analysis of long-term climate data, researchers say they now have a much better understanding of how climate change can cause sea water temperatures on one side of the Indian Ocean to be much warmer or cooler than temperatures on the other — a phenomenon that can lead to sometimes deadly weather-related events like megadroughts in East Africa and severe flooding in Indonesia.

    The analysis, described in a new study in Science Advances by an international team of scientists led by researchers from Brown University, compares 10,000 years of past climate conditions reconstructed from different sets of geological records to simulations from an advanced climate model.

    The findings show that about 18,000 to 15,000 years ago, as freshwater from the melting of the massive ice sheet that once covered much of North America poured into the North Atlantic, the ocean currents that kept the Atlantic Ocean warm weakened, setting off a chain of events in response. The weakening of the system ultimately led to the strengthening of an atmospheric loop in the Indian Ocean that keeps warmer water on one side and cooler water on the other.

    This extreme weather pattern, known as a dipole, prompts one side (either east or west) to have higher-than-average rainfall and the other to have widespread drought. The researchers saw examples of this pattern in both the historical data they studied and the model’s simulation. They say the findings can help scientists not only better understand the mechanisms behind the east-west dipole in the Indian Ocean, but can one day help to produce more effective forecasts of drought and flood in the region.

    “We know that in the present day, gradients in the temperature of the Indian Ocean are important to rainfall and drought patterns, especially in East Africa, but it’s been challenging to show that those gradients change on long time-scales and to link them to long-term rainfall and drought patterns on both sides of the Indian Ocean,” said James Russell, a study author and professor of Earth, environmental, and planetary sciences at Brown. “We now have a mechanistic basis to understand why some of the longer-term changes in rainfall patterns in the two regions have changed through time.”

    In the paper, the researchers explain the mechanisms behind how the Indian Ocean dipole they studied formed and the weather-related events it led to during the period they looked at, which covered the end of the last Ice Age and the start of the current geological epoch.

    The researchers characterize the dipole as an east-west dipole where the water on the western side — which borders modern day East African countries like Kenya, Ethiopia and Somalia — is cooler than the water on eastern side toward Indonesia. They saw that the warmer water conditions of the dipole brought greater rainfall to Indonesia, while the cooler water brought much drier weather to East Africa.

    That fits the pattern often seen in recent Indian Ocean dipole events. In October, for example, heavy rain led to floods and landslides on the Indonesian islands of Java and Sulawesi, leaving four people dead and affecting over 30,000 people. On the opposite side, Ethiopia, Kenya and Somalia have experienced intense droughts starting in 2020 that threatened to cause famine.

    The changes the authors observed 17,000 years ago were even more extreme, including the complete drying of Lake Victoria — one of the largest lakes on Earth.

    “Essentially, the dipole intensifies dry conditions and wet conditions that could result in extreme events like multi-year or decades-long dry events in East Africa and flooding events in South Indonesia,” said Xiaojing Du, a Voss postdoctoral researcher in the Institute at Brown for Environment and Society and Brown’s Department of Earth, Environmental and Planetary Sciences, and the study’s lead author. “These are events that impact people’s lives and also agriculture in those regions. Understanding the dipole can help us better predict and better prepare for future climate change.”

    The dipole the researchers studied formed from the interactions between the heat transport system of the Atlantic Ocean and an atmospheric loop, called a Walker Circulation, in the tropical Indian Ocean. The lower part of the atmospheric loop flows east to west across much of the region at low altitudes near the ocean surface, and the upper part flows west to east at higher altitudes. The higher air and lower air connect in one big loop.

    Interruption and weakening of the Atlantic Ocean heat transport, which works like a conveyor belt made of ocean and wind currents, was brought on by massive melting of the Laurentide ice sheet that once covered most of Canada and the northern U.S. The melting cooled the Atlantic and consequent wind anomalies triggered the atmospheric loop over the tropical Indian Ocean to become more active and extreme. That then led to increased precipitation in the east side of the Indian Ocean (where Indonesia sits) and reduced precipitation in the west side, where East Africa sits.

    The researchers also show that during the period they studied, this effect was amplified by a lower sea level and the exposure of nearby continental shelves.

    The scientists say more research is needed to figure out exactly what effect the exposed continental shelves and lower sea level had on the Indian Ocean’s east-west dipole, but they’re already planning to expand the work to investigate the question. While this line of work on lower sea levels won’t factor into modeling future conditions, their investigation of how the melting of ancient glaciers affected the Indian Ocean dipole and the Atlantic Ocean’s heat transport system may provide key insights into future changes as climate change brings about more melting.

    “Greenland is currently melting so fast that it’s discharging a lot of freshwater into the North Atlantic Ocean in ways that are impacting the ocean circulation,” Russell said. “The work done here has provided a new understanding of how changes in the Atlantic Ocean circulation can impact Indian Ocean climate and through that rainfall in Africa and Indonesia.”

    The study was supported with funding from the Institute at Brown for Environment and Society and the National Science Foundation.

    Brown University


  • Study discovers triple immunotherapy combination as possible treatment for pancreatic cancer

    Newswise — HOUSTON ― Researchers at The University of Texas MD Anderson Cancer Center have discovered a novel immunotherapy combination, targeting checkpoints in both T cells and myeloid suppressor cells, that successfully reprogrammed the tumor immune microenvironment (TIME) and significantly improved anti-tumor responses in preclinical models of pancreatic cancer.

    In this study, published today in Nature Cancer, researchers used comprehensive immune profiling in mouse and human pancreatic cancers to systematically identify mechanisms of immunotherapy resistance and investigate potential therapeutic targets. They found that neutralizing several distinct immunosuppressive mechanisms of the TIME dramatically improved survival rates in laboratory models, pointing to a potential treatment option for this notoriously lethal and unresponsive cancer.  

    “This triple combination therapy led to an unprecedented curative response in our models,” said corresponding author Ronald DePinho, M.D., professor of Cancer Biology. “The prevailing view has been that pancreatic cancer is impervious to immunotherapy, but this preclinical study shows that it can be vulnerable to the right combination therapy. Moreover, the presence of these targets in human pancreatic cancer specimens raises the exciting possibility that such therapeutic combinations could one day help our patients.”

    Pancreatic cancer is one of the leading causes of cancer death in the United States, partially because 80% of cases are diagnosed at an advanced stage. Pancreatic cancer is also considered to be “non-immunogenic,” meaning it is unresponsive to commonly used anti-PD-1 and anti-CTLA-4 immune checkpoint inhibitors. This is due in part to the immunosuppressive conditions in the TIME, but the mechanisms behind this resistance are not fully understood.

    The researchers used high-dimensional immune profiling and single-cell RNA sequencing to study how the TIME is affected by a variety of immunotherapies. They identified specific immune checkpoint proteins, 41BB and LAG3, that were highly expressed in exhausted T cells.

    In testing antibodies targeting these checkpoints, the researchers observed that models treated with a 41BB agonist and LAG3 antagonist in combination had slower tumor progression, higher levels of anti-tumor immunity indicators and significantly improved survival rates compared to treatment with either antibody alone or with other checkpoint inhibitors. Notably, these preclinical studies faithfully mirrored the human data in showing a lack of efficacy for anti-PD-1 and anti-CTLA-4 therapy.

    The researchers also confirmed these two therapeutic targets are present in human pancreatic cancer samples, with 81% and 93% of patients analyzed having T cells with 41BB and LAG3 expression, respectively. 

    Because this dual-therapy combination did not completely eliminate established tumors, the investigators also examined efforts to reprogram the TIME to further sensitize tumors to immunotherapy. At baseline, the TIME contained an abundance of myeloid-derived suppressor cells (MDSCs) expressing CXCR2, a protein associated with recruiting immunosuppressive cells. Inhibiting CXCR2 alone decreased MDSC migration and blocked tumor growth, but it was not curative. This prompted the investigators to consider a combination targeting 41BB, LAG3 and CXCR2.

    It was this triple combination that resulted in complete tumor regression and improved overall survival in 90% of preclinical models. In a more stringent lab model that develops multiple spontaneously arising tumors with higher treatment resistance, the combination achieved complete tumor regression in over 20% of cases.

    “These are encouraging results, especially considering the lack of effective immunotherapy options in pancreatic cancer,” DePinho said. “By targeting multiple synergistic mechanisms that get in the way of the immune response, we can give T cells a fighting chance to attack these tumors. Of course, we still need to see how this combination translates into a safe and effective regimen in the clinic, and we invite other researchers to build upon these results. We are optimistic that pancreatic cancers, and hopefully other non-immunogenic cancers, can ultimately be rendered vulnerable to combination immunotherapy.”

    The authors point out that these particular immunotherapy agents currently are undergoing clinical trials as monotherapies, suggesting potential opportunities to rapidly translate this triple combination into clinical studies.

    This work was supported by the National Institutes of Health/National Cancer Institute (P01 CA117969, R01CA240526, R01CA236864, R01CA231349, R01CA220236, P50CA221707), the Elsa U. Pardee Foundation, MD Anderson’s Advanced Scholar Program, the Eleanor Russo Fund for Pancreatic Research, the Ralph A. Loveys Family Charitable Foundation, the Cultural & Charitable Club of Somerset Run, the New Jersey Health Foundation, the Sheikh Ahmed Bin Zayed Al Nahyan Center for Pancreatic Cancer Research, and MD Anderson’s Pancreatic Cancer Moon Shot®. A full list of collaborating authors and their disclosures can be found with the full paper here.

     

    – 30 –

    University of Texas M. D. Anderson Cancer Center


  • Wristwatch device gives therapists opportunity to guide PTSD patients through treatment

    Newswise — Sights, smells and sounds of everyday life can supply the triggers that take someone with PTSD right back to the scarring scene they’re trying to forget. 

    With PTSD, or post-traumatic stress disorder, a honking horn, a crowded coffeehouse or a sharp scent can bring back traumatic memories that can raise the heart rate, increase muscle tension and lead to anxiety and depression. These reactions occur even without the presence of danger, but they pose their own threat by causing strains on relationships at home and work, igniting the need to avoid certain situations and contributing to mood changes.

    PTSD can happen to anyone at any age, according to the National Institutes of Health, and treatment options include medications as well as therapy. Researchers at MUSC Health recently published a paper in the Journal of Psychiatric Research in which they worked with medical device company Zeriscope to test a device called Bio Ware, which is designed to enhance the effects of prolonged exposure therapy, a common, evidence-based therapy for patients with PTSD.

    And with between 11 and 30% of veterans experiencing symptoms of PTSD, the research team looked at using Bio Ware with service members at the Ralph H. Johnson VA Medical Center specifically. 

    With in vivo exposures, which are a key component of prolonged exposure therapy, patients are tasked with putting themselves in safe but uncomfortable or triggering situations outside of their therapy sessions, as a form of homework. If they have a fear of crowded spaces, for example, their therapist may ask them to go to the grocery store at a busy time and then share their reaction at the next therapy session. If the service member is stressed by loud spaces and avoids them, their therapist may send them to a loud sporting event, for example, in an effort to help them learn to feel more comfortable in those situations and not have to avoid them in the future.  

    When done properly, in vivo exposures have proven successful and helpful to patients, but with so much relying on the patient and their interpretation of their own stressors, Sudie Back, Ph.D., a professor in the department of psychiatry at MUSC Health and principal investigator for the NIMH-funded study, sees room for error. 

    “What I find so exciting about this new Bio Ware device,” she said, “is that when used alongside evidence-based exposure treatment methods for PTSD, we’ve seen significantly better results for our patients.” Back and her team saw significant decreases in both PTSD symptoms and depression symptoms among patients who used the new technology.

    As a wearable device, the Bio Ware system includes a discreet button-shaped camera attached to the patient’s clothing, a watch-sized tool around their wrist and a Bluetooth headphone in their ear so their therapists can be virtually with them in the experience or situation that causes them stress. The clinician can see immediate recordings of the patient’s heart rate, breathing and emotional distress, and they can guide them through the experience by either pushing them to do more or pulling them back to do less, to optimize the in vivo exposure.

    According to Back, “This is the first time, to my knowledge, that we’ve been able to virtually go with patients during their in vivo exposures and have instant access to their physiological data in the moment to really help them get the most out of those exercises, which I believe will translate into them seeing significant reductions in their PTSD symptoms.”

    Bill Harley, the co-founder and CEO of Zeriscope, compares it to working out on your own versus with a personal trainer.

    “Communicating with patients while simultaneously seeing their biophysics is incredibly helpful,” he said. “A lot of healing happens in the in vivo exposures, and Bio Ware enriches that experience.”

    The “special sauce” created with Bio Ware lies in the autonomic nervous system, according to Robert Adams, M.D., the president and co-founder of Zeriscope and a professor of neurology at MUSC Health. Previously developed watches aimed for something similar, but they only collected pulse information. This system goes a level deeper, he says, by directly questioning the autonomic nervous system.

    The autonomic nervous system controls physiologic reactions like heart rate, blood pressure and breathing. By using the same technology used in lie detector tests, physicians can monitor the patient’s galvanic skin response, adjust the triggering experience accordingly and watch how the actions they direct the patient to take affect the autonomic nervous system.

    One of these days, they know they gotta get goin’,

    Out of the door and down the street all alone.

    Adams thinks the line from the Grateful Dead song “Truckin’” summarizes the need for Bio Ware. “It’s an expression of what exposure therapy really is. You’ve got to go back out into the real world on your own, but we can help.”

     

    ###

    About MUSC

    Founded in 1824 in Charleston, MUSC is the state’s only comprehensive academic health system, with a unique mission to preserve and optimize human life in South Carolina through education, research and patient care. Each year, MUSC educates more than 3,000 students in six colleges – Dental Medicine, Graduate Studies, Health Professions, Medicine, Nursing and Pharmacy – and trains more than 850 residents and fellows in its health system. MUSC brought in more than $327.6 million in research funds in fiscal year 2021, leading the state overall in research funding. MUSC also leads the state in federal and National Institutes of Health funding, with more than $220 million. For information on academic programs, visit musc.edu.

    Medical University of South Carolina


  • Delaying antibiotics for neutropenic fever may not affect survival of cancer inpatients

    Newswise — December 29, 2022 — In cancer patients with neutropenic fever, delaying antibiotic treatment past 60 minutes from the time of fever detection does not reduce the short-term chance of survival, according to a study in the American Journal of Medical Quality. The journal is published in the Lippincott portfolio by Wolters Kluwer.

    Neutropenia—low levels of white blood cells called neutrophils, which fight infection—develops in more than 80% of patients who receive chemotherapy for a blood cancer. It occurs because chemotherapy destroys neutrophils along with tumor cells.

    A fever in a patient with neutropenia is considered a medical emergency, according to Adam Binder, MD, of Thomas Jefferson University Hospital in Philadelphia, and colleagues. The fever signals a severe decrease in neutrophils and therefore a compromised ability of the immune system to ward off infections. Neutropenic fever is defined as a temperature of at least 101°F, or a sustained temperature of at least 100.4°F for an hour or more.

    The Infectious Disease Society of America and the American Society of Clinical Oncology have both published guidelines for treating outpatients who have neutropenic fever. Both organizations call for administering an intravenous antibiotic within 60 minutes after the fever is detected. The recommendation about antibiotics is also often applied to the treatment of hospital inpatients, but there’s no clear evidence that’s appropriate.

    Comparing inpatients who did or did not receive antibiotics during the recommended treatment window

    Dr. Binder and his colleagues looked back at data on 187 patients at their hospital who had developed neutropenic fever. Their main goal was to see whether delays in antibiotic treatment affected short-term survival.

    Only 14% of patients received antibiotics within 60 minutes of developing neutropenic fever. Their survival rate 6 months later wasn’t significantly better than the survival rate of patients who received antibiotics later than recommended.

    Further analysis identified several factors that had a statistically significant association with the risk of death:

    • Patients with insurance had a 72% lower risk of death than those without insurance
    • Patients with at least one other major medical condition had a 2.7 times greater risk of death than those with blood cancer alone
    • Patients who were treated with antibiotics within 40 minutes actually had a 5.7 times greater risk of death than those who didn’t receive antibiotics so quickly

    A possible explanation for the last finding, the researchers say, is that patients who received antibiotics within 40 minutes “had other symptoms that yielded a concerning clinical picture, thus leading to a timelier administration of antibiotics, but ultimately a worse clinical outcome.”

    Guidelines for treatment of outpatients may not apply to hospitalized patients

    Even a delay of more than 4 hours wasn’t long enough to affect survival, Dr. Binder and his colleagues determined. That result is consistent with information from previous studies of inpatients, they say.

    The authors believe existing treatment guidelines are appropriate for patients with neutropenic fever who are treated in a physician’s office or an emergency department, but other factors must be considered for patients who have been admitted to a hospital. “Unlike neutropenic fever patients presenting to the emergency department, where true time to antibiotic administration may often be many hours or even days before arrival, a few hours long [delay] in the hospital may not be sufficiently long enough to cause significant patient harm.”

    Read [Delay in Time to Antibiotics for De Novo Inpatient Neutropenic Fever May Not Impact Overall Survival for Patients With a Cancer Diagnosis]

    ###

    About the American Journal of Medical Quality

    The American Journal of Medical Quality (AJMQ) is the official journal of the American College of Medical Quality. AJMQ is focused on keeping readers informed of the resources, processes, and perspectives contributing to quality health care services. This peer-reviewed journal presents a forum for the exchange of ideas, strategies, and methods in improving the delivery and management of health care.

    About the American College of Medical Quality

    The American College of Medical Quality (ACMQ) is the organization for healthcare professionals responsible for providing leadership in quality and safety outcomes, who want or need the tools, experience, and expertise to improve the quality and safety of patient care. Membership in ACMQ provides a gateway to resources, programs, and professional development opportunities and a greater recognition of quality issues by the entire healthcare field.

    About Wolters Kluwer

    Wolters Kluwer (WKL) is a global leader in professional information, software solutions, and services for clinicians, nurses, accountants, lawyers, and the tax, finance, audit, risk, compliance, and regulatory sectors. We help our customers make critical decisions every day by providing expert solutions that combine deep domain knowledge with advanced technology and services.

    Wolters Kluwer reported 2021 annual revenues of €4.8 billion. The group serves customers in over 180 countries, maintains operations in over 40 countries, and employs approximately 20,000 people worldwide. The company is headquartered in Alphen aan den Rijn, the Netherlands.

    Wolters Kluwer provides trusted clinical technology and evidence-based solutions that engage clinicians, patients, researchers and students in effective decision-making and outcomes across healthcare. We support clinical effectiveness, learning and research, clinical surveillance and compliance, as well as data solutions. For more information about our solutions, visit https://www.wolterskluwer.com/en/health and follow us on LinkedIn and Twitter @WKHealth.

    For more information, visit www.wolterskluwer.com, and follow us on Twitter, Facebook, LinkedIn, and YouTube.

    Wolters Kluwer Health: Lippincott


  • New Bacterial Therapy Approach to Treat Lung Cancer


    Newswise — New York, NY—December 23, 2022—Lung cancer is the deadliest cancer in the United States and around the world. Many of the currently available therapies have been ineffective, leaving patients with very few options. Bacterial therapy has emerged as a promising new strategy to treat cancer, and while this treatment modality has quickly progressed from laboratory experiments to clinical trials in the last five years, for certain types of cancers it may be most effective in combination with other drugs.

    Columbia Engineering researchers report that they have developed a preclinical evaluation pipeline for characterizing bacterial therapies in lung cancer models. Their new study, published December 13, 2022, in Scientific Reports, combines bacterial therapies with other treatment modalities to improve treatment efficacy without any additional toxicity. This new approach was able to rapidly characterize bacterial therapies and successfully integrate them with current targeted therapies for lung cancer.

    “We envision a fast and selective expansion of our pipeline to improve treatment efficacy and safety for solid tumors,” said first author Dhruba Deb, an associate research scientist who studies the effect of bacterial toxins on lung cancer in Professor Tal Danino’s lab in Biomedical Engineering. “As someone who has lost loved ones to cancer, I would like to see this strategy move from bench to bedside in the future.”

    The team used RNA sequencing to discover how cancer cells respond to bacteria at the cellular and molecular levels. They formed a hypothesis about which molecular pathways were helping the cancer cells resist the bacterial therapy. To test it, the researchers blocked these pathways with current cancer drugs and showed that combining the drugs with bacterial toxins is more effective at eliminating lung cancer cells. As an example, they validated the combination of bacterial therapy with an AKT inhibitor in mouse models of lung cancer.

    “This new study describes an exciting drug development pipeline that has been previously unexplored in lung cancer – the use of toxins derived from bacteria,” said Upal Basu Roy, executive director of research, LUNGevity Foundation, USA. “The preclinical data presented in the manuscript provides a strong rationale for continued research in this area, thereby opening up the possibility of new treatment options for patients diagnosed with this lethal disease.”

    Deb plans to expand his strategy to larger studies in preclinical models of difficult-to-treat lung cancers and collaborate with clinicians to make a push for the clinical translation. 

    ###

    About the Study

    Journal: Scientific Reports

    The study is titled: “Design of combination therapy for engineered bacterial therapeutics in non-small cell lung cancer.”

    Authors are: Dhruba Deb 1, Yangfan Wu 1, Courtney Coker 1, Tetsuhiro Harimoto 1, Ruoqi Huang 1 & Tal Danino 1,2,3

    1 Department of Biomedical Engineering, Columbia Engineering
    2 Herbert Irving Comprehensive Cancer Center, Columbia University
    3 Data Science Institute, Columbia University

    The study was funded by the Pershing Square Foundation (PSF) PSSCRA CU20-0730 (T.D.), the Cancer Research Institute (CRI) CRI 3446 (T.D.) and NIH-NIBIB R01 EB029750 (T.D.).

    The authors declare no financial or other conflicts of interest.

    ###

    LINKS:

    Paper: https://www.nature.com/articles/s41598-022-26105-1   

    DOI: 10.1038/s41598-022-26105-1

    Columbia University School of Engineering and Applied Science
