ReportWire

  • McMaster University experts available to speak on pneumonia cases in China

    Clusters of undiagnosed pneumonia in children have been reported in China. The cases are under investigation by the World Health Organization, as China says there are no “unusual or novel pathogens.”

    Matthew Miller, director of McMaster’s Michael G. DeGroote Institute for Infectious Disease Research and executive director of the Global Nexus School for Pandemic Prevention & Response, is available to speak on the current situation. You can reach Miller directly by emailing [email protected].

    Dawn Bowdish, associate professor in the Department of Medicine and a Canada Research Chair in aging and immunity, is also available to comment. You can contact Bowdish directly by emailing [email protected].

    If you need any other assistance, please contact Adam Ward, media relations officer with McMaster’s Faculty of Health Sciences at [email protected].

    McMaster University

  • Rough draft of Darwin’s Origin of species goes online

    Newswise — On the 164th anniversary of Charles Darwin’s Origin of species, the Darwin Online project at the National University of Singapore (NUS) will launch all the surviving draft pages of one of the most influential scientific books in history. After his book was published, the unsentimental Darwin discarded the hundreds of pages of the original handwritten draft of his epoch-making book into the Darwin family’s scrap paper pile. His children used some sheets for drawings, and others were torn in half by one of Darwin’s sons, who used the blank back sides for mathematical exercises.

    In the end, almost all of the draft pages were destroyed. Towards the end of Darwin’s life, his theory of evolution was more widely accepted and there was intense interest in the original draft of Origin of Species. Some were rescued from the piles of scrap paper and old notes and, over decades, many were given away as gifts especially by his children after his death. These draft pages are now dispersed around the world and some have probably been lost forever.

    Discovering Darwin’s manuscripts

    Today, the rough drafts of Darwin’s Origin of species are some of the most precious and valuable pieces of paper in the history of science, worth almost a million dollars each. The last one to sell at auction, in 2018, went for £490,000 (approximately US$600,000). The United Kingdom’s Minister for Arts, Heritage and Tourism placed an export bar on that manuscript, due to its cultural and national significance, in hopes of keeping it in the country.

    Until now, about 50 sheets were known to survive. This launch of the drafts by Darwin scholar Dr John van Wyhe of the NUS Department of Biological Sciences includes seven draft pages not found in previous lists, with three draft pages recently rediscovered – bringing the total to 59. This collection of draft pages includes unprecedented details about each sheet and its history. For example, one was donated by Darwin’s daughter Henrietta Litchfield to a Red Cross auction during WWI for the war wounded. It was purchased anonymously by the cotton merchant and aviation pioneer Sir Alfred Paton, who donated it to his old school, Clifton College. It was later sold at auction in 1999 for £39,500 to an anonymous buyer “in the Americas” and has never been seen again. Fortunately, it was photocopied by Clifton College and a photograph was printed in the auction catalogue.

    Uncovering the mysteries behind Darwin’s drafts

    Darwin’s handwriting is notoriously difficult to read. All of the drafts have been transcribed and edited to show where the text appears in the published book, so the two may be compared. The drafts make it possible to see in detail how Darwin originally composed and revised many of his arguments. The drafts total 11,700 words (7.7% of Origin of species) and contain many sentences that were never published, offering fascinating insights into Darwin’s thinking as he composed the book that changed the world. What would have happened if he had published the original version of some of his arguments? In one crossed-out sentence, Darwin wrote that “An instinct may almost be called an empty trick.”

    In a famous passage of the Origin of species, Darwin argued that natural selection could gradually transform an animal like a bear into something like a whale. He was mocked and criticised by reviewers so severely that he deleted the passage from all later editions. What would have happened if he had published the passage as originally written?

    In one of the drafts, this never-printed paragraph was revealed as follows:

    “In N. America a bear has been seen swimming for hours with widely open mouth, thus catching the minute crustaceans swimming on the surface. Even in so extreme a case as this, if the supply of minute crustaceans were constant, & there did not in the region exist better adapted competitors, I can see no difficulty in a race of Bears being rendered by natural selection more & more aquatic in habits & structure, with larger & larger mouth, till a creature was produced as monstrous in size & structure as a whale though feeding on prey so minute.”

    Darwin later made very extensive corrections to the first and second proofs which makes the text of the first draft differ even more from the published book. His son Francis recalled that “my mother looked over the proofs of the ‘Origin.’”

    The drafts can be viewed for free via a detailed illustrated introduction on the Darwin Online website.

    The drafts join the world’s largest collection of Darwin’s writings, both publications and handwritten manuscripts, Darwin Online.

    National University of Singapore (NUS)

  • Separating out signals recorded at the seafloor

    Newswise — Blame it on plate tectonics. The deep ocean is never preserved, but instead is lost to time as the seafloor is subducted. Geologists are mostly left with shallower rocks from closer to the shoreline to inform their studies of Earth history.

    “We have only a good record of the deep ocean for the last ~180 million years,” said David Fike, the Glassberg/Greensfelder Distinguished University Professor of Earth, Environmental, and Planetary Sciences in Arts & Sciences at Washington University in St. Louis. “Everything else is just shallow-water deposits. So it’s really important to understand the bias that might be present when we look at shallow-water deposits.”

    One of the ways that scientists like Fike use deposits from the seafloor is to reconstruct timelines of past ecological and environmental change. Researchers are keenly interested in how and when oxygen began to build up in the oceans and atmosphere, making Earth more hospitable to life as we know it.

    For decades they have relied on pyrite, the iron-sulfide mineral known as “fool’s gold,” as a sensitive recorder of conditions in the marine environment where it is formed. By measuring the bulk isotopic composition of sulfur in pyrite samples — the relative abundance of sulfur atoms with slightly different mass — scientists have tried to better understand ancient microbial activity and interpret global chemical cycles.
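    The “relative abundance of sulfur atoms with slightly different mass” is conventionally reported in delta notation, as a per-mil deviation of a sample’s 34S/32S ratio from a reference standard. A minimal sketch of the convention (the reference ratio below is the standard V-CDT value from the literature, not a number taken from these papers):

```python
# Sketch of the delta-34-S notation used for pyrite sulfur isotope data.
# The V-CDT reference ratio is the commonly cited standard value (an
# assumption here, not a figure from the study).
R_VCDT = 0.0441626  # 34S/32S of the Vienna Canyon Diablo Troilite standard

def delta34S(r_sample: float, r_ref: float = R_VCDT) -> float:
    """Per-mil deviation of a sample's 34S/32S ratio from the reference."""
    return (r_sample / r_ref - 1.0) * 1000.0

# A sample whose 34S/32S ratio is 0.5% higher than the standard
# comes out at about +5 per mil:
print(delta34S(R_VCDT * 1.005))
```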

    But the outlook for pyrite is not so shiny anymore. In a pair of companion papers published Nov. 24 in the journal Science, Fike and his collaborators show that variations in pyrite sulfur isotopes may not represent the global processes that have made them such popular targets of analysis.

    Instead, Fike’s research demonstrates that pyrite responds predominantly to local processes that should not be taken as representative of the whole ocean. A new microanalysis approach developed at Washington University helped the researchers separate out signals in pyrite that reveal the relative influence of microbes and that of local climate.

    For the first study, Fike worked with Roger Bryant, who completed his graduate studies at Washington University, to examine the grain-level distribution of pyrite sulfur isotope compositions in a sample of recent glacial-interglacial sediments. They developed and used a cutting-edge analytical technique with the secondary-ion mass spectrometer (SIMS) in Fike’s laboratory.

    “We analyzed every individual pyrite crystal that we could find and got isotopic values for each one,” Fike said. By considering the distribution of results from individual grains, rather than the average (or bulk) results, the scientists showed that it is possible to tease apart the role of the physical properties of the depositional environment, like the sedimentation rate and the porosity of the sediments, from the microbial activity in the seabed.

    “We found that even when bulk pyrite sulfur isotopes changed a lot between glacials and interglacials, the minima of our single grain pyrite distributions remained broadly constant,” Bryant said. “This told us that microbial activity did not drive the changes in bulk pyrite sulfur isotopes and refuted one of our major hypotheses.”
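    The logic of comparing single-grain distributions with bulk averages can be illustrated with a toy example: if per-grain values are a mixture of two populations and only the mixing proportion changes between climate states, the bulk mean shifts while the low end of the distribution stays roughly fixed. All numbers below are invented for illustration, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def grains(n_low, n_high):
    # Two hypothetical grain populations (per-mil values are invented):
    # "low" grains centred near -40 and "high" grains centred near -10.
    return np.concatenate([rng.normal(-40, 2, n_low),
                           rng.normal(-10, 2, n_high)])

glacial      = grains(80, 20)   # mostly low-delta grains
interglacial = grains(30, 70)   # only the mixing proportion changes

# Bulk (mean) values shift by roughly 15 per mil between the states...
print(glacial.mean(), interglacial.mean())
# ...but the minima of the single-grain distributions barely move:
print(glacial.min(), interglacial.min())
```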

    “Using this framework, we’re able to go in and look at the separate roles of microbes and sediments in driving the signals,” Fike said. “That to me represents a huge step forward in being able to interpret what is recorded in these signals.”

    In the second paper, led by Itay Halevy of the Weizmann Institute of Science and co-authored by Fike and Bryant, the scientists developed and explored a computer model of marine sediments, complete with mathematical representations of the microorganisms that degrade organic matter and turn sulfate into sulfide and the processes that trap that sulfide in pyrite.

    “We found that variations in the isotopic composition of pyrite are mostly a function of the depositional environment in which the pyrite formed,” Halevy said. The new model shows that a range of parameters of the sedimentary environment affect the balance between sulfate and sulfide consumption and resupply, and that this balance is the major determinant of the sulfur isotope composition of pyrite.

    “The rate of sediment deposition on the seafloor, the proportion of organic matter in that sediment, the proportion of reactive iron particles, the density of packing of the sediment as it settles to the seafloor — all of these properties affect the isotopic composition of pyrite in ways that we can now understand,” he said.

    Importantly, none of these properties of the sedimentary environment are strongly linked to the global sulfur cycle, to the oxidation state of the global ocean, or essentially any other property that researchers have traditionally used pyrite sulfur isotopes to reconstruct, the scientists said.

    “The really exciting aspect of this new work is that it gives us a predictive model for how we think other pyrite records should behave,” Fike said. “For example, if we can interpret other records — and better understand that they are driven by things like local changes in sedimentation, rather than global parameters about ocean oxygen state or microbial activity — then we can try to use this data to refine our understanding of sea level change in the past.”

    Washington University in St. Louis

  • Telescope Array detects second highest-energy cosmic ray ever

    Newswise — In 1991, the University of Utah Fly’s Eye experiment detected the highest-energy cosmic ray ever observed. Later dubbed the Oh-My-God particle, the cosmic ray’s energy shocked astrophysicists. Nothing in our galaxy had the power to produce it, and the particle had more energy than was theoretically possible for cosmic rays traveling to Earth from other galaxies. Simply put, the particle should not exist.

    The Telescope Array has since observed more than 30 ultra-high-energy cosmic rays, though none approaching the Oh-My-God-level energy. No observations have yet revealed their origin or how they are able to travel to the Earth.

    On May 27, 2021, the Telescope Array experiment detected the second-highest extreme-energy cosmic ray. At 2.4 x 10^20 eV, the energy of this single subatomic particle is equivalent to dropping a brick on your toe from waist height. Led by the University of Utah (the U) and the University of Tokyo, the Telescope Array consists of 507 surface detector stations arranged in a square grid that covers 700 km^2 (~270 mi^2) outside of Delta, Utah, in the state’s West Desert. The event triggered 23 detectors in the northwest region of the Telescope Array, splashing across 48 km^2 (18.5 mi^2). Its arrival direction appeared to be from the Local Void, an empty area of space bordering the Milky Way galaxy.

    “The particles are so high energy, they shouldn’t be affected by galactic and extra-galactic magnetic fields. You should be able to point to where they come from in the sky,” said John Matthews, Telescope Array co-spokesperson at the U and co-author of the study. “But in the case of the Oh-My-God particle and this new particle, you trace its trajectory to its source and there’s nothing high energy enough to have produced it. That’s the mystery of this—what the heck is going on?” 

    In their observation, published on Nov. 24, 2023, in the journal Science, an international collaboration of researchers describes the ultra-high-energy cosmic ray, evaluates its characteristics, and concludes that the rare phenomenon might follow particle physics unknown to science. The researchers named it the Amaterasu particle after the sun goddess in Japanese mythology. The Oh-My-God and the Amaterasu particles were detected using different observation techniques, confirming that while rare, these ultra-high-energy events are real.

    “These events seem like they’re coming from completely different places in the sky. It’s not like there’s one mysterious source,” said John Belz, professor at the U and co-author of the study. “It could be defects in the structure of spacetime, colliding cosmic strings. I mean, I’m just spit-balling crazy ideas that people are coming up with because there’s not a conventional explanation.”

    Natural particle accelerators

    Cosmic rays are echoes of violent celestial events that have stripped matter to its subatomic structures and hurled it through the universe at nearly the speed of light. Essentially, cosmic rays are charged particles with a wide range of energies, consisting of positive protons, negative electrons, or entire atomic nuclei, that travel through space and rain down onto Earth nearly constantly.

    When a cosmic ray hits Earth’s upper atmosphere, it blasts apart the nuclei of oxygen and nitrogen atoms, generating many secondary particles. These travel a short distance in the atmosphere and repeat the process, building a shower of billions of secondary particles that scatter to the surface. The footprint of this secondary shower is massive and requires detectors that cover an area as large as the Telescope Array. The surface detectors utilize a suite of instrumentation that gives researchers information about each cosmic ray; the timing of the signal shows its trajectory, and the number of charged particles hitting each detector reveals the primary particle’s energy.

    Because the particles have a charge, their flight path resembles a ball in a pinball machine as they zigzag through electromagnetic fields and the cosmic microwave background. It’s nearly impossible to trace the trajectory of most cosmic rays, which lie on the low to middle end of the energy spectrum. Even high-energy cosmic rays are distorted by the microwave background. Particles with Oh-My-God and Amaterasu energy blast through intergalactic space relatively unbent. Only the most powerful of celestial events can produce them.

    “Things that people think of as energetic, like supernova, are nowhere near energetic enough for this. You need huge amounts of energy, really high magnetic fields to confine the particle while it gets accelerated,” said Matthews.

    Ultra-high-energy cosmic rays must exceed 5 x 10^19 eV. This means that a single subatomic particle carries the same kinetic energy as a major league pitcher’s fastball and has tens of millions of times more energy than any human-made particle accelerator can achieve. Astrophysicists calculated this theoretical limit, known as the Greisen–Zatsepin–Kuzmin (GZK) cutoff, as the maximum energy a proton can hold while traveling over long distances before interactions with the microwave background radiation sap its energy. Known source candidates, such as active galactic nuclei or black holes with accretion disks emitting particle jets, tend to be more than 160 million light years away from Earth. The new particle’s 2.4 x 10^20 eV and the Oh-My-God particle’s 3.2 x 10^20 eV easily surpass the cutoff.
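    For a feel for these energies, the electron-volt figures can be converted to joules with the standard conversion factor; the brick mass and drop height below are assumptions chosen only to echo the article’s comparison.

```python
EV_TO_J = 1.602176634e-19  # joules per electron volt (exact SI value)

amaterasu  = 2.4e20 * EV_TO_J  # Amaterasu particle, ~38 J
oh_my_god  = 3.2e20 * EV_TO_J  # Oh-My-God particle, ~51 J
gzk_cutoff = 5e19  * EV_TO_J   # GZK cutoff, ~8 J

# Compare with a brick dropped on your toe from waist height,
# using E = m * g * h (brick mass and height are assumed values):
brick = 3.0 * 9.81 * 1.2       # ~35 J

print(round(amaterasu, 1), round(brick, 1))
```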

    Researchers also analyze a cosmic ray’s composition for clues to its origin. A heavier particle, such as an iron nucleus, carries more charge and is more susceptible to bending in a magnetic field than a lighter particle, such as a proton from a hydrogen atom. The new particle is likely a proton. Particle physics dictates that a cosmic ray with energy beyond the GZK cutoff is too powerful for the microwave background to distort its path, yet back-tracing its trajectory points toward empty space.

    “Maybe magnetic fields are stronger than we thought, but that disagrees with other observations that show they’re not strong enough to produce significant curvature at these ten-to-the-twentieth electron volt energies,” said Belz. “It’s a real mystery.” 

    Expanding the footprint 

    The Telescope Array is uniquely positioned to detect ultra-high-energy cosmic rays. It sits at about 1,200 m (4,000 ft), the elevation sweet spot that allows secondary particles to develop fully before they start to decay. Its location in Utah’s West Desert provides ideal atmospheric conditions in two ways: the dry air is crucial, because humidity absorbs the ultraviolet light necessary for detection; and the region’s dark skies are essential, as light pollution would create too much noise and obscure the cosmic rays.

    Astrophysicists are still baffled by the mysterious phenomenon. The Telescope Array is in the middle of an expansion that researchers hope will help crack the case. Once completed, 500 new scintillator detectors will expand the Telescope Array to sample cosmic-ray-induced particle showers across 2,900 km^2 (1,100 mi^2), an area nearly the size of Rhode Island. The larger footprint will hopefully capture more events that shed light on what’s going on.

    University of Utah

  • When baby stars fledge

    Newswise — A team of astrophysicists led by Núria Miret-Roig from the University of Vienna found that two methods for determining the age of stars measure different things: isochronous measurement determines the birth date of stars, while dynamical tracking records when stars “leave their nest” – about 5.5 million years later in the star clusters studied. The study, which makes it possible to probe the earliest stages of a star’s life, was published in the scientific journal Nature Astronomy.

    The age of stars is a fundamental parameter in astrophysics, but it is still relatively difficult to measure. The best approximations to date have been for so-called star clusters, i.e. groups of stars of the same age with a common origin. The ages of six relatively close and young star clusters have now been analysed as part of a study at the Institute of Astrophysics at the University of Vienna. It was found that two of the most reliable methods for determining the age of stars – isochronous measurement and dynamical tracing – gave systematically and consistently different results: each cluster was around 5.5 million years younger according to dynamical tracing than according to isochronous measurement.

    When the clock starts ticking

    “This indicates that the two measurement methods measure different things,” explains astrophysicist Núria Miret-Roig from the University of Vienna, first author of the study. According to the new study, the isochronous “clock” starts ticking from the time of star formation, but the “clock” of dynamical backtracking only starts ticking when a star cluster begins to expand after leaving its parent cloud. “This finding has significant implications for our understanding of star formation and stellar evolution, including planet formation and the formation of galaxies, and opens up a new perspective on the chronology of star formation. For example, the length of the so-called ‘embedded phase’, during which baby stars remain within the parental gas cloud, can be estimated,” explains João Alves, co-author and professor at the University of Vienna.

    Measuring how long baby stars stay in the nest

    “This age difference between the two methods represents a new and much-needed tool to quantify the earliest stages in a star’s life,” says Alves. “Specifically, we can use it to measure how long the baby stars take before they leave their nest.” The measurements were made possible by the high-resolution data from the Gaia space mission in conjunction with ground-based radial velocities (e.g. from the APOGEE catalogue). “This combination allows us to trace the positions of stars back to their birthplace with the accuracy of 3D velocities,” explains Miret-Roig. New and upcoming spectroscopic surveys such as WEAVE, 4MOST and SDSS-V will make this investigation possible for the entire solar neighbourhood.
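    The idea behind dynamical traceback can be sketched with synthetic data: project cluster members’ present-day positions backwards along their 3D velocities and find the time at which the cluster was most compact. Every number below (star count, scales, the 20 Myr expansion age) is invented for illustration and has nothing to do with the clusters in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cluster: stars started in a compact clump 20 Myr ago
# and have drifted apart with random velocities ever since.
true_expansion_age = 20.0                   # Myr (invented)
birth_pos = rng.normal(0.0, 1.0, (100, 3))  # pc
vel = rng.normal(0.0, 0.5, (100, 3))        # pc/Myr
pos_now = birth_pos + vel * true_expansion_age

def cluster_size(t):
    """RMS spread of the stars traced back by t Myr along their velocities."""
    traced = pos_now - vel * t
    return np.sqrt(((traced - traced.mean(axis=0)) ** 2).sum(axis=1).mean())

# Dynamical traceback: scan for the time of minimum spread.
times = np.linspace(0, 40, 401)
t_dyn = times[np.argmin([cluster_size(t) for t in times])]
print(t_dyn)  # close to the injected 20 Myr expansion age
```

    In the real study this traceback dates the cluster’s expansion, which is why it yields a younger age than isochrones, which date the stars themselves.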

    Puzzling difference

    “Astronomers have been using isochronous ages for as long as we have known how stars work, but these ages depend on the particular stellar model we use,” says Miret-Roig. “The high-quality data from the Gaia satellite has now allowed us to measure ages dynamically, independently of the stellar models, and we were excited to synchronise the two clocks.” During the calculations, however, a consistent and puzzling difference between the two age determination methods emerged. “And eventually we reached a point where we could no longer blame the discrepancy on observational errors – that’s when we realised that the two clocks were most likely measuring two different things,” says the astrophysicist.

    For the study, the research team analysed six nearby and young star clusters (up to 490 light years away and 50 million years old). The time scale of the embedded phase was found to be around 5.5 million years (plus/minus 1.1 million years) and could depend on the mass of the star cluster and the amount of stellar feedback.

    Applying this new technique to other young and nearby star clusters promises new insights into the star formation process and the drifting apart of stars, Miret-Roig hopes: “Our work paves the way for future research into star formation and provides a clearer picture of how stars and star clusters evolve. This is an important step in our endeavour to understand the formation of the Milky Way and other galaxies.”

    This publication has been co-funded by the European Union (ERC, ISM-FLOW, 101055318, PI: J. Alves). However, the views and opinions expressed are solely those of the author(s) and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

    University of Vienna

  • Environment-friendly electrochemical refrigerant compressor contributing to the achievement of carbon neutrality realizes sustainable building of the future with new energy technology

    Newswise — In line with the Korean government’s recent efforts to achieve the goal of “going carbon neutral by 2050,” the energy transition from fossil fuels to new and renewable sources of energy has been gaining speed. In this context, the joint research team led by Principal Researcher Young Kim of the Korea Institute of Machinery and Materials (KIMM), an institute under the jurisdiction of the Ministry of Science and ICT, and professors Min-sung Kim and Dong-kyu Kim of Chung-Ang University has successfully developed an environment-friendly refrigerant compressor using an electrochemical method instead of a mechanical method.

    In contrast to conventional refrigerants containing HFCs (hydrofluorocarbons), which destroy the ozone layer and cause global warming, environment-friendly refrigerants (ammonia, R1234yf, etc.) have very small environmental impacts. In accordance with the Kigali Amendment to the Montreal Protocol, advanced nations in Europe as well as the United States and Japan are transitioning to eco-friendly refrigerants ahead of the complete phase-out of HFCs in 2024. Using environment-friendly refrigerants can help to prevent environmental pollution and contribute to sustainable development.

    Meanwhile, mechanical compressors have several limitations such as problems with durability of parts due to rapid rotation, contamination of refrigerants caused by lubricants, and loud noise. As electrochemical compressors have no moving parts and do not require the use of lubricants, they can help to overcome the shortcomings of conventional mechanical compressors. Additionally, a constant flow rate can be provided at various pressure ratios with electrochemical compressors, and the high efficiency thereof can help to significantly increase the COP (coefficient of performance) of the heat pump.

    To maximize the energy-saving effect of the rooftop greenhouse, the research team developed an “optimized smart farm operating solution” capable of controlling every aspect of the system, such as air conditioning, LED lighting, and hydroponics, in accordance with external weather conditions, and plans to demonstrate this solution in the rooftop greenhouse.

    The joint research team has secured the core technologies necessary for producing the environment-friendly electrochemical refrigerant compressor and for designing the surrounding system, and has successfully completed a test run. With the newly developed compressor, the desired flow rate and pressure can be obtained by stacking cells.

    Unlike a conventional mechanical compressor, an electrochemical compressor compresses refrigerants through the movement of ions, by charging the ion exchange membrane with a DC (direct current) voltage while using hydrogen as the carrier gas. It also allows for isothermal compression by applying a multi-layer stacking technology in which cells are accumulated in a stack configuration. To date, the refrigerants that have been successfully compressed are ammonia, a natural refrigerant, and R1234yf, an eco-friendly refrigerant. The joint research team designed cells capable of operating reliably even under repeated high-pressure conditions and also demonstrated a leak-free design to prevent the escape of refrigerants at high pressure. Moreover, by designing a channel capable of producing high performance even at high voltage, the team has succeeded in maximizing the compression efficiency of the electrochemical compressor.
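    The thermodynamics of this kind of compression can be sketched with the textbook Nernst relation for hydrogen, the carrier gas: the ideal cell voltage needed to pump H2 from pressure p1 to p2 is E = (RT / 2F) ln(p2/p1). The numbers below are a generic estimate under that relation, not figures from the KIMM device.

```python
import math

R = 8.314      # J/(mol K), gas constant
F = 96485.0    # C/mol, Faraday constant
T = 298.15     # K, room temperature (assumed operating point)

def nernst_voltage(p1: float, p2: float) -> float:
    """Ideal voltage to electrochemically compress H2 from p1 to p2.

    Two electrons are transferred per H2 molecule, hence the factor 2F.
    """
    return (R * T) / (2 * F) * math.log(p2 / p1)

# Compressing hydrogen from 1 bar to 100 bar ideally needs only ~59 mV
# per cell, which is one reason cells are stacked to reach the desired
# flow rate and pressure.
print(nernst_voltage(1.0, 100.0))
```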

    The electrochemical compressor can deliver the desired compression ratio regardless of its size, provides a stable flow rate at a given compression ratio, and has excellent efficiency. It can therefore be used not only for constructing high-efficiency plants and heat pumps but also for building small-scale systems. In particular, because an electrochemical ammonia compressor can compress ammonia even when the ammonia acts as a hydrogen carrier, it can also be used for constructing hydrogen infrastructure.

    Dr. Young Kim of the KIMM’s Department of Thermal Energy Solutions was quoted as saying, “The eco-friendly electrochemical refrigerant compressor is highly efficient and requires a small footprint, which makes it economically attractive.” Dr. Kim added, “We are planning to develop a heat pump system using this technology to contribute to the achievement of the goal of going carbon neutral by 2050.”

    Meanwhile, this research was conducted with the support of the project for the “development of a chemical absorption-type heat pump using an electrochemical compressor” led by the Korea Institute of Energy Technology Evaluation and Planning of the Ministry of Trade, Industry and Energy.

    The Korea Institute of Machinery and Materials (KIMM) is a non-profit, government-funded research institute under the Ministry of Science and ICT. Since its foundation in 1976, KIMM has contributed to the economic growth of the nation by performing R&D on key technologies in machinery and materials, conducting reliability test evaluations, and commercializing the developed products and technologies.

    National Research Council of Science and Technology

  • Research on adoptees’ parenthood experiences.

    Newswise — Parenting is always challenging, but for adopted people becoming a mum or dad can be extra demanding, as well as extra special – according to research from the University of East Anglia.

    A new study is the first to investigate the lived experiences of adopted people in the UK as they become parents.

    It finds that they are affected by issues that link back to their adoption and to difficult experiences in their past – related to loss, rejection, abuse and neglect.

    Because of these difficult early experiences, many adoptees experience significant challenges, particularly as teenagers and young adults.

    These include mental health problems, emotional and behavioural difficulties, struggles with education and employment, relationship problems, and substance misuse.

    But while many people were parenting under the pressure of also trying to manage these challenges, becoming a mum or dad was often a key turning point and a motivation to turn their lives around.

    Lead researcher Prof Beth Neil, from UEA’s School of Social Work, said: “Adoption is a life-changing event, and it is really important to understand how people are affected throughout their whole life – not just in childhood.

    “Becoming a parent is a key life experience, but the research on adopted people becoming parents is very limited and has not tended to include people adopted through the child protection system, or the experiences of adopted men as fathers.

    “We wanted to better understand the issues faced by people who are adopted, as they become parents themselves.”

    The team worked with 20 adopted men and 20 adopted women, who were interviewed about their experiences.

    Most of the participants were in their 20s and 30s and all had been adopted under the age of 12 – with two thirds having been adopted through the child protection system.

    Almost a quarter of the parents in the study were not living with their children – including some who had themselves lost their children to care or adoption.

    Prof Neil said: “We guided them to break down their life into key chapters and talk through the high points, the low points and the turning points that were most significant to them. We wanted to understand adopted people’s life stories in their own words.

    “What we found is that when adopted people become parents, lots of issues can come up that link back to their adoption and to difficult experiences in their past such as issues of loss, rejection, abuse and neglect.

    “For some, having their first child meant meeting the first person in their life that they had a biological connection to. Others were afraid they would not bond with their child or that their child would reject them.

    “Because many of the participants had a history of abuse and neglect, thinking about their birth parents often raised anxieties that they would parent their own child poorly.

    “The flipside of this was the determination to try and break cycles of abuse, and we saw that for many, becoming a parent was a positive turning point.

    “Because of the often-difficult backgrounds of the parents, many reported problems in their teenage years and as young adults with mental health, education and employment, substance misuse, and relationships with parents and partners.

    “Often these problems were ongoing when they became a mum or dad, threatening their parenting and playing into their biggest fear – that they might repeat negative cycles of neglect or abuse with their own children.

    “Sadly, many adoptees feared that asking for help and expressing worries would lead to scrutiny of their parenting.

    “Most people were managing well in their role as mum and dad, but a minority were still struggling with difficult problems, and a small number of parents had experienced their worst fear – the removal of their own children. For parents who were judged unable to look after their own children, not ‘breaking the cycle’ was devastating.”

    The team say that support for adopted adults with mental health problems is a particularly pressing need, as parental mental health problems are a strong mediating factor in the link between childhood adversity and compromised parenting.

    Where adoptees are still struggling with these issues when they become a parent, then support is needed at that life stage.

    But ideally, the adoption system needs to recognise the need to provide support to adoptive families much earlier on, to prevent the difficulties that often become particularly challenging during the teenage years.

    The study found that identity issues raised by both men and women were very similar. This is important because almost all previous research had focused just on mothers. But fathers also felt deeply about the impact of adoption on their life, and issues linked to adoption came up for them when they became dads.

    “This research highlights the need for more support for adopted people both in childhood and when they become parents themselves,” added Prof Neil.

    This study was funded by the Economic and Social Research Council (ESRC).

    ‘How do adopted adults see the significance of adoption and being a parent in their life stories? A narrative analysis of 40 life story interviews with male and female adoptees’ is published in the journal Children and Youth Services Review.

    [ad_2]

    University of East Anglia

    Source link

  • Who benefits from taking statins?

    Who benefits from taking statins?

    [ad_1]

    Newswise — ROCHESTER, Minnesota—If you are at risk of heart disease, your health care team may use the pooled cohort equation tool to determine your long-term risk and whether taking statins (cholesterol-lowering medications) is a good option.
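    The pooled cohort equation mentioned above is, at its core, a proportional-hazards-style score: log-transformed risk factors are weighted and summed, then converted into a 10-year risk using a baseline survival term. Below is a minimal Python sketch of that structure only; the coefficients, risk-factor profile, and baseline survival are made up for illustration and are not the published sex- and race-specific pooled cohort values.

    ```python
    import math

    def ten_year_risk(factors, coefs, mean_score, baseline_survival):
        """Cox-style 10-year risk: 1 - S0 ** exp(score - mean_score).

        `factors` and `coefs` map risk-factor names to (log-transformed)
        values and weights. All numbers used here are illustrative, not
        the published pooled cohort equation coefficients.
        """
        score = sum(coefs[k] * v for k, v in factors.items())
        return 1.0 - baseline_survival ** math.exp(score - mean_score)

    # Hypothetical weights and profile, chosen only to show the mechanics.
    coefs = {"ln_age": 2.0, "ln_total_chol": 1.0, "ln_hdl": -0.9,
             "ln_sbp": 1.8, "smoker": 0.7}
    factors = {"ln_age": math.log(55), "ln_total_chol": math.log(213),
               "ln_hdl": math.log(50), "ln_sbp": math.log(120), "smoker": 0.0}

    # Centring on this very profile makes exp(score - mean_score) = 1,
    # so the risk reduces to 1 - baseline_survival.
    mean_score = sum(coefs[k] * v for k, v in factors.items())
    risk = ten_year_risk(factors, coefs, mean_score, baseline_survival=0.95)
    print(round(risk, 3))  # 0.05
    ```

    The real equations differ in their coefficient tables and baseline survival values, but the shape of the calculation, a weighted sum of log-transformed factors exponentiated against a baseline, is the same.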

    Dr. Francisco Lopez-Jimenez, a cardiologist at Mayo Clinic in Rochester, Minnesota, says it is important to know who benefits most from taking statins.

    Statins are medications that reduce the amount of cholesterol the liver produces.

    “Cholesterol forms the plaques that build up and grow inside the arteries, sometimes to the point that those arteries become blocked,” explains Dr. Lopez-Jimenez.

    And that blockage can lead to heart disease. But can statins be given to everyone?

    “The patients who will benefit most from taking statins are people with a history of heart attacks, strokes and other conditions known to be caused by cholesterol plaques,” he says.

    Diet also plays an important role. Dr. Lopez-Jimenez recommends eating less processed meat and more grains, fruits and vegetables.

    “The highest-impact changes people can make to lower cholesterol include eating fewer animal products other than fish and consuming less saturated fat,” he says.

    What if your health care team recommends medication in addition to lifestyle changes?

    “Take the prescribed medications, check your numbers, and make sure all of those factors are well controlled,” says Dr. Lopez-Jimenez.

    ###

    About Mayo Clinic

    Mayo Clinic is a nonprofit organization committed to innovating clinical practice, education and research, and to providing expertise, compassion and answers to everyone who needs healing. Visit the Mayo Clinic News Network for more Mayo Clinic news.

    [ad_2]

    Mayo Clinic

    Source link

  • Who benefits from taking cholesterol-lowering statins?

    Who benefits from taking cholesterol-lowering statins?

    [ad_1]

    Newswise — ROCHESTER, Minnesota—If you are at risk of heart disease, your health care team may use the pooled cohort equation (PCE) tool to determine your long-term risk and whether taking statins, cholesterol-lowering medications, is a good option for you.

    Dr. Francisco Lopez-Jimenez, a cardiologist at Mayo Clinic in Rochester, Minnesota, says it is important to understand who benefits most from taking statins.

    Statins are medications that lower the amount of cholesterol made by the liver.

    “Cholesterol accumulates in the plaques that collect and grow inside the arteries, sometimes to the point that those arteries become blocked,” says Dr. Lopez-Jimenez.

    And blocked arteries can lead to coronary heart disease. But is taking statins right for everyone?

    “The patients who benefit most from taking statins are people with a history of heart attacks, strokes and other conditions known to arise from cholesterol plaques,” says Dr. Lopez-Jimenez.

    Diet also plays an important role. Dr. Lopez-Jimenez advises eating less processed meat and more grains, fruits and vegetables.

    He adds: “The most impactful changes people can make to reduce cholesterol include eating fewer animal products other than fish and consuming less saturated fat.”

    What if your health care team recommends medication alongside lifestyle changes?

    “Take the medications, check your cholesterol levels, and make sure all of those factors are under control,” says Dr. Lopez-Jimenez.

    ###

    About Mayo Clinic

    Mayo Clinic is a nonprofit organization committed to innovation in clinical practice, education and research, and to providing compassion, expertise and answers to everyone who needs healing. For more Mayo Clinic news, visit the Mayo Clinic News Network.

    [ad_2]

    Mayo Clinic

    Source link

  • Who benefits from taking statins?

    Who benefits from taking statins?

    [ad_1]

    Newswise — ROCHESTER, Minnesota—If you are at risk of heart disease, your care team may use the pooled cohort equation (PCE) tool to determine your long-term risk and whether taking statins (cholesterol-lowering medications) is a good option.

    Dr. Francisco Lopez-Jimenez, a cardiologist at Mayo Clinic in Rochester, Minnesota, explains that it is important to understand who benefits most from taking statins.

    Statins are medications that reduce the amount of cholesterol produced by the liver.

    “Cholesterol forms the plaques that build up and grow inside the arteries. Sometimes the buildup reaches the point where the arteries become blocked,” explains Dr. Lopez-Jimenez.

    And blocked arteries can lead to heart disease. But can statins be used by everyone?

    “The patients who will benefit most from taking statins are those with a history of heart attacks, strokes and other conditions known to result from cholesterol plaques,” he explains.

    Diet also plays an important role. Dr. Lopez-Jimenez recommends consuming less processed meat and more grains, fruits and vegetables.

    “The most impactful changes people can make to reduce cholesterol include consuming fewer animal products, except fish, and less saturated fat,” explains Dr. Lopez-Jimenez.

    And what if the care team recommends medication in addition to lifestyle changes?

    “Take the medications, check your numbers, and make sure all the factors are under control,” he explains.

    ###

    About Mayo Clinic

    Mayo Clinic is a nonprofit organization committed to innovation in clinical practice, education and research, providing compassion, expertise and answers to everyone who needs healing. Visit the Mayo Clinic News Network for more Mayo Clinic news.

    [ad_2]

    Mayo Clinic

    Source link

  • Bristol researchers set to join leading experts at COP28 as world ‘stands on edge of burning bridge’ to tackle climate change

    Bristol researchers set to join leading experts at COP28 as world ‘stands on edge of burning bridge’ to tackle climate change

    [ad_1]

    Newswise — A team of University of Bristol experts are poised to join the 2023 United Nations Climate Change Conference, which will hold the world to account in addressing humanity’s most urgent and ambitious challenge.

    The annual two-week summit, starting in the United Arab Emirates on Thursday, 30 November, is set to deliver the first-ever global stocktake of progress in achieving key international climate targets to reduce carbon emissions and limit global warming.

    Dr Matt Palmer, Associate Professor of Climate Science, is among a group of academics from the University of Bristol’s renowned Cabot Institute for the Environment, who will be attending to share their expertise and insights.

    “The world community stands on the edge of a burning bridge: we must act faster to reduce emissions if we are to avoid devastating impacts of climate change on humans, the environment, and vital ecosystems,” Dr Palmer said.

    “2023 is set to be the warmest year on record and saw a catalogue of unprecedented and damaging extreme climate events across the globe. Current emissions reduction pledges by nations fall well short of the 1.5C Paris Agreement warming target. Immediate concerted action is imperative to lessen future climate risks and this meeting is a crucial opportunity for the global community to review progress, recognise shortcomings, and commit to stepping up mitigation actions.”

    Dr Palmer has been a lead author on the UN’s Intergovernmental Panel on Climate Change (IPCC) report, covering sea-level rise and ocean warming, and he will be presenting an event focused on the latest observations on climate change.

    Experts in a wide range of hot topics, including climate change policy, emissions, climate modelling, adaptation to a warming world, food systems, and ensuring the shift to a net zero economy is fair, are joining the gathering.

    The conference will help harness joint global efforts on climate action and identify the changes needed to close the gaps keeping the world off track to meet agreed goals.

    Delivering climate resilient, net zero food systems is a major global challenge which will come under discussion.

    Dr Pete Falloon, Associate Professor in Climate Resilient Food Systems, is attending in this capacity and leading an event in the UK Pavilion that brings together partners and young farmers from the Global North and South, among others.

    He said: “Droughts, flooding, high temperatures and rising sea levels are increasingly threatening the security and resilience of our food systems worldwide. Food systems are also a key part of the pathway to net zero, given they are responsible for around a third of global emissions. We critically need to transform our food systems so they are well adapted to climate change but also deliver on net zero goals.

    “My hope is that by bringing scientists, young farmers and policy makers together, we will use climate science and services as a platform to accelerate food system change, innovation and practice to reduce hunger and ensure a more sustainable future.”

    Dr Katharina Richter, a specialist in decolonial environmental politics and equitable development, hopes negotiations will consolidate previous multilateral plans to help emerging economy countries have swift access to financing to mitigate and adapt to the climate crisis.

    “This year, extreme weather events in Africa, including drought and flooding, are thought to have been exacerbated by climate change and, tragically, have killed more than 15,000 people already. To prevent further loss of life, it’s absolutely critical developing countries can access climate finance quickly and unconditionally,” Dr Richter said.

    “I will therefore be watching closely to see how G77 and Alliance of Small Island States proposals are met by the international community, especially details on operationalising last year’s negotiation highlight: the Loss and Damage Fund.”

    Technology and the transition to a green economy are further important areas to be negotiated.

    “Rich and oil-producing countries must honour their emission-related responsibilities and commit to phasing out fossil fuels entirely. Clean energy technology will be key to replacing fossil fuels. Without commitments to demand-side reductions by rich nations, however, a business-as-usual energy transition will continue to create sacrifice zones in indigenous, biodiverse, and/or water scarce territories of the Global South,” Dr Richter added.

    “I will therefore also be looking out for how green technology supply chains are addressed in the negotiations, including outcomes for developing countries where critical raw materials are extracted.”

    Climate justice specialist Dr Alix Dietzel, who also attended last year, leads work to help ensure the journey towards net zero is fully inclusive and equitable.

    Dr Dietzel said: “I’ll be interested to see who is able to attend and who will have their voices heard at the negotiations, and whether this represents fair and equal decision making. Substantial commitments to mitigation targets, adaptation planning, and loss and damage funding are vital requirements of a just response to climate change.

    “I hope the global community rises to such pressing challenges and that pledges are fair to all those most affected by climate change, who may be under-represented.”

    Incorporating the voice of Indigenous groups will play a pivotal role in realising such aspirations.

    Dr Karen Tucker, an expert in the politics of Indigenous knowledge, added: “Indigenous peoples are some of the most knowledgeable actors in global climate politics. But this doesn’t mean their expertise or rights are always recognised in international negotiations.

    “I’ll therefore be paying attention to the ways in which Indigenous peoples and Indigenous knowledges are incorporated into negotiations, particularly relating to land use and nature.”

    Raising the ambition of climate policies by integrating cities into national climate policies could help deliver and step up progress towards demanding targets.

    Energy and climate policy specialist Dr Colin Nolden is hosting an official event, which highlights the latest research developments and cross-sectoral policy recommendations for ramping up climate action at the urban level. It has a specific focus on using Article 6 of the Paris Agreement to generate investment, especially in the context of climate clubs and alliances.

    Dr Nolden said: “Article 6 of the Paris Agreement provides a mechanism not just for trading carbon credits but also for generating investment and lowering the cost of capital, ranging from district heating systems in the global north to clean cooking projects in the global south.”

    “Climate clubs and alliances, meanwhile, can increase emission mitigation ambition among participating countries if they include cross-border investment and trading arrangements for carbon emission reductions generated using Article 6.

    “If appropriate Article 6 market governance arrangements are agreed on at COP28, climate clubs and alliances, ideally spanning the Global North and South, have great potential to help implement effective and just net zero policies. I will be providing insights and pitching an idea on how to make this happen.”

    University of Bristol student Katie Riley, who is in the final year of her degree in politics and international relations, will be joining as an observer.

    The 21-year-old has been an environmental lobbyist for several years and recently published a book about experiences of youth in climate activism. At COP27 Katie was a UK communications delegate for the Future Leaders Network and this year she is on Generation Climate’s COP28 strategy delegation.

    “I mainly started because I saw a space for change and loved engaging within my community. But international politics is exciting, especially within COP, so I’m pleased to be developing my involvement more widely,” Katie said.

    “I also think it’s necessary for as many young people as possible to have a platform within big conferences like this, as our generation will be most affected by the climate crisis.”

    The University has been working closely with Mayor Marvin Rees and Bristol City Council to help the city achieve a just transition towards a more sustainable economy. This includes a shared commitment to deliver the UN’s Sustainable Development Goals (SDGs), which aim to deliver better health, education, economic growth, and equality while also tackling climate change.

    Notes to editors

    Here’s a full overview of experts from the University of Bristol Cabot Institute for the Environment who can help with media requests in the run up to and during COP28: https://environment.blogs.bristol.ac.uk/2023/10/30/are-you-a-journalist-looking-for-climate-experts-for-cop28-weve-got-you-covered/

    For more information and to request an expert comment and/or media interview, please contact Victoria Tagg, University of Bristol Media & PR Manager (Research): [email protected]

    [ad_2]

    University of Bristol

    Source link

  • Taste guides our eating pace from the first bite

    Taste guides our eating pace from the first bite

    [ad_1]

    Newswise — When you eagerly dig into a long-awaited dinner, signals from your stomach to your brain keep you from eating so much you’ll regret it – or so it’s been thought. That theory had never really been directly tested until a team of scientists at UC San Francisco recently took up the question.  
     
    The picture, it turns out, is a little different. 
     
    The team, led by Zachary Knight, PhD, a UCSF professor of physiology in the Kavli Institute for Fundamental Neuroscience, discovered that it’s our sense of taste that pulls us back from the brink of food inhalation on a hungry day. Stimulated by the perception of flavor, a set of neurons – a type of brain cell – leaps to attention almost immediately to curtail our food intake.  
     
    “We’ve uncovered a logic the brainstem uses to control how fast and how much we eat, using two different kinds of signals, one coming from the mouth, and one coming much later from the gut,” said Knight, who is also an investigator with the Howard Hughes Medical Institute and a member of the UCSF Weill Institute for Neurosciences. “This discovery gives us a new framework to understand how we control our eating.” 
     
    The study, which appears Nov. 22, 2023 in Nature, could help reveal exactly how weight-loss drugs like Ozempic work, and how to make them more effective. 
     
    New views into the brainstem 
     
    Pavlov proposed over a century ago that the sight, smell and taste of food are important for regulating digestion. Studies in the 1970s and 1980s also suggested that the taste of food may restrain how fast we eat, but it’s been impossible to study the relevant brain activity during eating because the brain cells that control this process are located deep in the brainstem, making them hard to access or record in an animal that’s awake. 
     
    Over the years, the idea had been forgotten, Knight said.  
     
    New techniques developed by lead author Truong Ly, PhD, a graduate student in Knight’s lab, allowed for the first-ever imaging and recording of a brainstem structure critical for feeling full, called the nucleus of the solitary tract, or NTS, in an awake, active mouse. He used those techniques to look at two types of neurons that have been known for decades to have a role in food intake. 
     
    The team found that when they put food directly into the mouse’s stomach, brain cells called PRLH (for prolactin-releasing hormone) were activated by nutrient signals sent from the GI tract, in line with traditional thinking and the results of prior studies. 
     
    However, when they allowed the mice to eat the food as they normally would, those signals from the gut didn’t show up. Instead, the PRLH brain cells switched to a new activity pattern that was entirely controlled by signals from the mouth.  
     
    “It was a total surprise that these cells were activated by the perception of taste,” said Ly. “It shows that there are other components of the appetite-control system that we should be thinking about.” 
     
    While it may seem counterintuitive for our brains to slow eating when we’re hungry, the brain is actually using the taste of food in two different ways at the same time. One part is saying, “This tastes good, eat more,” and another part is watching how fast you’re eating and saying, “Slow down or you’re going to be sick.” 
     
    “The balance between those is how fast you eat,” said Knight. 
     
    The activity of the PRLH neurons seems to affect how palatable the mice found the food, Ly said. That meshes with our human experience that food is less appetizing once you’ve had your fill of it.  
     
    Brain cells that inspire weight-loss drugs 
     
    The PRLH-neuron-induced slowdown also makes sense in terms of timing. The taste of food triggers these neurons to switch their activity in seconds, from keeping tabs on the gut to responding to signals from the mouth.  
     
    Meanwhile, it takes many minutes for a different group of brain cells, called CGC neurons, to begin responding to signals from the stomach and intestines. These cells act over much slower time scales – tens of minutes – and can hold back hunger for a much longer period of time. 
     
    “Together, these two sets of neurons create a feed-forward, feed-back loop,” said Knight. “One is using taste to slow things down and anticipate what’s coming. The other is using a gut signal to say, ‘This is how much I really ate. Ok, I’m full now!’”  
     
    The CGC brain cells’ response to stretch signals from the gut is to release GLP-1, the hormone mimicked by Ozempic, Wegovy and other new weight-loss drugs.  
     
    These drugs act on the same region of the brainstem that Ly’s technology has finally allowed researchers to study. “Now we have a way of teasing apart what’s happening in the brain that makes these drugs work,” he said.  
     
    A deeper understanding of how signals from different parts of the body control appetite would open doors to weight-loss regimens tailored to the individual ways people eat, by optimizing how the signals from the two sets of brain cells interact, the researchers said. 
     
    The team plans to investigate those interactions, seeking to better understand how taste signals from food interact with feedback from the gut to suppress our appetite during a meal. 

    [ad_2]

    University of California, San Francisco (UCSF)

    Source link

  • Oxford experts say 1.5°C target still achievable with drastic action

    Oxford experts say 1.5°C target still achievable with drastic action

    [ad_1]

    University of Oxford 

    Smith School of Enterprise and the Environment news release

    22 November 2023

    “Not dead yet” – experts identify interventions that could rescue 1.5°C

     

    Newswise — To meet the goals of the Paris Agreement and limit global heating to 1.5°C, global annual emissions will need to drop radically over the coming decades. Today [22 Nov], a new paper from climate economists at the University of Oxford says that this goal could still be within our reach. They identify key “sensitive intervention points” that could unlock significant progress towards the Paris Agreement with the least risk and highest impact. These include:

    • Investing in clean energy technologies with consistent cost declines
    • Enacting central bank policies to reduce the value of polluting assets
    • Improving climate-related financial risk disclosure.

    ‘This is not to suggest that reaching the Paris goals will be straightforward, or easy, but like Achilles’ heel, our research points to the areas that could have an outsized impact,’ says lead author Dr Penny Mealy, associate at the Institute for New Economic Thinking, University of Oxford.

    ‘We need climate policies which are pragmatic and practical, designed with an understanding of where the economy and technologies are capable of quickly transforming our economies for the better. These are those policy areas. This is how we design policy for 1.5°C,’ affirms co-author Dr Pete Barbrook-Johnson of the Smith School of Enterprise and the Environment.

    The research also highlights the areas where interventions will be more difficult and less impactful, including nuclear fission, which would be slow to roll out and could have unintended consequences; and carbon capture and storage, which presents both high barriers and risks.

    To reach their conclusions, the authors devised a new framework for identifying sensitive intervention points, or SIPs, that have the characteristics necessary to radically decarbonize our global economy.

    SIPs include critical tipping points – like renewable energy becoming cheaper than coal; critical points in networks – like powerful political figures or important technologies; and critical points in time, or “windows of opportunity”, that might prime existing systems for change, such as the Covid-19 pandemic. These intervention points must be assessed by the ease with which they can be implemented, their impact potential, and the potential for creating risks. The authors stress that, while the framework is highly applicable to climate change, it could also be applied to solving other economic and social problems.

    The ratings provided for each SIP intervention were applied subjectively based on discussions with experts, literature research, and modelling. The framework can and should be applied regularly to reassess priorities as new data and insights become available, the authors say.

    Co-author Dr Matt Ives comments, ‘1.5°C is not dead yet, but it will take targeted and speedy interventions to bring about the non-linear change necessary to keep it alive. As COP28 nears, our research highlights key sensitive intervention points we can prioritise to help turn the tide, while providing a valuable framework for policymakers.’

    ‘Sensitive intervention points: a strategic approach to climate action’ is published today, 22 November, in the Oxford Review of Economic Policy.

    [ad_2]

    University of Oxford

    Source link

  • Cancer cells exploit cell competition to survive and spread

    Cancer cells exploit cell competition to survive and spread

    [ad_1]

    Newswise — Living cells compete with each other and try to adapt to the local environment; cells that are unable to do so are eventually eliminated. This cell competition is crucial, as surrounding normal epithelial cells use it to identify and eliminate mutant cancer cells. Studies have reported that when mammalian epithelial cells express activating mutants of “Ras” proteins, those cells are pushed toward the lumen, excreted along with other bodily waste, and thereby eliminated by competition. Epithelial cells carrying Ras mutants have been reported to be removed in this manner in several organs, including the small intestine, stomach, pancreas, and lungs. This suggests that cell competition is an innate defense system orchestrated by epithelial cells to prevent the accumulation of incidentally produced cancerous cells and thereby suppress cancer formation.

    In general, mutations in multiple genes accumulate in a stepwise manner when normal cells become cancerous. However, it is not known how cell competition is affected by this process. For instance, human colorectal cancer develops when the adenomatous polyposis coli (APC) gene becomes dysfunctional and activates “Wnt signaling,” followed by the activation of Ras signaling.

    In a recent study, a team of researchers from Japan, led by Associate Professor Shunsuke Kon of the Department of Cancer Biology, Institute of Biomedical Research and Innovation, Tokyo University of Science (TUS), examined the effects of the stepwise accumulation of gene mutations on cell competition and investigated the role of cell competition in the actual cancer formation process. Their study was published in Nature Communications on November 3, 2023, with Mr. Kazuki Nakai, a third-year PhD student at the Graduate School of Life Sciences at TUS, as the lead author.

    The study results showed that when Wnt signals were activated in epithelial cells, cell competition function was altered. Activated Ras mutant epithelial cells, which would normally be eliminated into the lumen, instead infiltrated diffusely into the tissue to form highly invasive cancerous tumors.

    As senior author Dr. Kon explains, “We discovered that in epithelial tissues where Wnt and Ras signals, which commonly occur in human colorectal cancer, are activated in stages, the function of cell competition is altered. It was revealed that the production of cancer cells that diffusely infiltrate into the interstitium is promoted.”

    Further, the research team identified increased expression of matrix metalloproteinase 21 (MMP21) as one of the mechanisms underlying the production of diffusely invasive cancer cells in early colorectal cancer due to abnormal cell competition. This, in turn, was shown to be directly caused by activation of nuclear factor kappa B (NF-κB) signaling via the innate immune system. Blocking NF-κB signaling restored the luminal elimination of Ras mutant epithelial cells. These findings raise intriguing questions, such as “How do transformed cells sense the cellular context that leads to the NF-κB-MMP21 pathway?” and “How do surrounding cells recognize transformed cells and prepare them for extrusion?” These questions will almost certainly need to be addressed in the future.

    The results of this research show that cancer cells with accumulated, sequential genetic mutations alter the function of cell competition and use it to enhance their own invasive ability. Instead of being eliminated to the lumen, they infiltrate into the tissue, producing high-grade cancer cells. While the research team did note that the cancer histopathology of the mice used in this study resembled diffuse-type cancer in humans, future research is needed to determine whether the NF-κB-MMP21 pathway is relevant to other cancers. For instance, investigating scirrhous gastric cancer, a typical diffuse-type cancer, would be particularly interesting.

    Overall, these findings demonstrate that Wnt activation disrupts cell competition and confers invasive properties on transformed cells to escape primary epithelial sites. Understanding how the molecular landscape is remodeled to change the fate of cancer cells with high mutational burdens could be used for therapeutic purposes. This could be of interest to researchers focused on Wnt signaling or cancer research more broadly, such as those at the Koch Institute for Integrative Cancer Research at MIT and at Cancer Research UK.

    Dr. Kon concludes by saying, “This study further brings forth the prospect that cell competition constrains the order of appearance of mutations during tumor development, highlighting a link between cell competition and carcinogenesis. We hope that this will pave the way for the development of new cancer treatments from the standpoint of cell competition and infiltration for the benefit of our society.”

     

    ***

    Reference                     

    DOI: https://doi.org/10.1038/s41467-023-42774-6

     

    About The Tokyo University of Science

    Tokyo University of Science (TUS) is a well-known and respected university, and the largest science-specialized private research university in Japan, with four campuses in central Tokyo and its suburbs and in Hokkaido. Established in 1881, the university has continually contributed to Japan’s development in science by instilling a love of science in researchers, technicians, and educators.

    With a mission of “Creating science and technology for the harmonious development of nature, human beings, and society,” TUS has undertaken a wide range of research from basic to applied science. TUS has embraced a multidisciplinary approach to research and undertaken intensive study in some of today’s most vital fields. TUS is a meritocracy where the best in science is recognized and nurtured. It is the only private university in Japan that has produced a Nobel Prize winner and the only private university in Asia to produce Nobel Prize winners within the natural sciences field.

    Website: https://www.tus.ac.jp/en/mediarelations/

     

    About Dr. Shunsuke Kon from Tokyo University of Science

    Dr. Shunsuke Kon is a Junior Associate Professor in the Cancer Biology Department of the Research Institute for Biomedical Sciences. He obtained his Ph.D. from the Tohoku University Graduate School of Life Sciences in 2008. He was previously associated with the Institute of Genetic Medicine at Hokkaido University. His primary research interest has been in the field of tumor biology. He has more than 20 publications to his credit. In addition, he has received the Best Articles of the Year award.

    [ad_2]

    Tokyo University of Science

    Source link

  • Obesity may not be the only factor to link ultra-processed foods to higher risk of mouth, throat and oesophagus cancers

    Obesity may not be the only factor to link ultra-processed foods to higher risk of mouth, throat and oesophagus cancers

    [ad_1]

    Newswise — Eating more ultra-processed foods (UPFs) may be associated with a higher risk of developing cancers of the upper aerodigestive tract (including the mouth, throat and oesophagus), according to a new study led by researchers from the University of Bristol and the International Agency for Research on Cancer (IARC). The authors of this international study, which analysed diet and lifestyle data on 450,111 adults who were followed for approximately 14 years, say obesity associated with the consumption of UPFs may not be the only factor to blame. The study is published today [22 November] in the European Journal of Nutrition.

    Several studies have identified an association between UPF consumption and cancer, including a recent study which looked at the association between UPFs and 34 different cancers in the largest cohort study in Europe, the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort.

    As more evidence emerges about the associations between eating UPFs and adverse health outcomes, researchers from the Bristol Medical School and IARC wanted to explore this further. Since many UPFs have an unhealthy nutritional profile, the team sought to establish whether the association between UPF consumption and head and neck cancer and oesophageal adenocarcinoma (a cancer of the oesophagus) in EPIC could be explained by an increase in body fat.

    Results from the team’s analyses showed that eating 10% more UPFs is associated with a 23% higher risk of head and neck cancer and a 24% higher risk of oesophageal adenocarcinoma in EPIC. Increased body fat only explained a small proportion of the statistical association between UPF consumption and the risk of these upper-aerodigestive tract cancers.

    Fernanda Morales-Berstein, a Wellcome Trust PhD student at the University of Bristol and the study’s lead author, explained: “UPFs have been associated with excess weight and increased body fat in several observational studies. This makes sense, as they are generally tasty, convenient and cheap, favouring the consumption of large portions and an excessive number of calories. However, it was interesting that in our study the link between eating UPFs and upper-aerodigestive tract cancer didn’t seem to be greatly explained by body mass index and waist-to-hip ratio.”

    The authors suggest that other mechanisms could explain the association. For example, additives including emulsifiers and artificial sweeteners which have been previously associated with disease risk, and contaminants from food packaging and the manufacturing process, may partly explain the link between UPF consumption and upper-aerodigestive tract cancer in this study.

    However, Fernanda Morales-Berstein and colleagues did add caution regarding their findings and suggest that the associations between UPF consumption and upper-aerodigestive tract cancers found in the study could be affected by certain types of bias. This would explain why they found evidence of an association between higher UPF consumption and increased risk of accidental deaths, which is highly unlikely to be causal.

    George Davey Smith, Professor of Clinical Epidemiology and Director of the MRC Integrative Epidemiology Unit at the University of Bristol, and co-author on the paper, said: “UPFs are clearly associated with many adverse health outcomes, yet whether they actually cause these, or whether underlying factors such as general health-related behaviours and socioeconomic position are responsible for the link, is still unclear, as the association with accidental deaths draws attention to.”

    Inge Huybrechts, head of the Lifestyle Exposures and Interventions team at IARC, added: “Cohorts with long-term dietary follow-up assessments, considering also contemporary consumption habits, are needed to replicate this study’s findings, as the EPIC dietary data were collected in the 1990s, when the consumption of UPFs was still relatively low. Such associations may potentially be stronger in cohorts including recent dietary follow-up assessments.”

    Further research is needed to identify other mechanisms, such as food additives and contaminants, which may explain the links observed. However, based on the finding that body fat did not greatly explain the link between UPF consumption and upper-aerodigestive tract cancer risk in this study, Fernanda Morales-Berstein, suggested: “Focussing solely on weight loss treatment, such as Semaglutide, is unlikely to greatly contribute to the prevention of upper-aerodigestive tract cancers related to eating UPFs.”

    Dr Helen Croker, Assistant Director of Research and Policy at World Cancer Research Fund, added: “This study adds to a growing pool of evidence suggesting a link between UPFs and cancer risk. The association between a higher consumption of UPFs and an increased risk of developing upper-aerodigestive tract cancer supports our Cancer Prevention Recommendations to eat a healthy diet, rich in wholegrains, vegetables, fruit, and beans.”

    The study was funded by the Wellcome Trust; Cancer Research UK; World Cancer Research Fund International; Institut National du Cancer; Horizon 2020 ‘Dynamic longitudinal exposome trajectories in cardiovascular and metabolic non-communicable diseases’ study; University of Bristol Vice Chancellor’s Fellowship; British Heart Foundation and the Medical Research Council.

    Paper

    ‘Ultra-processed foods, adiposity and risk of head and neck cancer and oesophageal adenocarcinoma in the European Prospective Investigation into Cancer and Nutrition study: a mediation analysis’ by Fernanda Morales‑Berstein et al. in the European Journal of Nutrition

    [ad_2]

    University of Bristol

    Source link

  • Skunks’ warning stripes less prominent where predators are sparse, study finds

    Skunks’ warning stripes less prominent where predators are sparse, study finds

    [ad_1]

    Newswise — Striped skunks are less likely to evolve with their famous black and white markings where the threat of predation from mammals is low, scientists from the University of Bristol, Montana and Long Beach, California have discovered.

    Skunks’ iconic black and white colouration signals their toxic anal spray. However, some skunks show very varied fur colour, ranging from all black, to thin or thick black and white bands, to all-white individuals. Variation is huge across the North American continent.

    Findings published today in Evolution suggest that this is a result of relaxed selection, which occurs when environmental change eliminates or weakens selection on a normally important trait – in this case, black and white pelage.

    Prof Tim Caro from Bristol’s School of Biological Sciences explained: “Warning coloration is an antipredator defence whereby a conspicuous signal advertises the ability of prey to escape predation, often because it is toxic or has spines or is pugnacious.”

    “Usually predators have to learn the significance of this signal and so it is predicted that warning colouration will look very similar across prey individuals of the same, as well as perhaps different, prey species to be an effective education tool. Yet some warningly coloured prey show rather different advertisements even within the same species.”

    Researcher Hannah Walker from the University of Montana documented the distribution of these different pelage colours across their range in North America using museum specimens. She plotted these against a menu of variables that the team thought might drive this variation in coloration.

    The team found that in locations in which skunks overlapped with rather few mammalian predators that might be capable of killing them, fur colour was varied even within the same litter.

    Where there were many species of predators that were a danger to them, they showed little variation.

    The team also examined owl and raptorial predators; while the effects were the same, they were not as evident. This is perhaps because birds have a poorer sense of smell and are less deterred by smelly anal defences.

    “Our results indicate that relaxed predation pressure is key to warning signal variation in this species, whereas stronger pressure leads to signal conformity and stronger signals,” said Professor Caro.

    “We now know why not all skunks look alike, and perhaps why members of other warningly coloured species look different from each other.”

    Now the team plan to see if this occurs across other skunk species whose geographic ranges overlap in North America.

    Prof Caro concluded: “If relaxed selection operates within species, it should do so across prey species too. More broadly, this study provides another brick in the wall of explaining the evolution of coloration in nature.”

     

    Paper:

    ‘Predation risk drives aposematic signal conformity’ by Hannah Walker, Tim Caro et al in Evolution

    [ad_2]

    University of Bristol

    Source link

  • The Promising Future of Personalized Glaucoma Care

    The Promising Future of Personalized Glaucoma Care

    [ad_1]

    Newswise — In a recent article in The Lancet, David S. Friedman, MD, PhD, MPH, director of the Glaucoma Service at Mass Eye and Ear, and colleagues describe the current state of glaucoma care and what advances the future might bring to patients. In this Q+A, he discusses how far treatment has come and what can be expected in the near future.

    Glaucoma is currently the second-leading cause of blindness worldwide, and in the United States alone, about 3 million Americans live with glaucoma. Glaucoma is often referred to as the “sneak thief of sight” because many people do not realize they have it until the disease is severe. The damage from glaucoma is irreversible, so earlier detection and treatment is essential to avoid unnecessary blindness. 

    In the paper published last month in The Lancet, David S. Friedman, MD, PhD, MPH, director of the Glaucoma Service at Mass Eye and Ear and colleagues discuss the tremendous need for better detection and treatment of glaucoma. Dr. Friedman spoke with Focus to describe some of the current limitations in glaucoma care and what research is underway to improve care for patients.          

    What exactly is glaucoma?

    Glaucoma is a specific form of damage to the optic nerve. The optic nerve is the cable that connects your eye to your brain and if it is damaged, vision is not normal. Early in glaucoma there are usually few or no symptoms. As it gets worse, there can be some difficulty when going from bright to dark or from dark to bright. Later in the disease the side (peripheral) vision is lost, and it can be difficult to do many daily activities like moving around easily and reading.

    How does glaucoma impact the daily lives of your patients?

    Patients with glaucoma have several limitations: they are more afraid of falling and fall more frequently, they walk more slowly, and reading is more difficult than for those without glaucoma. With more advanced disease they are more likely to stop driving. Severe glaucoma can lead to substantial limitations, but fortunately, most patients under care retain good useful vision.

    A photo representation of vision loss a person with glaucoma may experience. Credit: National Eye Institute

    Who is more at risk for glaucoma?

    Glaucoma is much more common with aging, so older people are more likely to have glaucoma than younger people. Glaucoma is also more common in certain groups, with African Americans having nearly four times the rate of Whites. Hispanic populations, especially older ones, also have high rates, similar to those of African Americans.

    Can glaucoma be cured?

    Glaucoma cannot be cured. The damage to the nerve that occurs, at present, cannot be reversed. That said, our treatments, which can include eye drop medications, laser treatments and surgery, can help retain the remaining vision and the great majority of patients are mostly stable when under care. Many promising new therapies are being investigated that may lead to vision restoration in glaucoma patients, but none are presently available for care.

    What are some of the current knowledge gaps in glaucoma care?

    Research is uncovering more information about glaucoma than ever before, such as the role of certain genes in glaucoma. Despite this incredible progress, we still do not have a perfect screening test that could be administered easily and accurately to identify glaucoma patients. Many do not realize they have glaucoma until the disease is advanced. We also struggle to determine which patients are getting worse despite being treated for glaucoma. Improvements here would allow for more rapid and targeted interventions for patients.

    There also are treatment gaps at present. The only effective treatment for glaucoma remains lowering eye pressure. Yet, about half of all patients with glaucoma have intraocular pressure in the normal range so factors other than eye pressure play an important role in why some people get glaucoma. We need treatments that protect the nerve through other means.

    Dr. David S. Friedman performs an eye exam on a patient at Mass Eye and Ear.

    What are some innovative areas of research that excite you for the future?

    There are several areas of study that are quite exciting and suggest a brighter future ahead.

    Better testing of patients should be widely available in the coming years. There are several companies that have developed virtual reality headsets that can test vision and side vision and these likely will become a standard approach to monitoring glaucoma over time. 

    For treatment, improved delivery of eye medications should occur soon. Some approaches allow for long-term delivery of eye pressure-lowering medications through implanted devices in the eye, or by wearing a contact lens that can deliver the drug. Perhaps even more exciting are novel approaches to protect the nerve or to regenerate the nerve, including gene therapy and stem cell therapy. These approaches are still in early development but hopefully can lead to clinical trials over the next few years. Gene therapies are currently used for other retinal conditions, such as forms of inherited retinal blindness. Research into stem cells is also promising, and some day we may be able to transplant these cells to replace the tissues damaged in glaucoma.

    Another exciting approach being actively researched looks at how to apply our knowledge of glaucoma genetics to provide personalized care to patients. Some treatments may work better with patients with specific genes, for example. There are numerous studies underway, including several at Mass Eye and Ear, that are using genetics, artificial intelligence and advanced imaging to develop personalized risk scores for patients that could better predict how their glaucoma will progress, which might lead to better and more personalized care.

    How has glaucoma care evolved since you started practicing, and where do you see disease management evolving over the next 5 years?

    There have been major advances in how we image the nerve and the nerve fiber layer that have greatly improved our ability to monitor patients and diagnose glaucoma. This has been a dramatic change. We also have safer procedures for lowering eye pressure. While these are important advances that have benefitted our patients, much still needs to be done.

    We now have tremendous knowledge about the genetics of glaucoma, and this will transform how we care for patients in the coming years.

    There is also much more clinical trial evidence for how we should be treating patients with glaucoma.

    [ad_2]

    Massachusetts Eye and Ear

    Source link

  • These bats use their penis as an “arm” during sex but not for penetration

    These bats use their penis as an “arm” during sex but not for penetration

    [ad_1]

    Newswise — Mammals usually mate via penetrative sex, but researchers report November 20 in the journal Current Biology that a species of bat, the serotine bat (Eptesicus serotinus), mates without penetration. This is the first time non-penetrative sex has been documented in a mammal. The bats’ penises are around seven times longer than their partners’ vaginas and have a “heart-shaped” head that is seven times wider than the vaginal opening. Both the penises’ size and shape would make penetration post-erection impossible, and the researchers show that, rather than functioning as a penetrative organ, the bats use their oversized penises like an extra arm to push the female’s tail sheath out of the way so that they can engage in contact mating—a behavior that resembles “cloacal kissing” in birds.

    “By chance, we had observed that these bats have disproportionately long penises, and we were always wondering ‘how does that work?’,” says first author Nicolas Fasel of the University of Lausanne. “We thought maybe it’s like in the dog where the penis engorges after penetration so that they are locked together, or alternatively maybe they just couldn’t put it inside, but that type of copulation hasn’t been reported in mammals until now.”

    Very little is known about how bats have sex, and most previous observations of bats mating have only captured the backs of mating pairs. In this study, the researchers were able to observe the bats’ genitalia during copulation by using footage from cameras that were placed behind a grid that the bats could climb on.

    Fasel collaborated with a bat rehabilitation center in Ukraine that opportunistically filmed mating pairs and with a bat enthusiast and citizen scientist, Jan Jeucken, who filmed hours of footage of serotine bats in a church attic in the Netherlands. Altogether, the team analyzed 97 mating events—93 from the Dutch church and 4 from the Ukrainian bat rehabilitation center.

    The video recordings revealed that the bats do not engage in penetrative sex. The researchers did not observe penetration at any point during the recorded mating events and noted that the erectile tissues of the penis were enlarged before they made contact with the vulva. During mating, the male bats grasped their partners by the nape and moved their pelvises (and fully erect penises) in a probing fashion until they made contact with the female’s vulva, at which point they remained still and held the females in a long embrace. On average, these interactions lasted less than 53 minutes, but the longest event extended to 12.7 hours. Following copulation, the researchers observed that the female bats’ abdomens appeared wet, suggesting the presence of semen, but further research is needed to confirm that sperm was transferred during these putative mating events.

    The researchers also characterized the morphology of serotine bat genitalia by measuring the erect penises of live bats that were captured as part of other research studies (serotine and other vesper bats are conveniently known to get erections under anesthesia) and by performing necropsies on bats that died at bat rehabilitation centers. Their measurements showed that, when erect, serotine bat penises are around seven times longer and seven times wider than serotine bat vaginas, and about a fifth as long as the bats’ head-body length. The bats also have unusually long cervixes, which could help female bats select and store sperm.

    The researchers speculate that the bats may have evolved their oversized penises in order to push aside the female bats’ tail membranes, which females may use to avoid sex. “Bats use their tail membranes for flying and to capture the insects, and female bats also use them to cover their lower parts and protect themselves from males,” says Fasel, “but the males can then use these big penises to overcome the tail membrane and reach the vulva.”

    Next, the researchers plan to study bat mating behavior in more natural contexts, and they are also investigating penis morphology and mating behavior in other bat species. “We are trying to develop a bat porn box, which will be like an aquarium with cameras everywhere,” says Fasel.

     

    ###

    This research was supported by the National Science Centre of Poland, the Swiss National Science Foundation, and the Oleksandr Feldman Foundation.

    Current Biology, Fasel et al., “Mating without intromission in a bat, a novel copulatory pattern in mammals” https://cell.com/current-biology/fulltext/S0960-9822(23)01304-0

    Current Biology (@CurrentBiology), published by Cell Press, is a bimonthly journal that features papers across all areas of biology. Current Biology strives to foster communication across fields of biology, both by publishing important findings of general interest and through highly accessible front matter for non-specialists. Visit: http://www.cell.com/current-biology. To receive Cell Press media alerts, contact [email protected].

    [ad_2]

    Cell Press

    Source link

  • AI discovers formula for anticipating giant waves.

    AI discovers formula for anticipating giant waves.

    [ad_1]

    Newswise — Long considered myth, freakishly large rogue waves are very real and can split apart ships and even damage oil rigs. Using 700 years’ worth of wave data from more than a billion waves, scientists at the University of Copenhagen and University of Victoria have used artificial intelligence to find a formula for how to predict the occurrence of these maritime monsters. The new knowledge can make shipping safer.

    Stories about monster waves, called rogue waves, have been the lore of sailors for centuries. But when a 26-metre-high rogue wave slammed into the Norwegian oil platform Draupner in 1995, digital instruments were there to capture and measure the North Sea monster. It was the first time that a rogue wave had been measured, providing scientific evidence that abnormal ocean waves really do exist.

    Since then, these extreme waves have been the subject of much study. And now, researchers from the University of Copenhagen’s Niels Bohr Institute have used AI methods to discover a mathematical model that provides a recipe for how – and not least when – rogue waves can occur.

    With the help of enormous amounts of big data about ocean movements, researchers can predict the likelihood of being struck by a monster wave at sea at any given time.

    “Basically, it is just very bad luck when one of these giant waves hits. They are caused by a combination of many factors that, until now, have not been combined into a single risk estimate. In the study, we mapped the causal variables that create rogue waves and used artificial intelligence to gather them in a model which can calculate the probability of rogue wave formation,” says Dion Häfner.

    Häfner is a former PhD student at the Niels Bohr Institute and first author of the scientific study, which has just been published in the prestigious journal Proceedings of the National Academy of Sciences (PNAS). 

    Rogue waves happen every day

    In their model, the researchers combined available data on ocean movements and the sea state, as well as water depths and bathymetric information. Most importantly, wave data was collected from buoys in 158 different locations around US coasts and overseas territories that collect data 24 hours a day. When combined, this data – from more than a billion waves – contains 700 years’ worth of wave height and sea state information.

    The researchers analyzed the many types of data to find the causes of rogue waves, defined as waves that are at least twice as high as the surrounding waves – including extreme rogue waves that can be over 20 meters high. With machine learning, they transformed it all into an algorithm that was then applied to their dataset.
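    The “twice as high as the surrounding waves” criterion can be sketched in a few lines of code. The snippet below is a toy illustration only: the helper names and wave heights are invented, and the study’s real pipeline involves far more than this simple threshold test. It takes significant wave height as the mean of the highest third of observed waves, a common convention.

```python
# Toy sketch of the rogue-wave criterion described above: a wave counts as
# "rogue" if its height is at least twice the significant wave height Hs,
# taken here as the mean of the highest third of observed waves.
# The wave heights below are invented for illustration.

def significant_wave_height(heights):
    """Mean height of the highest third of the observed waves."""
    top_third = sorted(heights, reverse=True)[: max(1, len(heights) // 3)]
    return sum(top_third) / len(top_third)

def find_rogues(heights, factor=2.0):
    """Return waves at least `factor` times the significant wave height."""
    hs = significant_wave_height(heights)
    return [h for h in heights if h >= factor * hs]

heights = [1.2, 1.5, 0.9, 1.8, 2.1, 1.1, 1.4, 8.0, 1.3, 1.6]
print(find_rogues(heights))  # only the 8.0 m wave clears the threshold
```

    Here Hs is about 3.97 m (the mean of 8.0, 2.1 and 1.8), so only the 8.0 m wave exceeds twice that value.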

    “Our analysis demonstrates that abnormal waves occur all the time. In fact, we registered 100,000 waves in our dataset that can be defined as rogue waves. This is equivalent to around one monster wave occurring every day at any random location in the ocean. However, they aren’t all monster waves of extreme size,” explains Johannes Gemmrich, the study’s second author.

    Artificial intelligence as a scientist

    In the study, the researchers were helped by artificial intelligence. They used several AI methods, including symbolic regression, which gives an equation as output rather than just returning a single prediction as traditional AI methods do.

    By examining more than 1 billion waves, the researchers’ algorithm analyzed its own way to the causes of rogue waves and condensed them into an equation that describes the recipe for a rogue wave. The AI learns the causality of the problem and communicates that causality to humans in the form of an equation that researchers can analyze and incorporate into their future research.
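    For readers unfamiliar with symbolic regression, the toy example below conveys the core idea: instead of returning only predictions, the search returns the best-fitting formula as human-readable text. The candidate library, data and names here are invented for illustration and are not the study’s actual model, which searched a vastly larger expression space.

```python
import math
import random

# Toy symbolic regression: recover y = 2*x + sin(x) from noisy samples by
# scoring a small, hand-picked library of candidate formulas. The output
# is a readable equation, not a black-box prediction.
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + math.sin(x) + random.gauss(0, 0.01) for x in xs]

candidates = {
    "a*x": lambda x, a: a * x,
    "a*x + sin(x)": lambda x, a: a * x + math.sin(x),
    "a*sin(x)": lambda x, a: a * math.sin(x),
}

def mse(form, a):
    """Mean squared error of a candidate formula with coefficient a."""
    f = candidates[form]
    return sum((f(x, a) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Grid-search every (formula, coefficient) pair and keep the best fit.
best = min(
    ((form, a) for form in candidates for a in [n / 10 for n in range(1, 40)]),
    key=lambda fa: mse(*fa),
)
print(f"recovered formula: {best[0]} with a = {best[1]}")
```

    The search correctly prefers “a*x + sin(x)” with a = 2.0, because the other candidates leave a large systematic residual no coefficient can remove.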

    “Over decades, Tycho Brahe collected astronomical observations from which Kepler, with lots of trial and error, was able to extract Kepler’s Laws. Dion used machines to do with waves what Kepler did with planets. For me, it is still shocking that something like this is possible,” says Markus Jochum.

    Phenomenon known since the 1700s

    The new study also breaks with the common perception of what causes rogue waves. Until now, it was believed that the most common cause of a rogue wave was when one wave briefly combined with another and stole its energy, causing one big wave to move on.

    However, the researchers establish that the most dominant factor in the materialization of these freak waves is what is known as “linear superposition”. The phenomenon, known about since the 1700s, occurs when two wave systems cross over each other and reinforce one another for a brief period of time.

    “If two wave systems meet at sea in a way that increases the chance to generate high crests followed by deep troughs, the risk of extremely large waves arises. This is knowledge that has been around for 300 years and which we are now supporting with data,” says Dion Häfner. 
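    The superposition effect Häfner describes is easy to reproduce numerically. In this toy calculation (amplitudes and frequencies chosen arbitrarily, not taken from the study), two wave trains of 2.0 m and 1.5 m amplitude briefly align in phase and produce a combined crest approaching 3.5 m:

```python
import math

# Linear superposition of two independent sinusoidal wave trains: where
# their phases align, the crest heights add. All values are illustrative.
def crossing_seas(t, a1=2.0, f1=0.10, a2=1.5, f2=0.13):
    """Surface elevation (m) at time t (s) from the sum of two wave trains."""
    return a1 * math.sin(2 * math.pi * f1 * t) + a2 * math.sin(2 * math.pi * f2 * t)

# Sample 200 seconds at 10 Hz and find the largest combined crest.
samples = [crossing_seas(t / 10) for t in range(2000)]
print(f"largest single-train crest: 2.0 m; largest combined crest: {max(samples):.2f} m")
```

    Neither wave train alone ever exceeds its own amplitude, yet the combined sea briefly produces crests well above 3 m wherever the two systems reinforce each other.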

    Safer shipping

    The researchers’ algorithm is good news for the shipping industry, which at any given time has roughly 50,000 cargo ships sailing around the planet. Indeed, with the help of the algorithm, it will be possible to predict when this “perfect” combination of factors is present to elevate the risk of a monster wave that could pose a danger for anyone at sea.

    “As shipping companies plan their routes well in advance, they can use our algorithm to get a risk assessment of whether there is a chance of encountering dangerous rogue waves along the way. Based on this, they can choose alternative routes,” says Dion Häfner.

    Both the algorithm and research are publicly available, as are the weather and wave data deployed by the researchers. Therefore, Dion Häfner says that interested parties, such as public authorities and weather services, can easily begin calculating the probability of rogue waves. And unlike many other models created using artificial intelligence, all of the intermediate calculations in the researchers’ algorithm are transparent.

    “AI and machine learning are typically black boxes that don’t increase human understanding. But in this study, Dion used AI methods to transform an enormous database of wave observations into a new equation for the probability of rogue waves, which can be easily understood by people and related to the laws of physics,” concludes Professor Markus Jochum, Dion’s thesis supervisor and co-author.

    Links:

    Read the scientific paper “Machine-Guided Discovery of a Real-World Rogue Wave Model” published in PNAS: https://www.pnas.org/cgi/doi/10.1073/pnas.2306275120

    Read the Wikipedia list of registered rogue waves: https://en.wikipedia.org/wiki/List_of_rogue_waves

    Dion Häfner’s research continues at Pasteur Labs.

    [ad_2]

    University of Copenhagen, Faculty of Science

    Source link

  • AI system self-organises to develop features of brains of complex organisms

    AI system self-organises to develop features of brains of complex organisms


    Newswise — Cambridge scientists have shown that placing physical constraints on an artificially intelligent system – in much the same way that the human brain has to develop and operate within physical and biological constraints – allows it to develop features of the brains of complex organisms in order to solve tasks.

    As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.

    Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”

    Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”

    In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains.

    Instead of real neurons, the system used computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.

    In their system, however, the researchers applied a ‘physical’ constraint on the system. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organised.

    The researchers gave the system a simple task to complete – in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point.

    One of the reasons the team chose this particular task is because to complete it, the system needs to maintain a number of elements – start location, end location and intermediate steps – and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.

    Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.

    With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
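    One simple way to express this “longer connections are costlier” rule is to penalize each connection’s strength by the Euclidean distance between the two nodes it joins, and add that wiring cost to the training objective. The sketch below is an illustration of that idea, not the paper’s actual implementation; the node positions and weights are hypothetical:

    ```python
    import math

    def wiring_cost(weights, positions):
        """Total wiring cost of a network: each connection's strength is
        weighted by the distance between its endpoints (an L1-by-distance
        penalty). Minimizing task loss + this cost discourages long links."""
        cost = 0.0
        for (i, j), w in weights.items():
            (xi, yi), (xj, yj) = positions[i], positions[j]
            cost += abs(w) * math.hypot(xi - xj, yi - yj)
        return cost

    # Hypothetical 3-node network in a 2D virtual space.
    positions = {0: (0.0, 0.0), 1: (0.0, 1.0), 2: (5.0, 0.0)}
    weights = {(0, 1): 0.8, (0, 2): 0.8}  # equal strength, unequal length
    # The long 0-2 link contributes five times the cost of the short 0-1
    # link, so training under this penalty will tend to shrink or drop it.
    ```

    Under such a penalty, keeping a strong long-range connection only pays off if it contributes enough to task performance – which is exactly the trade-off the researchers describe.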

    When the system was asked to perform the task under these constraints, it used some of the same tricks used by real human brains to solve the task. For example, to get around the constraints, the artificial systems started to develop hubs – highly connected nodes that act as conduits for passing information across the network.
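    A hub, in network terms, is simply a node with many more connections than its neighbours. The paper does not specify how hubs were identified, but a minimal degree-count sketch (all names and the example network here are illustrative) looks like this:

    ```python
    from collections import Counter

    def find_hubs(edges, threshold):
        """Return nodes whose connection count (degree) meets the threshold,
        i.e. candidate 'hubs' that relay information across the network."""
        degree = Counter()
        for i, j in edges:
            degree[i] += 1
            degree[j] += 1
        return sorted(n for n, d in degree.items() if d >= threshold)

    # Hypothetical network: node 0 connects to every other node.
    edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]
    hubs = find_hubs(edges, threshold=3)  # node 0 is the only hub
    ```

    Routing traffic through a few such well-placed hubs lets the network keep most individual connections short, which is why hubs emerge naturally under a wiring-cost constraint.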

    More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.

    Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint – it’s harder to wire nodes that are far apart – forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.”


    Understanding the human brain

    The team are hopeful that their AI system could begin to shed light on how these constraints shape differences between people’s brains, and contribute to the differences seen in people who experience cognitive or mental health difficulties.

    Co-author Professor John Duncan from the MRC CBSU said: “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”

    Achterberg added: “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”


    Implications for designing future AI systems

    The findings are likely to be of interest to the AI community, too, where they could allow for the development of more efficient systems, particularly in situations where there are likely to be physical constraints.

    Dr Akarca said: “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”

    Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use.

    Achterberg said: “If you want to build an artificially intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks from those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”

    This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours.

    Achterberg added: “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy, and so, to balance these energetic constraints with the amount of information they need to process, they will probably need a brain structure similar to ours.”

    The research was funded by the Medical Research Council, Gates Cambridge, the James S McDonnell Foundation, Templeton World Charity Foundation and Google DeepMind.

    Reference

    Achterberg, J & Akarca, D et al. Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence; 20 Nov 2023; DOI: 10.1038/s42256-023-00748-9


    University of Cambridge
