ReportWire

Tag: All Journal News

  • Increased electronic health record time associated with enhanced quality outcomes in primary care

    In the United States, the electronic health record (EHR) has become increasingly prevalent in the day-to-day practice of physicians, with primary care physicians (PCPs) spending the most time in the EHR. Yet, the association between time spent in the EHR and quality of ambulatory care was unclear before Brigham and Women’s researchers investigated this critical intersection. In their cross-sectional study of 291 primary care physicians, the team tracked ambulatory quality measures (year-end, PCP panel-level achievement of targets for hemoglobin A1C level control, lipid management, hypertension control, diabetes screening, and breast cancer screening) and found a significant, positive relationship between EHR time and some of these measures — panel-wide hemoglobin A1C level control, hypertension control, and breast cancer screening. These associations suggest that extra time spent in the EHR may benefit certain care outcomes, particularly for doctors who spend less than half their time seeing patients.  

    “Although increased EHR time is associated with burnout, it may represent a level of thoroughness or communication that enhances certain outcomes,” said lead author Lisa Rotenstein, MD, MBA, of the Primary Care Center of Excellence at the Brigham. “It may be useful for future studies to characterize payment models, workflows, and technologies that enable high-quality ambulatory care delivery while minimizing EHR burden.”

    Read more in JAMA Network Open.

    Brigham and Women’s Hospital

  • New machine-learning technique for classifying key immune cells has implications for a suite of diseases

    Newswise — Researchers from Trinity College Dublin have developed a new, machine learning-based technique to accurately classify the state of macrophages, which are key immune cells. Classifying macrophages is important because they can modify their behaviour and act as pro- or anti-inflammatory agents in the immune response. As a result, the work has a suite of implications for research and has the potential to one day make a major societal impact.

    For example, this new approach could be of use to drug designers looking to create therapies targeting diseases and auto-immune conditions such as diabetes, cancer and rheumatoid arthritis – all of which are impacted by cellular metabolism and macrophage function. 

    Because classifying macrophages allows scientists to directly distinguish between macrophage states – based only on their metabolic response under certain conditions – this new information could be used as a diagnosis tool, or to highlight the role of a particular cell type in a disease environment. 
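    The release does not describe the model itself, but the core idea — assigning a macrophage state from a handful of metabolic measurements — can be sketched with a toy nearest-centroid classifier. The feature names, numbers, and labels below are invented purely for illustration; the actual study used 2P-FLIM data and a different, unspecified learning method.

```python
import math

# Toy sketch: classify macrophage state (pro- vs anti-inflammatory)
# from two hypothetical metabolic features, e.g. mean NAD(P)H fluorescence
# lifetime (ns) and the fraction of protein-bound NAD(P)H.
# All numbers are invented for illustration only.

TRAINING = {
    "pro-inflammatory":  [(0.8, 0.20), (0.9, 0.25), (0.7, 0.22)],
    "anti-inflammatory": [(1.4, 0.45), (1.5, 0.50), (1.3, 0.48)],
}

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Assign the label whose class centroid is nearest in feature space."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((0.85, 0.23)))  # falls in the pro-inflammatory cluster
print(classify((1.45, 0.47)))  # falls in the anti-inflammatory cluster
```

    In practice a measured metabolic profile would simply be mapped to the nearest learned state, which is what makes the approach usable as a label-free diagnostic readout.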

    The landmark research, which used human macrophages in experiments, was led by Michael Monaghan, Associate Professor in Biomedical Engineering at Trinity. The work brought together biomedical engineers, computer scientists and immunologists and has just been published in leading journal eLife. Professor Monaghan comments: 

    “Currently, there are no other methods that employ artificial intelligence-based, machine learning approaches to macrophage classification. A number of different techniques are currently used to classify macrophages, but all of these have significant drawbacks. 

    “Our method uses a 2-photon fluorescence lifetime imaging microscope (2P-FLIM), which is unique to Trinity and to Ireland. 2P-FLIM does not require sample pre-treatment, can be used to follow changes in metabolism non-invasively and in real-time – which opens the door to tracking disease progression and/or physiological response to therapies — and it also requires a lower number of cells compared with conventional techniques.”

    Nuno Neto, PhD Candidate in the School of Engineering, added: 

    “It is becoming increasingly clear that to solve many of society’s greatest problems, we need to take multi-disciplinary approaches to harness the expertise of people working in different fields. 

    “Trinity is rightly known as a leader in immunometabolism research, with many of our scientists focusing on how it regulates immune cell response, and how immune cell metabolism is impacted in diseases. This study benefits from that expertise, but also bridges the use of advanced computer science approaches and utilises an advanced microscope from the Biomedical Engineering Department with a regime never reported previously. It thus serves as a prime example of inter-departmental collaboration in a multidisciplinary field.”

    Nuno Neto’s Doctoral Studies are supported by a Trinity College Dublin Provost’s PhD Award and Professor Monaghan is a Funded Investigator in the Science Foundation Ireland (SFI) Centres AMBER and CÚRAM. Trinity’s FLIM Core Unit directed by Professor Monaghan was established using an SFI Infrastructure Programme: Category D Opportunistic Funds Call. 

    Trinity College Dublin

  • Despite commitments, Brazil’s beef sector tainted by purchases from protected lands in Amazon basin

    Newswise — MADISON – Depending on where it’s from, your next steak could come with a side of illegal deforestation.

    That’s because despite improvements by meatpackers to keep their supply chains free of cattle grazed on protected or illegally deforested lands, many slaughterhouses in Brazil — the world’s top beef exporter — continue to purchase illegally pastured animals on a large scale.

    A new study published Oct. 18 in the journal Conservation Letters underscores the depth of the problem. Researchers from the University of Wisconsin–Madison and Vrije Universiteit Amsterdam found that over a five-year period, millions of cattle slaughtered for beef spent at least part of their lives grazing in protected areas of the Brazilian Amazon, including on indigenous lands.

    “Protected areas are the cornerstone of Brazil’s conservation efforts and are arguably the most effective way that we have to conserve forests and the biodiversity inside of them,” says Holly Gibbs, a UW–Madison professor of geography and senior author of the study. “That meatpackers are continuing to buy from properties in areas that are under strict protection is alarming.”

    Ranchers and slaughterhouses in Brazil are required to share information about where animals are transported, primarily for the purpose of monitoring their health. When coupled with property records, this information is also useful for identifying where cattle have grazed, including if they grazed inside protected areas.

    Gibbs and her colleagues were able to reveal the tainted beef supply by tying animal movement data to property records that they then cross-referenced with maps of protected areas in the Brazilian states of Mato Grosso, Pará and Rondônia.
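    The paper's actual pipeline is GIS-based, but the cross-referencing step described above — linking animal movement records to property records, then to maps of protected areas — amounts to a pair of table joins. Every record and identifier below is hypothetical, invented only to illustrate the shape of the analysis.

```python
# Hypothetical records illustrating the cross-referencing step:
# movement data -> property records -> protected-area map.

# Each movement record: (animal_id, property_id where it grazed)
movements = [
    ("cow-1", "ranch-A"),
    ("cow-2", "ranch-B"),
    ("cow-3", "ranch-A"),
    ("cow-4", "ranch-C"),
]

# Property records: property_id -> region the property sits in
properties = {
    "ranch-A": "indigenous-land",
    "ranch-B": "unprotected",
    "ranch-C": "sustainable-use-area",
}

# Protected-area "map": which regions are protected, and how strictly
protected = {
    "indigenous-land": "strict",
    "sustainable-use-area": "sustainable-use",
}

# Join: flag every animal that grazed on a property inside a protected area
flagged = {
    animal: protected[properties[prop]]
    for animal, prop in movements
    if properties[prop] in protected
}

print(flagged)  # cow-2 grazed only on unprotected land and is not flagged
```

    Scaling this join up to millions of transport records, and repeating it across an animal's full movement history, is what lets direct purchases from protected areas be separated from indirect ones via fattening farms.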

    These three states form a crescent around the southern and eastern portions of the Amazon basin — a region where the expansion of agriculture is fueling deforestation and biodiversity loss at an accelerating pace. Historically, cattle ranching has been linked to about 80% of deforestation in the Amazon basin.

    The researchers found that between 2013 and 2018, more than 1 million cattle were sold directly from protected areas within the three states to slaughterhouses, despite meatpackers’ highly publicized commitments to avoid such purchases.  

    Another 2.2 million were indirectly linked to protected areas, meaning the animals spent a portion of their lives in protected zones before meatpackers purchased them. Often these cattle grazed in protected areas and then were transported to fattening farms outside of those areas before the meatpackers purchased them.

    While a majority of these cattle were tied to “sustainable-use” areas where ranching is sometimes permitted under certain conditions, more than a quarter, or around 900,000, were tied to regions that are strictly protected, including indigenous lands. Commercial grazing is illegal in these areas. Additionally, about half of the ranches in protected areas tied to commercial grazing were at least partially deforested in the last several years.

    The analysis of cattle movements ends in 2018 because it depends on Brazil’s previously transparent public recordkeeping.

    “At the start of 2019, this critical information became less available,” Gibbs says.

    Meanwhile, satellite imagery analyzed by the Brazilian space agency indicates that deforestation rates increased by nearly 50% from 2018 to 2020, with nearly three-quarters of the loss occurring in the states covered in this study.

    While the state of Pará continues to make cattle movement data within its borders publicly available, a more holistic accounting of illegal cattle grazing in the Amazon basin will remain elusive as long as Brazil’s federal government keeps a lid on the nationwide data, Gibbs says.

    This rollback in transparency hampers efforts by slaughterhouses to monitor their indirect suppliers, says Lisa Rausch, a co-author of the paper and scientist at UW–Madison’s Nelson Institute for Environmental Studies, where Gibbs holds a joint appointment.

    “Many slaughterhouses have gotten the message that being associated with deforestation is bad for their business, but they cannot address this issue without increased availability of information about their suppliers,” says Rausch.

    Similarly, public audits of slaughterhouse compliance that go beyond the state of Pará, currently the only state with audits, could help distinguish between companies that are trying to improve and those that are not, according to Rausch.

    “There is an appetite among retailers and investors — the parts of the value chain that slaughterhouses are responsive to — for more information about slaughterhouses’ performances, but right now that information is lacking,” she says.

    At the same time, the lack of public data could make it easier for slaughterhouses to continue breaking their commitments to avoid cattle pastured in protected areas. Gibbs says making cattle movement data transparent once again is critical for ensuring Brazilian slaughterhouses can continue to make progress toward their public commitments.

    “This is further evidence that we need more demand by investment banks, retailers and consumers for improved cattle traceability, transparency and accountability,” Gibbs says.

    ###

    University of Wisconsin-Madison

  • Increased Risk for Stillbirth Passed Down Through Fathers, Male Relatives

    Newswise — (Salt Lake City) – Newly published research is the first to show that stillbirth can be inherited and tends to be passed down through male members of the family. That risk preferentially comes from the mother’s or father’s male relatives—their brothers, fathers, grandfathers, uncles, or male cousins. But the odds of a couple losing a baby to stillbirth are even greater when the condition comes from the father’s side of the family.

    “Stillbirth is one of those problems that is so tragic and life-changing,” says Jessica Page, MD, an assistant professor in the Department of Obstetrics & Gynecology at University of Utah Health and Intermountain Healthcare. “It is especially frustrating when you don’t have a good answer for why it happens. This knowledge may give us the opportunity to change how we risk stratify people and reduce their risk through prevention.”

    The results, published in BJOG, come from the largest study of its kind to examine the health histories of families over multiple generations. Page is co-author on the study led by Tsegaselassie Workalemahu, PhD, and senior author Robert Silver, MD, both of whom are faculty in the Department of Obstetrics & Gynecology at U of U Health.

    “Studying pregnancy provides the opportunity to improve the health of future generations,” Workalemahu explains. He says that understanding patterns of stillbirth in families may help genetic counselors advise their patients about their risk. It is also an important step toward identifying specific genes that increase the risk of stillbirth, which could one day lead to better diagnosis and prevention.

    Searching for risk factors of stillbirth

    In the U.S., stillbirth is more common than many people realize, occurring in 1 in 165 births among babies that are 20 weeks or older. The risk goes up when the mother has certain health conditions such as gestational hypertension, preeclampsia, or diabetes, but the causes of as many as 1 in 3 cases still go unexplained.

    To understand other risk factors contributing to stillbirth, the scientists examined 9,404 stillbirth cases and 18,808 live birth controls between 1978 and 2019 that were represented in the Utah Population Database, a genealogical resource linked to health, birth, and death records. They found that 390 families had an excess number of stillbirths over multiple generations, suggesting there are genetic causes of stillbirth.

    By comparing incidence of stillbirth among first-, second-, and third-degree relatives of babies from affected families with the equivalent relatives from unaffected families, the scientists identified familial risk in related individuals. Their analysis revealed that an increased risk for stillbirth was passed down through male family members, a trend that had not been seen before.
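    The statistical machinery is not given in the release, but the basic case–control contrast it describes — stillbirth incidence among relatives of affected families versus the equivalent relatives of unaffected families — reduces to an odds ratio. The counts below are invented for illustration; the real study drew on 9,404 cases and 18,808 controls from the Utah Population Database.

```python
# Toy odds-ratio calculation for familial aggregation (hypothetical counts).

# Among first-degree relatives of stillbirth cases:
cases_exposed, cases_unexposed = 120, 880        # with / without a stillbirth
# Among first-degree relatives of live-birth controls:
controls_exposed, controls_unexposed = 150, 1850

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) = (a*d) / (b*c) for a 2x2 case-control table."""
    return (a * d) / (b * c)

or_first_degree = odds_ratio(cases_exposed, cases_unexposed,
                             controls_exposed, controls_unexposed)
print(f"odds ratio: {or_first_degree:.2f}")  # a ratio above 1 suggests
                                             # familial aggregation
```

    Repeating this comparison separately for maternal and paternal lineages is what allows a male-line pattern of inheritance to surface.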

    “We were able to evaluate multigenerational trends in fetal death as well as maternal and paternal lineages to increase our ability to detect a familial aggregation of stillbirth,” Workalemahu says. “Not many studies have examined inherited genetic risk for stillbirth because of a lack of data. The Utah Population Database (UPDB) allows for a more rigorous evaluation than has been possible in the past.”

    The UPDB population genetically resembles U.S. populations of northern European descent, so the findings may not generalize to the broader U.S. population. Future studies will need to determine whether the trends hold true among people of different races and ethnicities.

    “Stillbirth rate reduction has been slow in the U.S. and we think many stillbirths may be potentially preventable,” Page says. “This is motivating us to look for those genetic factors so we can achieve more dramatic rate reduction.”

    # # #

    The research was published as “Familial aggregation of stillbirth: A pedigree analysis of a matched case–control study” and was supported by grants from the National Institutes of Health.

    About University of Utah Health

    University of Utah Health provides leading-edge and compassionate care for a referral area that encompasses Idaho, Wyoming, Montana, and much of Nevada. A hub for health sciences research and education in the region, U of U Health has a $458 million research enterprise and trains the majority of Utah’s physicians and health care providers at its Colleges of Health, Nursing, and Pharmacy and Schools of Dentistry and Medicine. With more than 20,000 employees, the system includes 12 community clinics and five hospitals. U of U Health is recognized nationally as a transformative health care system and provider of world-class care.

     

     

    University of Utah Health

  • Pioneering research directly dates the earliest milk use in prehistoric Europe

    Newswise — A new study has shown milk was used by the first farmers from Central Europe in the early Neolithic era around 7,400 years ago, advancing humans’ ability to gain sustenance from milk and establishing the early foundations of the dairy industry.

    The international research, led by the University of Bristol and published today in Proceedings of the National Academy of Sciences (PNAS), deployed a pioneering technique to date dairy fat traces preserved in the walls of pottery vessels from the 54th century BC. This method targets fatty acids from animal fat residues, making it uniquely suited to pinpointing the introduction of new foodstuffs in prehistoric times.

    Lead author Dr Emmanuelle Casanova, who conducted the research while completing her PhD in archaeological chemistry at the University of Bristol, said: “It is amazing to be able to accurately date the very beginning of milk exploitation by humans in prehistoric times. The development of agropastoralism transformed prehistoric human diet by introducing new food commodities, such as milk and milk products, which continues to the present day.”

    These settlers, whose communities spread across south-eastern, eastern, and western Europe, included the earliest Neolithic farming groups in Central Europe, known as the Linearbandkeramik (LBK) culture. The findings of this research showed some of the very first settlers in the region were using milk at scale.

    This work was part of the European Research Council (ERC) NeoMilk project led by Professor Richard Evershed FRS of the School of Chemistry at the University of Bristol. His team analysed more than 4,300 pottery vessels from 70 LBK settlements for their food residues. The results revealed considerable variation in milk use across the region, with only 65 percent of sites presenting evidence of dairy fats in ceramic vessels, suggesting milk use, while common, was not universally adopted by these early farmers.

    Focussing on the sites and ceramics with dairy residues, the researchers produced around 30 new radiocarbon dates to chart the advent of dairy exploitation by LBK farmers. These new dates correspond to the earliest LBK settlements during the middle of the 6th Millennium BC.

    Co-lead author Professor Evershed said: “This research is hugely significant as it provides new insights into the timing of major changes in human food procurement practices, as they evolved across Europe. It provides clear evidence that dairy foods were in widespread circulation in the Early Neolithic, despite variations in the scale of activity.”

    The study was conducted in collaboration with chemists from the University of Bristol and archaeologists from the Universities of Gdańsk, Paris 1, Strasbourg, Leiden, and Adam Mickiewicz, the Dobó István Castle Museum, Historic England, and the LVR-State Service for Archaeological Heritage, which directed excavations of the studied sites.

    Notes to editors

    Dr Emmanuelle Casanova, currently a post-doctoral research fellow at the National Museum of Natural History in Paris, and Professor Richard Evershed are available for interview and advance copies of the study are available on request.

    University of Bristol

  • Statewide pandemic restrictions not related to psychological distress

    Newswise — Despite concerns that stay-at-home orders and other government efforts to stem the spread of COVID-19 at the start of the pandemic would cause lasting harm to people’s mental health, research published by the American Psychological Association found that state restrictions in the first six months of the pandemic were not related to worse mental health.

    Instead, people with personal exposure to the virus and those who consumed several hours of COVID-19-related media a day were the most likely to experience distress, loneliness and symptoms of traumatic stress.

    The findings were published in the journal Health Psychology.

    “For the past several decades, our team has been examining the psychological impact of large-scale disasters on the population. In February 2020, we realized that the novel coronavirus, as it was called at the time, was likely to have an effect on the U.S. population in the months to come,” said senior author Roxane Cohen Silver, PhD, a distinguished professor of psychological science, medicine and public health at the University of California Irvine. “We were particularly interested in the potential negative mental health effects of the associated restrictions placed on individuals throughout the pandemic, despite their potential for minimizing the spread of illness.”

    The researchers surveyed a nationally representative sample of more than 6,500 participants at the start of the pandemic from March 18 to April 18, 2020, then surveyed almost 5,600 of the same participants approximately six months later from Sept. 26 to Oct. 16 to measure how their mental health and exposure to the virus changed over the course of the pandemic.

    Respondents answered questions about symptoms of distress, loneliness and traumatic stress (acute and post-traumatic stress) they experienced in the prior week; whether they had contracted COVID-19; how many people they knew who had contracted the virus or died because of COVID-19; and how many hours on average they spent daily over the past week consuming pandemic-related news on traditional media, online news sources and social media platforms. The researchers then compared their responses with data about the spread of COVID-19 and government mitigation efforts, such as school closures and stay-at-home orders, in each respondent’s state.

    Researchers found that, overall, participants experienced more loneliness and symptoms of global distress, such as depression and anxiety, over the course of the six months, but their distress was not significantly related to state-level restrictions. Instead, personal experiences with COVID (degree of illness, losses), along with the amount of media about the pandemic to which individuals were exposed, were stronger predictors of psychological symptoms than state-level restrictions (mask mandates, closures, etc.) or case rates or death rates.

    Participants who responded that they had contracted COVID-19 in the first six months of the pandemic were the most likely to report poor mental health. Knowing someone who died because of COVID-19 or someone who had contracted COVID-19 were also significantly related to distress, loneliness, and symptoms of traumatic stress, according to Rebecca Thompson, PhD, the report’s first author and postdoctoral scholar at UC Irvine.

    “Because a strong predictor of distress in our study was personal bereavement – knowing someone who had been very sick or died was far more stressful than the presence of state-level restrictions – future waves of COVID-19 and other potential pandemics should be met by targeted interventions to prevent loss of life,” Thompson said. “Given this work, we would likely expect similar distress responses in future pandemics, highlighting the importance of public health initiatives to curb the spread of illness in our communities.”

    Greater hours of exposure to pandemic-related media coverage was also significantly related to increased symptoms of distress over time.

    “For the first year of the pandemic, it was all bad news all the time,” Silver said. “Repeated exposure to that content was unlikely to have psychological benefits.”

    In the case of future disasters or traumatic events, Silver recommends that individuals monitor the degree to which they immerse themselves in bad news (e.g., avoid “doomscrolling”) and consider specific times to check the news throughout the day.

    “One can stay informed without becoming overwhelmed with a constant onslaught of bad news,” said Silver.

    Article: “Psychological Responses to U.S. Statewide Restrictions and COVID-19 Exposures: A Longitudinal Study,” by Rebecca R. Thompson, PhD, Nickolas M. Jones, PhD, Apphia M. Freeman, BA, E. Alison Holman, PhD, Dana Rose Garfin, PhD, and Roxane Cohen Silver, PhD, University of California Irvine. Health Psychology, published Oct. 17, 2022.

    Contact: Roxane Cohen Silver, PhD, can be contacted at [email protected].

    American Psychological Association (APA)

  • Immune marker suPAR high in patients with heart failure, predicts risk and death

    Newswise — For years, cardiologists have zeroed in on a hormone called BNP as a gold standard to determine if patients with heart failure are at risk of severe illness or death. It is released by the heart when cardiac tissue stretches under pressure.

    While the B-type natriuretic peptide, or BNP, is a “downstream” indicator of heart failure, researchers have been looking for biomarkers focused on what leads to heart failure, such as myocardial injury or inflammation.

    A new study from Michigan Medicine and the Emory Clinical Cardiovascular Research Institute finds that levels of soluble urokinase plasminogen activator receptor, or suPAR, an immune protein known to play a role in kidney disease, are high in patients with heart failure and predict both heart failure and death. Beyond that, when suPAR is combined with BNP, the ability to predict such risks gets even stronger. The findings are published in the Journal of Cardiac Failure.

    “Several markers have been examined for heart failure and its adverse outcomes, but few have ever shown to be additive to BNP, or sometimes better than BNP, which is what we find here,” said Salim Hayek, M.D., an assistant professor of internal medicine and medical director of the University of Michigan Health Frankel Cardiovascular Center clinics.

    “BNP is a marker that varies dramatically depending on the patient’s fluid status. A more stable marker, such as suPAR, that is linked to the pathophysiology of heart failure could be more useful in identifying patients at higher, long-term risk of disease progression or death.”

    The research team used the Emory cardiovascular biobank to measure both plasma suPAR and BNP levels in over 3,400 participants undergoing heart imaging, following them for more than six years.

    Results revealed that suPAR levels were 17% higher in patients with heart failure than in those without, a pattern consistent across subgroups, including patients with ischemic or non-ischemic cardiomyopathy. Elevated levels of the protein carried more than two times the risk for all-cause death, cardiovascular death and hospitalization for heart failure.

    Additionally, patients without heart failure who had elevated suPAR levels were over 3.5 times more likely to develop the condition.

    “We see that suPAR has a major role in cardiovascular disease as a marker of immune activation, which likely reflects an upstream process of stress and inflammation that can cause heart failure,” said Hayek, who is also an assistant professor of cardiovascular and internal medicine at U-M Medical School.

    “SuPAR is also known to cause kidney disease – an important component of the pathophysiology of heart failure. This may explain why suPAR levels are strongly predictive of long-term outcomes in these patients.”

    A growing body of research links suPAR and poor outcomes for an array of conditions, from coronary artery disease to cancer and kidney dysfunction. The common pathway of disease in these conditions is a persistent activation of the immune system, which is reflected in high suPAR levels, says Hayek, whose research focuses on understanding the link between inflammation, cardiovascular and kidney diseases.

    “On the practical side, there is a potential for suPAR to be among the biomarkers that we measure to create a strategy for personalizing care for individual patients,” said senior author Arshed Ali Quyyumi, M.D., FACC, Director of the Emory Clinical Cardiovascular Institute and professor of medicine in the Division of Cardiology at Emory University School of Medicine.

    “For example, we could use it to differentiate between admitted patients who are at low and high risk of worsening heart failure. Then we could better allocate post-discharge resources to those at higher risk, which would lessen the cost burden of managing disease. There are many potential opportunities to use suPAR to improve care.”

    Additional authors include Ayman Samman Tahhan, M.D., Yi-an Ko, Ph.D., Ayman Alkhoder, M.D., Shuai Zheng, Ph.D., Ravila Bhimani, Joy Hartsfield, Jonathan Kim, M.D., Peter Wilson, M.D., and Leslee Shaw, Ph.D., all of Emory University; and Changli Wei, Ph.D., and Jochen Reiser, M.D., Ph.D., both of Rush University.

    Paper cited: “Soluble Urokinase Plasminogen Activator Receptor Levels and Outcomes in Patients with Heart Failure,” Journal of Cardiac Failure. DOI: 10.1016/j.cardfail.2022.08.010

    Michigan Medicine – University of Michigan

  • Goats and Sheep Battle in Climate Crisis

    Newswise — A new study from the Wildlife Conservation Society (WCS), Colorado State University, and the National Park Service reveals previously unknown high-altitude contests between two of America’s most sensational mammals – mountain goats and bighorn sheep – over access to minerals that were once locked away beneath glaciers and are being exposed as those glaciers vanish due to global warming.

    The study also points to other coveted resources such as desert water and shade in brutal environs from Africa, Asia, and North America; species in these extreme environments contest access to these biologically important resources but such interactions have not previously been catalogued by individual species, their size, or their status as ‘native’ or ‘exotic’. 

    “While humans continue to be justifiably concerned about the climate-induced havoc we’re wreaking planet-wide, much has remained unknown about species aggression among our mammalian brethren” said Joel Berger, the lead author and Senior Scientist for WCS and the Barbara Cox-Anthony Chair of Wildlife Conservation at Colorado State University. 

    The findings from this work were distilled from fragmentary information dating back some four decades and included species as different as marmots and baboons, oryx, elephants, and rhinos, along with wild (i.e., feral) horses, which displaced native pronghorn, mule deer, and elk from desert waters.

    The study revealed that mountain goats with their saber-like horns emerged victorious over bighorn sheep in more than 98 percent of contests at three sites along a 900-mile gradient of above-treeline mountainous habitat from Colorado to Alberta, Canada. While mountain goats are a native species in northwestern North America, they are exotic in Colorado and Wyoming, including the Greater Yellowstone Ecosystem, where they were introduced. Concerns there and elsewhere have focused on the extent to which goats may displace or outcompete native bighorns. Although it remains unknown if interactions to access resources have increased over time as our climate degrades, human activity has both increased and decreased access by wildlife to restricted resources such as minerals and water through road building and by the creation of artificial water sources. 

    The study appears in the journal Frontiers in Ecology and Evolution. Co-authors Mark Biel, chief biologist at Glacier National Park in Montana, and Forest Hayes, a PhD candidate at CSU, pointed out that high-elevation aggression between species, whether passive or active, highlights the importance of limited resources. It is well known that both bighorns and mountain goats will travel fifteen miles or more to access these limited resources. Desert elephants travel even more impressive distances – up to 40 miles – to drink from distant waterholes in Namibia.

“It’s been exciting to gather data in wind, snow, and cold on goats and sheep in both Glacier and at Mt. Evans, Colorado, which rises to more than 14,000 feet,” said Forest Hayes. “Our observations, both at close range and from distances of more than a mile, provided unique opportunities for detecting and understanding ecological interactions.” 

Berger, Biel, and Hayes suggest a possible role of climate change through groundwater depletion in desert areas but recognize that humans may be a more immediate threat, as water use by people increasingly jeopardizes the fragile biodiversity of these systems. “If we can’t offer species other than ourselves a chance, we’re just cooking our fates along similarly destructive paths,” offered Berger. 

Associated partners and funders for this project were Colorado State University, the Wildlife Conservation Society, the Glacier National Park Conservancy, the Denver Zoological Society, Denver Mountain Parks, and Frederick Dulude-de Broin at Laval University. 

    ###

    WCS (Wildlife Conservation Society)

    MISSION: WCS saves wildlife and wild places worldwide through science, conservation action, education, and inspiring people to value nature. To achieve our mission, WCS, based at the Bronx Zoo, harnesses the power of its Global Conservation Program in nearly 60 nations and in all the world’s oceans and its five wildlife parks in New York City, visited by 4 million people annually. WCS combines its expertise in the field, zoos, and aquarium to achieve its conservation mission. Visit: newsroom.wcs.org Follow: @WCSNewsroom. For more information: 347-840-1242.

    [ad_2]

    Wildlife Conservation Society

    Source link

  • Livers have the potential to function for more than 100 years

    Livers have the potential to function for more than 100 years

    [ad_1]

    Key takeaways 

    • Understanding the characteristics of livers that live to 100 could potentially expand the donor pool by using older liver donors more often. 
    • New surgical techniques and advances in immunosuppression lead to better outcomes for patients receiving a liver from an older donor.  
• Optimizing both donor and recipient factors allows for much greater longevity for certain livers. 

Newswise — SAN DIEGO: There is a small but growing subset of livers that have been transplanted and have a cumulative age of more than 100 years, according to researchers from University of Texas (UT) Southwestern Medical Center, Dallas, and TransMedics, Andover, Massachusetts. They studied these livers to identify the characteristics that make these organs so resilient, paving the way for the potential expanded use of older liver donors. The research team presented their findings at the Scientific Forum of the American College of Surgeons (ACS) Clinical Congress 2022. 

The researchers used the United Network for Organ Sharing (UNOS) STAR file to identify livers that had a cumulative age (donor age at transplant plus post-transplant survival) of at least 100 years. Of 253,406 livers transplanted between 1990 and 2022, 25 livers met the criteria of being centurion livers—those with a cumulative age of at least 100 years. 
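The selection criterion is simple arithmetic. A minimal sketch of the rule (illustrative only; the function names are hypothetical and this is not the UNOS STAR file schema or the authors' code):

```python
def cumulative_age(donor_age_years: float, graft_survival_years: float) -> float:
    """Cumulative age = donor age at transplant plus post-transplant survival."""
    return donor_age_years + graft_survival_years

def is_centurion(donor_age_years: float, graft_survival_years: float) -> bool:
    """A 'centurion' liver has a cumulative age of at least 100 years."""
    return cumulative_age(donor_age_years, graft_survival_years) >= 100.0

# Example: a liver from an 84.7-year-old donor surviving 16 more years qualifies;
# one from a 38.5-year-old donor surviving 20 years does not.
print(is_centurion(84.7, 16.0))   # True
print(is_centurion(38.5, 20.0))   # False
```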

    “We looked at pre-transplant survival—essentially, the donor’s age—as well as how long the liver went on to survive in the recipient,” said lead study author Yash Kadakia, a medical student at UT Southwestern Medical School. “We stratified out these remarkable livers with over 100-year survival and identified donor factors, recipient factors, and transplant factors involved in creating this unique combination where the liver was able to live to 100 years.” 

    Centurion livers came from older donors 

    For these centurion livers, the average donor age was significantly higher, 84.7 years compared with 38.5 years for non-centurion liver transplants. The researchers noted that for a liver to make it to 100, they expected to find an older average donor age as well as healthier donors. Notably, the donors from the centurion group had lower incidence of diabetes and fewer donor infections. 

    “We previously tended to shy away from using livers from older donors,” said study coauthor Christine S. Hwang, MD, FACS, associate professor of surgery, UT Southwestern Medical Center. “If we can sort out what is special amongst these donors, we could potentially get more available livers to be transplanted and have good outcomes.”  

There are 11,113 patients on the liver transplant waiting list as of September 22, 2022.* As Dr. Hwang noted, using older liver donors more often could potentially expand the liver donor pool. 

    Further study details 

    Centurion liver donors had lower transaminases, which are enzymes that play a key role in the liver. Elevated transaminases can cause problems in liver transplantation. Additionally, the recipients of centurion livers had significantly lower MELD scores (17 for the centurion group, 22 for the non-centurion group). A higher MELD score indicates that a patient is more urgently in need of a transplant.  

    “The donors were optimized, the recipients were optimized, and it takes that unique intersection of factors to result in a really good outcome,” Mr. Kadakia said. 

The researchers found that no grafts in the centurion group were lost to primary nonfunction or to vascular or biliary complications. There was notably no significant difference in rates of rejection at 12 months between the centurion and non-centurion groups. Further, the centurion group had significantly better allograft and patient survival.  

    “The existence of allografts over 100 years old is revealing of the dramatic resilience of the liver to senescent events,” the study authors concluded.  

    “Livers are incredibly resilient organs,” said Mr. Kadakia. “We’re using older donors, we have better surgical techniques, we have advances in immunosuppression, and we have better matching of donor and recipient factors. All these things allow us to have better outcomes.” 

    Study coauthors are Malcolm MacConmara, MBBCh, FACS; Madhukar S. Patel, MD; Jigesh A. Shah, DO; Steven I. Hanish, MD, FACS; and Parsia A. Vagefi, MD, FACS. 

    Citation: Kadakia Y, et al. Centurion Livers — Making It to 100 with A Transplant, Scientific Forum, American College of Surgeons Clinical Congress 2022. 

    ________________________  

* Data. Organ Procurement & Transplantation Network. Accessed September 23, 2022. Available at: https://optn.transplant.hrsa.gov/data/ 

    # # # 

    About the American College of Surgeons 

    The American College of Surgeons is a scientific and educational organization of surgeons that was founded in 1913 to raise the standards of surgical practice and improve the quality of care for all surgical patients. The College is dedicated to the ethical and competent practice of surgery. Its achievements have significantly influenced the course of scientific surgery in America and have established it as an important advocate for all surgical patients. The College has more than 84,000 members and is the largest organization of surgeons in the world. “FACS” designates that a surgeon is a Fellow of the American College of Surgeons. 

    [ad_2]

    American College of Surgeons (ACS)

    Source link

  • New palliative care screening tool for surgical ICU patients may facilitate decision-making processes, reduce burden on families, medical staff

    New palliative care screening tool for surgical ICU patients may facilitate decision-making processes, reduce burden on families, medical staff

    [ad_1]

    Key takeaways 

    • Critically ill patients in the Surgical Intensive Care Unit (SICU) may benefit from palliative care, focusing on quality of life, when aggressive medical interventions will not improve outcomes or extend life. 
    • Across hospital systems, models and access to palliative care vary; identifying patients can be difficult, often occurring late in SICU stays.  
• Using three key questions, a new screening tool, developed through a quality improvement process, helped the medical team identify, within seconds, which SICU patients may benefit from palliative care or goals of care consultations; all patients in the SICU could be screened in about 30 seconds. 

Newswise — SAN DIEGO: To aid in decision-making processes and increase awareness around palliative care in the Surgical Intensive Care Unit (SICU), a research team at the University of North Carolina at Chapel Hill (UNC-Chapel Hill) has developed a screening tool to identify, within seconds, patients who may benefit from palliative care consultations or goals of care discussions. Their research findings, presented at the Scientific Forum during the American College of Surgeons (ACS) Clinical Congress 2022, show that the screening tool successfully identified SICU patients who were later deemed candidates for palliative care by their medical team.  

    As a general and trauma surgeon, Trista Day Snyder Reid, MD, MPH, FACS, an assistant professor of surgery at UNC Health, and the study’s medical advisor, explained that she often witnesses medical teams and families make agonizing decisions for patients in the SICU. When aggressive medical interventions will not improve outcomes or extend life, palliative care treatment, which focuses on symptom management and supportive communication, may improve a patient’s quality of life. Unfortunately, a medical team may hesitate to collectively identify appropriate patients or may refer patients to palliative care late in their SICU stay, increasing the burden and stress on the patient and their families.  

    “One of the things that we found at our institution was sometimes we would involve palliative care, but it would happen way down the line when the patient had been in the SICU for a long time already,” Dr. Reid said. “We want palliative care discussions to be happening sooner. And even if we’re not involving palliative care, we want goals of care discussions to happen early so the family has a rapport with the medical team and understands that their family member is really sick.” 

    Across hospital systems, screening criteria and access to palliative care vary. Research has shown that offering palliative care consultations early in an ICU stay can improve quality of life and even reduce the lengths of stay in the ICU.1 However, integrating palliative care into hospital systems remains challenging due to a lack of resources and insufficient training, among other factors.2 

    Study details  

The UNC researchers initially developed a screening tool with 12 “yes/no” questions, drawing on input from SICU and palliative care physicians, nurses, and advanced practice providers. Fourth-year medical students at UNC-Chapel Hill completed the questionnaire after receiving feedback from the SICU medical team. Any question where the team answered “yes” was deemed a positive indicator that the patient would benefit from a palliative care consultation with a specialist or a goals of care discussion with the surgical team.  

    Three iterations of the screening tool were developed using the Plan-Do-Study-Act (PDSA) method before selecting three questions that the researchers found best correlated with a positive indicator: 

    1. Any team member (nursing, physician, pharmacist, etc.) expresses concern the patient may need palliative care. 
    2. ICU or surgical team answers ‘no’ to the question: “Would you be surprised if this patient died?” 
    3. Comorbidities: irreversible, progressive, or untreatable, severely impairing function. 

If the team answered “yes” to any of the three checklist questions, the researchers believed the patient would likely benefit from a palliative care consultation or goals of care discussion. 
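In code, the decision rule reduces to a logical OR over the three answers. Here is a minimal, hypothetical sketch (the question wording is paraphrased from this release; this is not the UNC team's actual implementation):

```python
def palliative_screen_positive(team_concern: bool,
                               not_surprised_if_died: bool,
                               severe_comorbidities: bool) -> bool:
    """A 'yes' to any of the three questions flags the patient for a
    palliative care consultation or a goals-of-care discussion."""
    return team_concern or not_surprised_if_died or severe_comorbidities

# A single 'yes' is enough to flag the patient.
print(palliative_screen_positive(False, True, False))   # True
print(palliative_screen_positive(False, False, False))  # False
```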

    Key findings 

    • Screening tools from 282 patients in the SICU were recorded.  
    • Of those 282 patients, the screening tool successfully identified 22 patients, all of whom eventually received referrals for palliative care. 
    • Each patient could be screened in about three seconds; all patients in the SICU could be screened in about 30 seconds. 
    • The tool did not increase the burden on the palliative care team at UNC Health. 

    “The hope is that by using this screening tool, decisions traditionally made very late in the patient’s SICU stay, could be made much earlier,” said lead author Victoria Herdman, MD. Dr. Herdman was a fourth-year UNC-Chapel Hill medical student at the time of the study and is now completing her residency in cardiothoracic surgery at the University of Kentucky College of Medicine. “Physicians, physician assistants, nurse practitioners and nurses know early on who needs palliative care but sometimes that’s hard to jump into early in the stay. This screening tool is a way to guide everyone into it easier.” 

    The research was performed at a single site, but the team hopes to evaluate the tool within other ICU populations at UNC Health using a Quality Improvement process, possibly using an electronic medical record system or implementing it during daily rounds discussions with only one question. The study team also plans future research to analyze patient demographics to determine which marginalized populations are often left out of palliative care discussions. Simply discussing palliative care more often and educating team members and families, they said, can make a difference. 

“I think as surgeons we tend to have a lot of ownership of our patients because they’re trusting us with their bodies. But I think that may also bias us a little bit in terms of palliative care. We hear the words ‘palliative care’ and may say, ‘Oh, no, no, no! We don’t want that. That’s like giving up on our patient,’” Dr. Reid said. “But the truth is, I think a lot of surgeons don’t truly understand the definition of palliative care—that the goal is to align what the patient wants with your treatments. Our long-term hope is to make discussions of palliative care more commonplace and to change the culture so that people feel comfortable involving palliative care, or at a minimum having a goals of care discussion, so that patients and their families understand all the possible treatment options.” 

    The study was supported by the UNC Institute for Healthcare Quality Improvement.  

    Study coauthors are Casey Olm-Shipman, MD, MS; Winnie Lau, MD; Kyle Lavin, MD; Marshall W. Fritz, BS; and Geoffrey Orme-Evans, JD, MPH. 

    Dr. Herdman and Dr. Reid have no disclosures to report.    

    Citation: Herdman V, et al. Surgical Intensive Care Unit (SICU) Palliative Care Screening-Tool: A Quality Improvement (QI) Project, Scientific Forum, American College of Surgeons Clinical Congress 2022. 

    ________________________  

    1Rotundo E, Braunreuther E, Dale M, et al. Retrospective Review of Trauma ICU Patients With and Without Palliative Care Intervention. J Am Coll Surg 2022; 235(2): 278-284. 

2Aslakson RA, Curtis JR, Nelson JE, et al. The changing role of palliative care in the ICU. Crit Care Med 2014; 42(11): 2418. 

    # # #  


    [ad_2]

    American College of Surgeons (ACS)

    Source link

  • How fluctuating oxygen levels may have accelerated animal evolution

    How fluctuating oxygen levels may have accelerated animal evolution

    [ad_1]

    Newswise — Oxygen levels in the Earth’s atmosphere are likely to have “fluctuated wildly” one billion years ago, creating conditions that could have accelerated the development of early animal life, according to new research.  

    Scientists believe atmospheric oxygen developed in three stages, starting with what is known as the Great Oxidation Event around two billion years ago, when oxygen first appeared in the atmosphere. The third stage, around 400 million years ago, saw atmospheric oxygen rise to levels that exist today.  

    What is uncertain is what happened during the second stage, in a time known as the Neoproterozoic Era, which started about one billion years ago and lasted for around 500 million years, during which time early forms of animal life emerged.   

The question scientists have tried to answer is whether anything extraordinary happened to oxygen levels in the Neoproterozoic Era that may have played a pivotal role in the early evolution of animals: did oxygen levels suddenly rise, or was there a gradual increase?  

Fossilised traces of early animals, known as Ediacaran biota (multi-celled organisms that required oxygen), have been found in sedimentary rocks that are 541 to 635 million years old.  

To try to answer the question, a research team at the University of Leeds, supported by the Universities of Lyon and Exeter and by UCL, used measurements of the different forms of carbon, or carbon isotopes, found in limestone rocks taken from shallow seas. Based on the isotope ratios of the different types of carbon found, the researchers were able to calculate photosynthesis levels that existed millions of years ago and infer atmospheric oxygen levels.  

    As a result of the calculations, they have been able to produce a record of oxygen levels in the atmosphere over the last 1.5 billion years, which tells us how much oxygen would have been diffusing into the ocean to support early marine life. 

    Dr Alex Krause, a biogeochemical modeller who completed his PhD in the School of Earth and Environment at Leeds and was the lead scientist on the project, said the findings give a new perspective on the way oxygen levels were changing on Earth.  

    He added: “The early Earth, for the first two billion years of its existence, was anoxic, devoid of atmospheric oxygen. Then oxygen levels started to rise, which is known as the Great Oxidation Event.   

    “Up until now, scientists had thought that after the Great Oxidation Event, oxygen levels were either low and then shot up just before we see the first animals evolve, or that oxygen levels were high for many millions of years before the animals came along. 

“But our study shows oxygen levels were far more dynamic. There was an oscillation between high and low levels of oxygen for a long time before early forms of animal life emerged. We are seeing periods where the ocean environment, where early animals lived, would have had abundant oxygen – and then periods where it does not.”  

    Dr Benjamin Mills, who leads the Earth Evolution Modelling Group at Leeds and supervised the project, said: “This periodic change in environmental conditions would have produced evolutionary pressures where some life forms may have become extinct and new ones could emerge.”  

    Dr Mills said the oxygenated periods expanded what are known as “habitable spaces” – parts of the ocean where oxygen levels would have been high enough to support early animal life forms.  

    He said: “It has been proposed in ecological theory that when you have a habitable space that is expanding and contracting, this can support rapid changes to the diversity of biological life.  

    “When oxygen levels decline, there is severe environmental pressure on some organisms which could drive extinctions. And when the oxygen-rich waters expand, the new space allows the survivors to rise to ecological dominance.  

    “These expanded habitable spaces would have lasted for millions of years, giving plenty of time for ecosystems to develop.”

    END

    [ad_2]

    University of Leeds

    Source link

  • Bumblebees have poor, but useful memories

    Bumblebees have poor, but useful memories

    [ad_1]

Newswise — Bumblebees don’t seem to keep memories of how sweet a flower was, but instead remember only whether it was sweeter than another flower, according to researchers at Queen Mary University of London, along with an international team of scientists. 

    In new research in the journal eLife, bumblebees were first trained on two flowers, learning that one flower was sweeter than a second flower. Later, they learned that a third flower was sweeter than a fourth flower. Then bumblebees were given the choice between two of the flowers they hadn’t seen together before, for example the second and third or the first and third.  

Over a series of experiments, the bumblebees’ preferences during the tests indicated that they retained only very basic ranking memories for the flowers. The bumblebees could remember only that a flower had been better or worse than its partner during the training phase. They couldn’t seem to remember for more than a few minutes how sweet or rewarding the flowers were on their own, or even how much sweeter they were compared to other flowers.  

    Previous research shows that we humans actually keep memories for both absolute information (e.g. how sweet something is) and comparisons [Palminteri and Lebreton, 2021]. Starlings, a bird native to Europe, and the only other animal for which this question has been examined, similarly use a combination of absolute and comparative information when remembering options [Pompilio and Kacelnik, 2010].
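The contrast between the two strategies can be illustrated with a toy example (this is an illustration, not the study's model). An agent that stores only within-pair rank labels can still pick the "better" flower in a novel pairing, without any record of absolute sweetness:

```python
# Toy illustration: training pairs were f1 > f2 and f3 > f4 in sweetness.
# Only within-pair rank labels are stored, mimicking the bees' apparent strategy.
rank_memory = {"f1": "better", "f2": "worse", "f3": "better", "f4": "worse"}

def choose(a: str, b: str) -> str:
    """Pick between two flowers using only remembered rank labels."""
    score = {"better": 1, "worse": 0}
    return a if score[rank_memory[a]] >= score[rank_memory[b]] else b

# Novel pairing drawn from different training pairs: the remembered "better"
# flower wins, even though absolute sweetness was never stored.
print(choose("f2", "f3"))  # f3
```

Such a rank-only memory is cheap to maintain, yet, as the study notes, it is often enough to find the most profitable flowers.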

    Ms Yonghe Zhou, co-lead author on the paper and currently a PhD student at Queen Mary University of London, says: “Our results reveal an intriguing divergent mechanism for how bumblebees retain and use information about options, compared to humans and birds.” 

Prof Fei Peng, senior author, currently at Southern Medical University, China, states: “It may be that the different strategies used by bumblebees and humans evolved because of their different diets. Maybe because bumblebees evolved to eat mostly flower nectar, they never needed to remember the details and could survive and thrive using simple comparisons.”  

Ms Zhou adds: “Despite what may seem to be a poor memory strategy, bumblebees do very well in finding the most profitable flowers. It’s fascinating to consider how different animals, in their own ecological niche, can be similarly successful using such different strategies.” 

    [ad_2]

    Queen Mary University of London

    Source link

  • New mitochondrial disease identified in identical twins

    New mitochondrial disease identified in identical twins

    [ad_1]

    Key Takeaways

    • In a set of identical twins, investigators have discovered a disease that affects the mitochondria, or the specialized compartments within cells that produce energy
    • Unlike in other mitochondrial diseases, mitochondria were hyperactive in these cases, so that even though the siblings had a high intake of calories, their body weights remained very low

    Newswise — BOSTON – In a set of identical twins, investigators led by researchers at Massachusetts General Hospital (MGH) and Children’s Hospital Philadelphia (CHOP) have identified a mitochondrial disease not previously reported.

    Diseases that affect mitochondria—specialized compartments within cells that contain their own DNA and convert the food we eat into energy needed to sustain life—typically interfere with mitochondrial function, but in these two patients, mitochondria were hyperactive.

    So, as reported in the New England Journal of Medicine, even though the siblings were eating far more calories than needed, their body weights remained very low.

    “This is a highly unusual mitochondrial phenotype. There are more than 300 rare genetic mitochondrial diseases, and nearly all of them are associated with an interruption of mitochondria,” says senior author Vamsi K. Mootha, MD, a Professor of Systems Biology and Medicine at MGH.

    Genome sequencing revealed a mutation in an enzyme called the mitochondrial ATP synthase, which is required by cells to generate the energy storage molecule ATP.

    Experiments indicated that this mutation creates “leaky” mitochondria that dissipate energy—a process called mitochondrial uncoupling.

    “We propose a new name—mitochondrial uncoupling syndrome—that presents with hypermetabolism and uncoupled mitochondria,” says Mootha. “These cases are very important for the field of rare disease genetics, mitochondrial biology, and metabolism.”

    The authors note that additional studies on mitochondrial uncoupling syndromes may provide insights into differences in energy metabolism in the general population.

“These twins represent the first disorder of mitochondrial uncoupling where we have been able to find the genetic cause,” said Rebecca D. Ganetzky, MD, an attending physician in the Mitochondrial Medicine program at CHOP and co-author of the study.

    “By discovering that pathogenic variants in the ATP synthase itself can cause mitochondrial uncoupling, these twins may be the first identified patients in a whole class of diseases of mitochondrial coupling.”

    Additional co-authors include Andrew L. Markhard, BA, Irene Yee, BS, Sheila Clever, MSc, Alan Cahill, PhD, Hardik Shah, MS, Zenon Grabarek, PhD, and Tsz-Leung To, PhD.

    This work was supported by the National Institutes of Health and others.

    About the Massachusetts General Hospital

Massachusetts General Hospital, founded in 1811, is the original and largest teaching hospital of Harvard Medical School. The Mass General Research Institute conducts the largest hospital-based research program in the nation, with annual research operations of more than $1 billion, and comprises more than 9,500 researchers working across more than 30 institutes, centers, and departments. In July 2022, Mass General was named #8 on the U.S. News & World Report list of “America’s Best Hospitals.” MGH is a founding member of the Mass General Brigham healthcare system.

    [ad_2]

    Massachusetts General Hospital

    Source link

  • Gene activity in a test tube

    Gene activity in a test tube

    [ad_1]

    Newswise — Pathological processes are usually characterised by altered gene activity in the cells affected. So, gaining an accurate picture of gene activity can provide the key to the development of new, targeted therapies. Whether these therapies then work as we would want them to can also be verified by looking at genes and the processes they initiate.

    It is no wonder that research is focused on methods and techniques that provide detailed information about the genetic activity of individual cells. A research team at the University of Würzburg (JMU) has now developed a technique that is a significant improvement on the methods used to date. Scientists from the Institute for Molecular Infection Biology (IMIB) and the Helmholtz Institute for RNA-based Infection Research (HIRI) were involved. They have presented the results of their work in the current issue of the journal Nucleic Acids Research.

    Analysis of a synthetic transcriptome

    “We have developed a technique that can be used to analyse the translational landscape of a fully customisable synthetic transcriptome, in other words one outside the cell,” is how Jörg Vogel explains the central outcome of the study. Vogel heads the Institute for Molecular Infection Biology at JMU and is also the Director of HIRI as well as the principal author of the study. The new technique has been given the scientific name INRI-seq, which is short for in vitro Ribo-seq.

    A transcriptome is a collection of all the genes that are active in a cell at a given point in time. It consists of the sum of the existing mRNA – the transporters of the blueprints for proteins from the cell nucleus to the ribosomes. Ribosomes are the “protein factories” of the cell; this is where translation of the nucleotide sequence of the mRNA into the amino acid sequence of a protein takes place.

    Refinement of comparable methods

    In principle, INRI-seq is a refinement of comparable methods that pursue the same goal but provide less accurate results or have other disadvantages. For example, RNA sequencing (RNA-seq) determines the concentration of mRNA in cells, allowing conclusions to be drawn about their active genes. However, the final protein abundance does not always correlate with the respective mRNA concentrations.

    A more accurate technique is ribosome profiling (Ribo-seq). Over the past ten years, this has become one of the main methods for measuring protein synthesis directly in a transcriptome-wide manner. “While Ribo-seq has greatly advanced the study of translation-related processes, the method has not been without limitations,” says Jörg Vogel.

    Numerous limitations of Ribo-seq

    For instance, it is a major challenge to detect weakly expressed genes with Ribo-seq, preventing many genes from being recorded in common study designs. Similarly, a Ribo-seq study of microbes from important ecological habitats such as the human gut is difficult since many of them cannot be cultured in the laboratory.

    A further shortcoming, as Vogel explains, is the fact that “on the mechanistic level, Ribo-seq-based studies of molecules affecting translation, such as special antibiotics, can be hampered by cellular responses”. Since Ribo-seq is performed on living cells, it can be difficult to distinguish between direct and indirect effects on translation.

    To overcome some of these limitations, the scientists from Würzburg have developed INRI-seq for the global study of translation in a cell-free environment. INRI-seq uses a commercially available in vitro translation system combined with an in vitro-synthesised, fully customisable transcriptome that allows better control of individual mRNA levels. “With INRI-seq, for example, it is no longer necessary for translation-modulating substances to traverse cellular membranes or to extract ribosomes from a large number of living cells,” says Vogel, outlining the advantages of the technique. “You also need a lot less of the often expensive substance that you want to study, such as a new antibiotic that can only be produced on a small scale. INRI-seq therefore also saves time and money.”

    Higher success rate in the experiment

    The research team demonstrated how well the system works using a synthetically generated transcriptome of the bacterium Escherichia coli. Compared to a technically similar study on living cells, INRI-seq identified almost four times more sites where translation processes are initiated, demonstrating its high sensitivity.

Therefore, Vogel and his team are in no doubt that “INRI-seq bears great potential as an alternative method for studying translation processes and thus also substances that can influence these processes”.

    [ad_2]

    University of Wurzburg

    Source link

  • Viral infections are less frequent but more severe in people with Down syndrome due to oscillating immune response

    Viral infections are less frequent but more severe in people with Down syndrome due to oscillating immune response

    [ad_1]

Newswise — Individuals with Down syndrome have less-frequent viral infections, but when present, these infections lead to more severe disease. New findings published on October 14 in the journal Immunity show that this is caused by increased expression of the receptor for the antiviral cytokine type I interferon (IFN-I), which is partially coded for by chromosome 21. Heightened IFN-I signaling leads to hyperactivity of the immune response initially, but the body overcorrects to reduce inflammation, leading to increased vulnerability later in the viral attack.

    “Usually too much inflammation means autoimmune disease, and immune suppression usually means susceptibility to infections,” says senior study author Dusan Bogunovic of the Icahn School of Medicine at Mount Sinai. “What is unusual is that individuals with Down syndrome are both inflamed and immunosuppressed, a paradox of sorts. Here, we discovered how this is possible.”

    Down syndrome is typically caused by triplication of chromosome 21. This syndrome affects multiple organ systems, causing a mixed clinical presentation that includes intellectual disability, developmental delays, congenital heart and gastrointestinal abnormalities, and Alzheimer’s disease in older individuals.

    Recently, it has become clear that atypical antiviral responses are another important feature of Down syndrome. Increased rates of hospitalization of people with Down syndrome have been documented for influenza A virus, respiratory syncytial virus, and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections.

    While people with Down syndrome show clear signs of immune disturbance, it has yet to be elucidated how a supernumerary chromosome 21 leads to dysregulation of viral defenses. To address this knowledge gap, the researchers compared fibroblasts and white blood cells derived from individuals with and without Down syndrome, at both the mRNA and protein levels. They focused on the potent antiviral cytokine IFN-I receptor subunits IFNAR1 and IFNAR2, which are located on chromosome 21.

    The researchers found that increased IFNAR2 expression was sufficient for the hypersensitivity to IFN-I observed in Down syndrome, independent of trisomy 21. But subsequently, the hyper-active IFN-I signaling cascade triggered excessive negative feedback via a protein called USP18, which is a potent IFNAR negative regulator. This process, in turn, suppressed further responses to IFN-I and antiviral responses. Taken together, the findings unveil oscillations of hyper- and hypo-responses to IFN-I in Down syndrome, predisposing to both lower incidence of viral disease and increased infection-related morbidity and mortality.
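
The feedback loop described above (receptor-driven hyperactivity followed by USP18-mediated overcorrection) can be caricatured in a few lines of code. This is a toy model with invented parameters, not the study's data or methodology: `receptor_gain` stands in for IFNAR2 dosage, and the quadratic feedback term mimics the excessive USP18 induction the authors report.

```python
def ifn_response(receptor_gain, usp18_level):
    """IFN-I signaling on viral challenge: amplified by receptor dose,
    damped by the negative regulator USP18."""
    return receptor_gain / (1.0 + usp18_level)

def challenge_twice(receptor_gain, feedback_strength=2.0):
    """Respond to two successive viral challenges. USP18 is induced
    disproportionately (here, quadratically) by the first response,
    mimicking excessive negative feedback."""
    first = ifn_response(receptor_gain, usp18_level=0.0)
    usp18 = feedback_strength * first ** 2  # overshooting feedback
    second = ifn_response(receptor_gain, usp18)
    return first, second

# Trisomy 21 modeled as a 1.5x receptor dose (an invented toy value).
tri_first, tri_second = challenge_twice(receptor_gain=1.5)
ctl_first, ctl_second = challenge_twice(receptor_gain=1.0)

print(tri_first > ctl_first)    # hyper-response to the first challenge
print(tri_second < ctl_second)  # hypo-response later, after overcorrection
```

With linear feedback, the high-receptor case would stay above the control at every step; the crossover appears only when USP18 induction grows faster than the signal itself, which is the overcorrection the study describes.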

    “We have a lot more to do to completely understand the complexities of the immune system in Down syndrome,” says first author Louise Malle of the Icahn School of Medicine at Mount Sinai. “We have here, in part, explained the susceptibility to severe viral disease, but this is only the tip of the iceberg.”

    ###

    [ad_2]

    Cell Press

    Source link

  • New Guidelines for Hamstring Lengthening and Guided Growth Surgery

    New Guidelines for Hamstring Lengthening and Guided Growth Surgery

    [ad_1]

    Newswise — For the past several years, Robert Kay, MD, has been part of an international group of experts with an ambitious goal: shaping the future of orthopedic care for ambulatory children with cerebral palsy.

     “There’s so much variation in how surgeons approach these complex patients,” says Dr. Kay, Chief of Orthopedic Surgery and Director of the Jackie and Gene Autry Orthopedic Center at Children’s Hospital Los Angeles. “Our goal is to provide consensus guidelines—not one person’s opinion—to aid surgeons in their clinical decision-making and help ensure the best care for each child.”

     The group, which includes 16 surgeons from 15 leading centers in North America, Europe and Australia, recently published guidelines for two types of surgeries designed to address knee problems in children with cerebral palsy: hamstring surgery and anterior distal femoral hemiepiphysiodesis (also called “guided growth”). Both papers were published in the Journal of Children’s Orthopaedics.

     Dr. Kay — who served as lead author on the hamstring surgery paper and a senior author on the guided growth publication — shares some of the group’s key recommendations.

     Hamstring surgery

    Hamstring lengthening is one of the most common surgeries to address crouch gait, but indications for this procedure have changed in recent years, Dr. Kay says. The panel agreed that:

    • A thorough gait evaluation is key. “You can be fooled just on physical exam in thinking a patient’s hamstrings are tight,” Dr. Kay says. “Computerized hamstring modeling data from a gait lab are really helpful in determining whether a child is a good candidate for this surgery.”
    • Repeat hamstring lengthening has inferior results. A child’s crouch gait can sometimes recur after an initial hamstring lengthening. But the panel agreed that a repeat lengthening is often not successful. For example, Dr. Kay’s team published a paper several years ago showing that while a first hamstring lengthening had a 71% success rate in straightening a child’s knees, that success fell to just 28% with a second procedure.
    • Hamstring lengthening isn’t enough to correct knee flexion contractures. “When contractures are greater than 10 degrees, an isolated hamstring lengthening doesn’t sufficiently address the problem,” Dr. Kay says. “You need to think about doing an osseous procedure around the knee as well.”
    • Hamstring transfer remains controversial. Although hamstring transfer surgery offers some theoretical advantages over hamstring lengthening—with the idea that it would lessen post-operative anterior pelvic tilt—the panel concluded that data do not yet show that it has better outcomes than hamstring lengthening.

     “Hamstring transfer is a much bigger, more complicated and painful surgery,” Dr. Kay says. “There may be a role for it down the road, but right now the data don’t support it.”

    • Surgeons should not lengthen lateral hamstrings. The group was unanimous in reporting that they “rarely, if ever” perform isolated lateral hamstring lengthening. There was also consensus that indications for combined medial and lateral hamstring lengthening are very limited in children who walk—those functioning at Gross Motor Function Classification System (GMFCS) levels I-III.
    • It’s important to minimize the number of surgical sessions for a child. Hamstring surgery is rarely performed in isolation. The group recommended simultaneously addressing contractures and bony deformities at other levels, as well as lever-arm dysfunction, in a single-event multilevel surgery.

     Guided growth surgery

    Anterior distal femoral hemiepiphysiodesis was first reported nearly 15 years ago as a method for straightening knee flexion contractures without cutting into the bone, but it has become much more common in recent years as surgical techniques have evolved. The group agreed that:

    • The surgery is effective in children with a wide range of walking abilities. Guided growth can be indicated in children at GMFCS levels I-IV—from those who walk in all settings without help to those who only walk short distances at home with assistance.

    One advantage is that it’s a low-risk procedure, with small incisions, Dr. Kay adds. Patients are able to start walking the same day as surgery.

    • It’s best for contractures between 10 and 20 degrees. The panel could not reach consensus on whether the procedure should be done for knee flexion contractures outside that range, but members agreed that it is not indicated for those larger than 30 degrees.
    • Percutaneous screws are preferred over anterior plates. This newer technique—which Dr. Kay helped to popularize in recent years—results in far less postoperative pain than the older screw-and-plate constructs.
    • Children should have two years of remaining growth. “The surgery can be a very good option for adolescents who are still growing,” Dr. Kay says. He adds that guided growth is rarely done in children under 10.

     

     What’s next?

    The panel is now working on a consensus paper for foot and ankle surgeries in ambulatory children with cerebral palsy. Long term, the group hopes to do more prospective data collection and potentially create a registry to better track and study outcomes data from these procedures.

     “It’s really important for surgeons to work together to continually optimize care for these patients,” Dr. Kay says. “Improving a child’s ability to walk has a major impact on the quality of life for that child and family. This is something that will affect them for the rest of their lives.”

    About Children’s Hospital Los Angeles

    Children’s Hospital Los Angeles is at the forefront of pediatric medicine, offering acclaimed care to children from across the world, the country and the greater Southern California region. Founded in 1901, Children’s Hospital Los Angeles is the largest provider of care for children in Los Angeles County, the No. 1 pediatric hospital in California and the Pacific region, and is consistently ranked in the top 10 in the nation on U.S. News & World Report’s Honor Roll of Best Children’s Hospitals. Clinical expertise spans the pediatric care continuum for newborns to young adults, from everyday preventive medicine to the most medically complex cases. Inclusive, compassionate, child- and family-friendly clinical care is led by physicians who are faculty members of the Keck School of Medicine of USC. Physicians translate the new discoveries, treatments and cures proven through the work of scientists in The Saban Research Institute of Children’s Hospital Los Angeles—among the top 10 children’s hospitals for National Institutes of Health funding—to bring answers to families faster. The hospital also is home to one of the largest training programs for pediatricians in the United States. To learn more, follow us on Facebook, Instagram, LinkedIn, YouTube and Twitter, and visit our blog at CHLA.org/blog

     

    [ad_2]

    Children’s Hospital Los Angeles

    Source link

  • Cancer deaths in Italy: environmental pollution plays an important role

    Cancer deaths in Italy: environmental pollution plays an important role

    [ad_1]

    Newswise — Today, cancer is the second leading cause of death in the world after cardiovascular disease. In recent decades of cancer research, lifestyle – especially physical inactivity, poor diet, obesity, alcohol abuse, and smoking – together with random or genetic factors, has been identified as a major cause of tumor development. Nevertheless, there is growing recognition that environmental pollution is also among the main factors driving the development of cancer.

    To further investigate this issue, a group of scholars from the University of Bologna, the University of Bari, and the CNR (National Research Council) used advanced artificial intelligence methods to analyze the relationship between cancer mortality, socio-economic factors, and environmental pollution sources in Italy on a regional and provincial level. The results and the analysis of the investigation have been published in the journal Science of the Total Environment, while the entire ten-year dataset with cancer mortality rates for all Italian municipalities has been published in the journal Nature Scientific Data, an open-access and user-friendly journal.

    “Contrary to what has been believed so far, our analysis showed that the distribution of cancer mortality among Italian citizens is neither random nor spatially well-defined,” explains Roberto Cazzolla Gatti, professor at the Department of Biological, Geological, and Environmental Sciences at the University of Bologna as well as first author of the study. “Instead, cancer mortality exceeds the national average especially where environmental pollution is higher, even if in these areas living habits are generally healthier.”

    Researchers took into consideration 35 environmental sources of pollution, such as industries, pesticides, incinerators, and motor vehicle traffic. Among these, they found that air quality is the factor most strongly associated with the average cancer mortality rate, followed by the presence of contaminated sites awaiting remediation, urban areas, motor vehicle density, and pesticides. Other specific sources of environmental pollution are relevant to specific tumour types: for instance, cultivated areas are associated with tumours of the gastrointestinal system, proximity to roads and steelworks with bladder tumours, and industrial activity in urban areas with prostate tumours and lymphomas.

    The Italian province with the highest cancer mortality rate in the decade 2009-2018 was Lodi, followed by Naples, Bergamo, Pavia, Sondrio, and Cremona. The highest-ranked province in central Italy is Viterbo (11th), followed by Rome (18th). In southern Italy, in addition to the province of Naples in second place, only Caserta (8th) is in the top 10 for cancer mortality. Anyone can check the ten-year mortality rate in their municipality by visiting the open-access dataset published by the authors of the study.
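
As a rough illustration of the kind of association the study reports, the sketch below ranks a handful of provinces by mortality rate and computes a Pearson correlation against a single pollution index. All figures are invented for illustration; the actual study analyzed 35 distinct pollution sources across all Italian provinces with machine-learning methods, not one composite index.

```python
# Invented illustrative figures: (province, cancer deaths per 1,000
# residents per year, composite pollution index 0-100). These are NOT
# the study's numbers; only the ordering of names echoes the article.
provinces = [
    ("Lodi",    3.6, 82),
    ("Naples",  3.5, 75),
    ("Bergamo", 3.4, 78),
    ("Pavia",   3.3, 70),
    ("Sondrio", 3.2, 68),
    ("Cremona", 3.1, 72),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rank provinces by mortality, as in the decade-long league table above.
ranked = sorted(provinces, key=lambda p: p[1], reverse=True)
print(ranked[0][0])  # Lodi tops this toy table, as it did the real one

# Correlate mortality with the pollution index across provinces.
r = pearson([p[1] for p in provinces], [p[2] for p in provinces])
print(round(r, 2))   # a clear positive association in the toy data
```

A correlation like this is only a starting point: the published analysis additionally controlled for socio-economic and lifestyle factors and attributed specific cancer types to specific pollution sources.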

    “Of course, these results do not question the fact that a healthier lifestyle helps to reduce the risk of cancer, just as they don’t question the efforts to get to the genetic basis that may favour the onset of cancer,” adds Cazzolla Gatti. “Our results, however, give us good reason to believe that living in a highly polluted area can cancel out the benefits that come with a healthy lifestyle and induce the development of cancers with a higher frequency.”

    Every year in Italy there are 400,000 new cases of malignant tumours, with an annual average of about three deaths per thousand people according to the Italian cancer registries. At both the national and regional levels, the analysis showed the relevance of the environment to the onset of tumours, even when compared with other socio-economic and lifestyle factors. Moreover, it was possible to determine at a provincial level which potential sources of pollution could cause an excess of cancer mortality compared with the national average, thus also highlighting the environmental factors most strongly associated with specific types of cancer.

    “From a global health perspective, following the approach known as One Health, it is now evident that the quality of life of our species is closely dependent on that of the environment in which we live and of the entire planet,” explains Cazzolla Gatti. “It is, therefore, necessary to give the highest priority not only to research for cancer cures but also to the reduction and prevention of environmental contamination. These are essential actions in the difficult fight against cancer onset. We need to know how to cure our planet to be able to avoid getting sick.”

    According to the study, Italian regions with a relatively high cancer mortality rate also have a relatively high degree of pollution, despite registering a relatively low frequency of the usual cancer risk factors such as being overweight, smoking, having a low income, high meat consumption, and low fruit and vegetable consumption. Furthermore, at a provincial level, significant spatial associations with specific pollution sources were found for malignant and benign tumours in general and for 16 of 23 specific cancer types, explaining more than half of the association between environment and cancer. This confirms that, in most cases, exposure to a contaminated environment has a significant impact on cancer mortality in Italy.

    “Data show good, albeit preliminary, evidence that a better lifestyle and greater attention to socio-economic and health issues can only partially reduce the risk of dying from cancer if the quality of the environment is overlooked,” explains Cazzolla Gatti. “This could explain why we have observed that people living in northern Italian regions – particularly in those located in the Po Valley, between the Lombardy and Veneto regions, which are highly industrialized areas – and exposed to very high levels of environmental pollution show a significant excess of cancer mortality compared to those who live in the central-southern regions (except for some other highly polluted areas, such as the so-called Terra dei Fuochi (Land of Fires) in the Campania region), even though they enjoy better health, have higher incomes, consume more food of plant origin than animal one, and have easier access to health care.”

    The entire ten-year database (2009-2018) on cancer mortality rates developed by researchers from ISTAT (Italian National Institute of Statistics) registers has been published with open access. In the database, 23 cancer macro-categories in Italy on a municipal, provincial, and regional level are considered. “We want to make a complete, up-to-date and ready-to-use data source on cancer mortality in Italy easily accessible to be consulted by interested bodies and local and national authorities, and to provide researchers useful data to carry out further studies,” Cazzolla Gatti concludes.

    The study was published in open access in the journal Science of the Total Environment under the title “The spatial association between environmental pollution and long-term cancer mortality in Italy”, while the entire dataset can be found in Nature Scientific Data. The authors of the study are Roberto Cazzolla Gatti (University of Bologna), Arianna Di Paola (CNR – National Research Council, Institute for BioEconomy), Alfonso Monaco (University of Bari ‘Aldo Moro’), Alena Velichevskaya (Tomsk State University, Russia), Nicola Amoroso (INFN – National Institute for Nuclear Physics, Bari Section), Roberto Bellotti (University of Bari ‘Aldo Moro’).

    [ad_2]

    Universita di Bologna

    Source link

  • Increased mitochondria and lipid turnover reduces risk for liver cancer

    Increased mitochondria and lipid turnover reduces risk for liver cancer

    [ad_1]

    Newswise — Alcohol consumption and hepatitis C viral infection are known risk factors for causing hepatocellular carcinoma, the most common form of liver cancer. Apart from these, obesity-associated nonalcoholic fatty liver disease has emerged as a major contributing factor for hepatocellular carcinoma in Western societies. The mechanisms underlying obesity-induced liver cancer are not well understood.

    A new study published this week in the journal Science Advances by University of Chicago researchers showed that in a mouse model, deletion of the BNIP3 protein resulted in decreased turnover of mitochondria and lipid droplets that led to the development of fatty liver and, ultimately, liver cancer. In human liver cancer, they also showed that loss of BNIP3 expression was linked to increased lipids and worse prognosis.

    “My lab is interested in mitochondria and the turnover of mitochondria in normal physiological settings, but also in cancer. In our studies, we work on a protein called BNIP3 that functions as a mitochondrial cargo receptor,” said Kay Macleod, PhD, senior author of the paper and a professor in the Ben May Department for Cancer Research at the University of Chicago Medicine Comprehensive Cancer Center. “Normally, this protein is significantly upregulated in the liver in response to fasting in mice, where it plays a role in protecting the liver from fat accumulation; however, eliminating this protein caused fatty liver. So we studied this further to understand the underlying mechanisms of how loss of BNIP3 leads to lipid accumulation in the normal liver as well as liver cancer.”

    To understand BNIP3’s involvement in preventing lipid accumulation and fatty liver, liver cancer was induced with chemical carcinogens in two sets of mice, one with BNIP3 intact and the other with BNIP3 deleted. The research team observed that tumors developed earlier and grew faster in BNIP3-deleted mice. Moreover, these tumors were full of fat, whereas tumors in BNIP3-intact mice were smaller and did not contain lipids. When the BNIP3-intact tumors were followed over time, they too eventually accumulated lipids, much like those of BNIP3-deleted mice. More interestingly, BNIP3 expression had been silenced in these later tumors, suggesting that loss of BNIP3 is selected for as liver cancer progresses.

    These findings were consistent with human liver cancer patient data that reported a better prognosis in patients who had BNIP3 and less lipids in their tumors compared to patients who had a very high expression of genes involved in lipid synthesis. The data again suggests that BNIP3 is acting to suppress tumorigenesis in hepatocellular carcinoma by preventing lipid accumulation. Then the next question is how does BNIP3 regulate lipids?

    When BNIP3 was reintroduced using lentivirus into hepatocellular carcinoma cells that lacked BNIP3, tumor cells stopped accumulating lipids, and they didn’t multiply or grow as fast as the ones that lacked BNIP3. The researchers showed that this was happening due to BNIP3 causing turnover of lipids with mitochondria in a degradative cellular process that they call “mitolipophagy.”

    Fatty liver is a growing health issue in Western societies because of diet. “Eating too much food and eating the wrong kind of food causes extra fat to be stored in the liver. When liver cells (hepatocytes) get overburdened with lipids, they undergo death, which leads to regenerative growth of liver cells. If this process is uninterrupted, it leads to hepatocellular carcinoma,” Macleod said.

    Next, her team asked how lipid droplet turnover limits hepatocellular carcinoma. Lipid droplets store a variety of lipids that are used to make cell membranes, and a growing or dividing cell requires much more membrane. By driving lipid droplet turnover, BNIP3 limits the phospholipids available in the cell, thereby restricting the raw material needed to build new cells.

    “BNIP3 is both preventing initiation of tumors and also limiting progression of tumors that are already formed by preventing them from growing faster or becoming more aggressive,” Macleod said.

    This work suggests that for a hepatocellular carcinoma to actually form, it has to get rid of BNIP3. This implies that if there were a way to prevent BNIP3 from being silenced, it could limit liver tumor growth or prevent fatty liver in the first place.

    “I think the most exciting thing is that BNIP3 does more than just promote the turnover of mitochondria. By promoting the interaction and functionality of the mitochondria, it is actually regulating other organelles in the cell,” Macleod said.

    Much attention has been paid to tumor metabolism and how to target it in cancer, but most of that work focuses on amino acid and glucose metabolism; how lipid metabolism is deregulated in cancer remains comparatively under-studied. The researchers’ future work focuses on understanding how BNIP3 is regulated in disease conditions as well as with age. They are also interested in a number of other genes that play important roles in the response to nutrient stress.

    The study, “Lipid droplet turnover at the lysosome inhibits growth of hepatocellular carcinoma in a BNIP3-dependent manner,” was supported by NIH R01 CA200310 and NIH T32 CA009594. Additional authors include Damian Berardi, Althea Bock-Hughes, Alexander Terry, Lauren Drake and Grazyna Bozek from the University of Chicago.

    [ad_2]

    University of Chicago Medical Center

    Source link

  • Study reveals new insights into how fast-moving glaciers may contribute to sea level rise

    Study reveals new insights into how fast-moving glaciers may contribute to sea level rise

    [ad_1]

    Newswise — Climate change is resulting in sea level rise as ice on land melts and oceans expand. How much and how fast sea levels will rise in the near future will depend, in part, on the frequency of glacier calving events. These occur when large chunks of ice detach from glaciers that terminate in the ocean (known as tidewater glaciers), and fall into coastal fjords as icebergs. The faster these glaciers flow over the ground towards the ocean, the more ice enters the ocean, increasing the rate of sea level rise.

    During the warmer summer months, the surface of Greenland’s glaciers can melt and form large lakes that may then drain through to the base of the glacier. Studies on the inland Greenland ice sheet have shown that this reduces friction between the ice and ground, causing the ice to slide faster for a few days. Up to now, however, it has been unclear whether such drainage events affect the flow speed of tidewater glaciers, and hence the rate of calving events.

    To investigate this, a research team from Oxford University’s Earth Sciences department, the Oxford University Mathematical Institute, and Columbia University used Global Positioning System (GPS) observations of the flow speed of Helheim Glacier—the largest single-glacier contributor to sea level rise in Greenland. The GPS captured a near perfect natural experiment: high-temporal-resolution observations of the glacier’s flow response to lake drainage.

    The results showed that Helheim Glacier behaved very differently from the inland ice sheet, which speeds up and slides faster downhill during lake drainage events. In contrast, Helheim Glacier exhibited a relatively small ‘pulse’ of movement: it sped up for a short time and then slowed, resulting in no net increase in motion.

    Using a numerical model of the subglacial drainage system, the researchers discovered that this behaviour was likely due to Helheim Glacier having an efficient system of channels and cavities along its bed. This allows the draining water to be evacuated quickly from the glacier bed without causing an increase in total net movement.

    Although this appears positive news in terms of sea level rise implications, the researchers suspected that a different effect may occur for glaciers without an efficient drainage system where surface melt is currently low but will increase in future due to climate change (such as in Antarctica).

    They ran a mathematical model based on the conditions of colder, Antarctic tidewater glaciers. The results indicated that lake drainages under these conditions would produce a net increase in glacier movement. This was largely due to the less efficient winter-time subglacial drainage system not being able to evacuate flood waters quickly. As of yet, however, there are no in situ observations of Antarctic tidewater glacier responses to lake drainage.

    The study calls into question some common approaches for inferring glacial drainage systems based on glacier velocities recorded using satellite observations (which are currently used in sea level rise models).

    Lead author Associate Professor Laura Stevens (Department of Earth Sciences, Oxford University) said: ‘What we’ve observed here at Helheim is that you can have a big input of meltwater into the drainage system during a lake drainage event, but that melt input doesn’t result in an appreciable change in glacier speed when you average over the week of the drainage event.’

    With the highest temporal resolution of satellite-derived glacier speeds currently available being roughly one week, lake drainage events like the one captured in the Helheim GPS data usually go unnoticed.
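
Both the compensated pulse and the weekly-averaging problem can be seen in a toy velocity series. The numbers here are invented (a constant background speed with a short drainage pulse), purely to illustrate why a speed-up followed by a compensating slow-down disappears at one-week resolution:

```python
# Toy hourly velocity series for a tidewater glacier: constant
# background flow, a 12-hour speed-up during lake drainage, then a
# 24-hour slow-down that exactly compensates it. All values invented.
HOURS_PER_WEEK = 7 * 24
background = 25.0 / 24.0  # metres per hour (an assumed ~25 m/day)

velocity = []
for hour in range(HOURS_PER_WEEK):
    v = background
    if 48 <= hour < 60:        # drainage pulse: +50% for 12 hours
        v *= 1.5
    elif 60 <= hour < 84:      # compensating slowdown: -25% for 24 hours
        v *= 0.75
    velocity.append(v)

# Net extra displacement over the week relative to background flow:
extra = sum(velocity) - background * HOURS_PER_WEEK
# What a once-a-week satellite-derived speed would report:
weekly_mean = sum(velocity) / HOURS_PER_WEEK

print(round(extra, 6))                     # ~0: the pulse nets to zero
print(round(weekly_mean / background, 6))  # ~1: same as background
```

Only a sub-daily record such as the GPS data resolves the pulse itself; any average over the full week recovers the background speed exactly, which is why such events go unnoticed in satellite products.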

    ‘These tidewater glaciers are tricky,’ Associate Professor Stevens added. ‘We have a lot more to learn about how meltwater drainage operates and modulates tidewater-glacier speeds before we can confidently model their future response to atmospheric and oceanic warming.’

     

    About the University of Oxford

    Oxford University has been placed number 1 in the Times Higher Education World University Rankings for the seventh year running, and number 2 in the QS World Rankings 2022. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer.

    Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe. Our work helps the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research alongside our personalised approach to teaching sparks imaginative and inventive insights and solutions.

    Through its research commercialisation arm, Oxford University Innovation, Oxford is the highest university patent filer in the UK and is ranked first in the UK for university spinouts, having created more than 200 new companies since 1988. Over a third of these companies have been created in the past three years. The university is a catalyst for prosperity in Oxfordshire and the United Kingdom, contributing £15.7 billion to the UK economy in 2018/19, and supports more than 28,000 full time jobs.

     

    [ad_2]

    University of Oxford

    Source link

  • ‘Smart plastic’ material is step forward toward soft, flexible robotics and electronics

    ‘Smart plastic’ material is step forward toward soft, flexible robotics and electronics

    [ad_1]

    Newswise — Inspired by living things from trees to shellfish, researchers at The University of Texas at Austin set out to create a plastic that, like many life forms, is hard and rigid in some places and soft and stretchy in others. Their success — a first, using only light and a catalyst to change properties such as hardness and elasticity in molecules of the same type — has brought about a new material that is 10 times as tough as natural rubber and could lead to more flexible electronics and robotics.

    The findings are published today in the journal Science.

    “This is the first material of its type,” said Zachariah Page, assistant professor of chemistry and corresponding author on the paper. “The ability to control crystallization, and therefore the physical properties of the material, with the application of light is potentially transformative for wearable electronics or actuators in soft robotics.”

    Scientists have long sought to mimic the properties of living structures, like skin and muscle, with synthetic materials. In living organisms, structures often combine attributes such as strength and flexibility with ease. When using a mix of different synthetic materials to mimic these attributes, materials often fail, coming apart and ripping at the junctures between different materials.

    “Oftentimes, when bringing materials together, particularly if they have very different mechanical properties, they want to come apart,” Page said. Page and his team were able to control and change the structure of a plastic-like material, using light to alter how firm or stretchy it would be.

    Chemists started with a monomer, a small molecule that binds with others like it to form the building blocks of larger structures called polymers, similar to the polymer found in the most commonly used plastic. After testing a dozen catalysts, they found one that, when added to their monomer and exposed to visible light, produced a semicrystalline polymer similar to those found in existing synthetic rubber. A harder and more rigid material formed in the areas the light touched, while the unlit areas retained their soft, stretchy properties.

    Because the substance is made of one material with different properties, it was stronger and could be stretched farther than most mixed materials.

    The reaction takes place at room temperature, the monomer and catalyst are commercially available, and researchers used inexpensive blue LEDs as the light source in the experiment. The reaction also takes less than an hour and minimizes use of any hazardous waste, which makes the process rapid, inexpensive, energy efficient and environmentally benign.

    The researchers will next seek to develop more objects with the material to continue to test its usability.

    “We are looking forward to exploring methods of applying this chemistry towards making 3D objects containing both hard and soft components,” said first author Adrian Rylski, a doctoral student at UT Austin.

    The team envisions the material could be used as a flexible foundation to anchor electronic components in medical devices or wearable tech. In robotics, strong and flexible materials are desirable to improve movement and durability.

    Henry L. Cater, Keldy S. Mason, Marshall J. Allen, Anthony J. Arrowood, Benny D. Freeman and Gabriel E. Sanoja of The University of Texas at Austin also contributed to the research.

    The research was funded by the National Science Foundation, the U.S. Department of Energy and the Robert A. Welch Foundation.

    [ad_2]

    University of Texas at Austin (UT Austin)

    Source link