ReportWire

Tag: Massachusetts Institute of Technology (MIT)

  • Chemists craft colorful organic molecules.

    Newswise — CAMBRIDGE, MA — Chains of fused carbon-containing rings have unique optoelectronic properties that make them useful as semiconductors. These chains, known as acenes, can also be tuned to emit different colors of light, which makes them good candidates for use in organic light-emitting diodes.

    The color of light emitted by an acene is determined by its length, but as the molecules become longer, they also become less stable, which has hindered their widespread use in light-emitting applications.

    MIT chemists have now come up with a way to make these molecules more stable, allowing them to synthesize acenes of varying lengths. Using their new approach, they were able to build molecules that emit red, orange, yellow, green, or blue light, which could make acenes easier to deploy in a variety of applications.

    “This class of molecules, despite their utility, have challenges in terms of their reactivity profile,” says Robert Gilliard, the Novartis Associate Professor of Chemistry at MIT and the senior author of the new study. “What we tried to address in this study first was the stability problem, and second, we wanted to make compounds where you could have a tunable range of light emission.”

    MIT research scientist Chun-Lin Deng is the lead author of the paper, which appears today in Nature Chemistry.

    Colorful molecules

    Acenes consist of benzene molecules — rings made of carbon and hydrogen — fused together in a linear fashion. Because they are rich in sharable electrons and can efficiently transport an electric charge, they have been used as semiconductors and field-effect transistors (transistors that use an electric field to control the flow of current in a semiconductor).

    Recent work has shown that acenes in which some of the carbon atoms are replaced, or “doped,” with boron and nitrogen have even more useful electronic properties. However, like traditional acenes, these molecules are unstable when exposed to air or light. Often, acenes have to be synthesized within a sealed container called a glovebox to protect them from air exposure, which can lead them to break down. The longer the acenes are, the more susceptible they are to unwanted reactions initiated by oxygen, water, or light.

    To try to make acenes more stable, Gilliard decided to use a ligand that his lab has previously worked with, known as carbodicarbenes. In a study published last year, they used this ligand to stabilize borafluorenium ions, organic compounds that can emit different colors of light in response to temperature changes.

    For this study, Gilliard and his co-authors developed a new synthesis that allowed them to add carbodicarbenes to acenes that are also doped with boron and nitrogen. With the addition of the new ligand, the acenes became positively charged, which improved their stability and also gave them unique electronic properties.

    Using this approach, the researchers created acenes that produce different colors, depending on their length and the types of chemical groups attached to the carbodicarbene. Until now, most boron- and nitrogen-doped acenes that had been synthesized could emit only blue light.

    “Red emission is very important for wide-ranging applications, including biological applications like imaging,” Gilliard says. “A lot of human tissue emits blue light, so it’s difficult to use blue-fluorescent probes for imaging, which is one of the many reasons why people are looking for red emitters.”


    Better stability

    Another important feature of these acenes is that they remain stable in both air and water. Boron-containing charged molecules with a low coordination number (meaning the central boron atom has few neighbors) are often highly unstable in water, so the acenes’ stability in water is notable and could make it feasible to use them for imaging and other medical applications.

    “One of the reasons why we’re excited about the class of compounds that we’re reporting in this paper is that they can be suspended in water. That opens up a wide range of possibilities,” Gilliard says.

    The researchers now plan to try incorporating different types of carbodicarbenes to see if they can create additional acenes with even better stability and quantum efficiency (a measure of how much light is emitted from the material).

    “We think it will be possible to make a lot of different derivatives that we haven’t even synthesized yet,” Gilliard says. “There are a lot of optoelectronic properties that can be dialed in that we have yet to explore, and we’re excited about that as well.”

    Gilliard also plans to work with Marc Baldo, an MIT professor of electrical engineering, to try incorporating the new acenes into a type of solar cell known as a singlet-fission-based solar cell. This type of solar cell can produce two electrons from one photon, making the cell much more efficient.

    These types of compounds could also be developed for use as light-emitting diodes for television and computer screens, Gilliard says. Organic light-emitting diodes are lighter and more flexible than traditional LEDs, produce brighter images, and consume less power.

    “We’re still in the very early stages of developing the specific applications, whether it’s organic semiconductors, light-emitting devices, or singlet-fission-based solar cells, but due to their stability, the device fabrication should be much smoother than typical for these kinds of compounds,” Gilliard says.

    ###

    The research was funded by the Arnold and Mabel Beckman Foundation and the National Science Foundation Major Research Instrumentation Program.


  • Research Reveals Deep Neural Networks’ Unique Perception of the World.

    Newswise — CAMBRIDGE, MA — Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.

    Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.

    When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.

    The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

    “This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”

    Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.

    Different perceptions

    In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems.

    It is believed that when the human sensory system performs this kind of classification, it learns to disregard features that aren’t relevant to an object’s core identity, such as how much light is shining on it or what angle it’s being viewed from. This is known as invariance, meaning that objects are perceived to be the same even if they show differences in those less important features.

    “Classically, the way that we have thought about sensory systems is that they build up invariances to all those sources of variation that different examples of the same thing can have,” Feather says. “An organism has to recognize that they’re the same thing even though they show up as very different sensory signals.”

    The researchers wondered if deep neural networks that are trained to perform classification tasks might develop similar invariances. To try to answer that question, they used these models to generate stimuli that produce the same kind of response within the model as an example stimulus given to the model by the researchers.

    They term these stimuli “model metamers,” reviving an idea from classical perception research whereby stimuli that are indistinguishable to a system can be used to diagnose its invariances. The concept of metamers was originally developed in the study of human perception to describe colors that look identical even though they are made up of different wavelengths of light.
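    The metamer idea can be made concrete with a toy model. In the study the models are deep nonlinear networks and metamers are found by iterative optimization, but even a single random linear stage shows how a many-to-one mapping admits inputs that evoke identical internal responses while looking nothing alike. All sizes and names below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model layer": a fixed random linear map from a 256-d input
# to a 64-d internal representation (a stand-in for real network features).
W = rng.normal(size=(64, 256))

def features(x):
    return W @ x

reference = rng.normal(size=256)   # stands in for a natural stimulus
target = features(reference)

# Because the map is many-to-one, any input of the form
#   x = reference + v,  with v in the null space of W,
# produces exactly the same internal response: a "model metamer".
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[64:]               # 192 input directions the model ignores
v = null_basis.T @ rng.normal(size=192) * 10.0
metamer = reference + v

same_response = np.allclose(features(metamer), target, atol=1e-8)
input_distance = np.linalg.norm(metamer - reference)
```

    In a deep network the "ignored directions" are not a clean linear null space, which is why the paper synthesizes metamers by gradient-based optimization instead; the unrecognizable images and sounds described below live in exactly these ignored directions.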

    To their surprise, the researchers found that most of the images and sounds produced in this way looked and sounded nothing like the examples that the models were originally given. Most of the images were a jumble of random-looking pixels, and the sounds resembled unintelligible noise. When the researchers showed the images to human observers, in most cases the humans did not classify the images synthesized by the models in the same category as the original target example.

    “They’re really not recognizable at all by humans. They don’t look or sound natural and they don’t have interpretable features that a person could use to classify an object or word,” Feather says.

    The findings suggest that the models have somehow developed their own invariances that are different from those found in human perceptual systems. This causes the models to perceive pairs of stimuli as being the same despite their being wildly different to a human.

    Idiosyncratic invariances

    The researchers found the same effect across many different vision and auditory models. However, each of these models appeared to develop its own unique invariances. When metamers from one model were shown to another model, the metamers were just as unrecognizable to the second model as they were to human observers.

    “The key inference from that is that these models seem to have what we call idiosyncratic invariances,” McDermott says. “They have learned to be invariant to these particular dimensions in the stimulus space, and it’s model-specific, so other models don’t have those same invariances.”

    The researchers also found that they could induce a model’s metamers to be more recognizable to humans by using an approach called adversarial training. This approach was originally developed to combat another limitation of object recognition models, which is that introducing tiny, almost imperceptible changes to an image can cause the model to misrecognize it.

    The researchers found that adversarial training, which involves including some of these slightly altered images in the training data, yielded models whose metamers were more recognizable to humans, though they were still not as recognizable as the original stimuli. This improvement appears to be independent of the training’s effect on the models’ ability to resist adversarial attacks, the researchers say.
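    The adversarial-training recipe described above can be sketched in miniature. This is not the paper's setup — just a logistic-regression toy showing the FGSM-style idea of training on inputs nudged a small step in the loss-increasing direction; the data, epsilon, and learning rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data: two Gaussian blobs in 20 dimensions.
n, d = 400, 20
X = np.concatenate([rng.normal(-1, 1, (n // 2, d)),
                    rng.normal(+1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

w = np.zeros(d)
eps, lr = 0.3, 0.1
for _ in range(200):
    # FGSM-style adversarial examples: perturb each input in the
    # direction that most increases the loss (sign of the input gradient).
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)        # d(loss)/d(input), per example
    X_adv = X + eps * np.sign(grad_x)
    # Train on the perturbed examples instead of the clean ones.
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / n

# Accuracy on the clean (unperturbed) data.
acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
```

    Why this might make metamers more human-like is, as Feather notes below, still an open question; the mechanism in the toy is simply that the decision boundary is pushed away from every training point.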

    “This particular form of training has a big effect, but we don’t really know why it has that effect,” Feather says. “That’s an area for future research.”

    Analyzing the metamers produced by computational models could be a useful tool to help evaluate how closely a computational model mimics the underlying organization of human sensory perception systems, the researchers say.

    “This is a behavioral test that you can run on a given model to see whether the invariances are shared between the model and human observers,” Feather says. “It could also be used to evaluate how idiosyncratic the invariances are within a given model, which could help uncover potential ways to improve our models in the future.”

    ###


  • Targeting a coronavirus ion channel could yield new Covid-19 drugs

    Newswise — CAMBRIDGE, MA — The genome of the SARS-CoV-2 virus encodes 29 proteins, one of which is an ion channel called E. This channel, which transports protons and calcium ions, induces infected cells to launch an inflammatory response that damages tissues and contributes to the symptoms of Covid-19.

    MIT chemists have now discovered the structure of the “open” state of this channel, which allows ions to flow through. This structure, combined with the “closed” state structure that was reported by the same lab in 2020, could help scientists figure out what triggers the channel to open and close. These structures could also guide researchers in developing antiviral drugs that block the channel and help prevent inflammation.

    “The E channel is an antiviral drug target. If you can stop the channel from sending calcium into the cytoplasm, then you have a way to reduce the cytotoxic effects of the virus,” says Mei Hong, an MIT professor of chemistry and the senior author of the study.

    MIT postdoc Joao Medeiros-Silva is the lead author of the study, which appears today in Science Advances. MIT postdocs Aurelio Dregni and Pu Duan and graduate student Noah Somberg are also authors of the paper.

    Open and closed

    Hong has extensive experience in studying the structures of proteins that are embedded in cell membranes, so when the Covid-19 pandemic began in 2020, she turned her attention to the coronavirus E channel.

    When SARS-CoV-2 infects cells, the E channel embeds itself inside the membrane that surrounds a cellular organelle called the ER-Golgi intermediate compartment (ERGIC). The ERGIC interior has a high concentration of protons and calcium ions, which the E channel transports out of ERGIC and into the cell cytoplasm. That influx of protons and calcium leads to the formation of multiprotein complexes called inflammasomes, which induce inflammation.

    To study membrane-embedded proteins such as ion channels, Hong has developed techniques that use nuclear magnetic resonance (NMR) spectroscopy to reveal the atomic-level structures of those proteins. In previous work, her lab used these techniques to discover the structure of an influenza protein known as the M2 proton channel, which, like the coronavirus E protein, consists of a bundle of several helical proteins.

    Early in the pandemic, Hong’s lab used NMR to analyze the structure of the coronavirus E channel at neutral pH. The resulting structure, reported in 2020, consisted of five helices tightly bundled together in what appeared to be the closed state of the channel.

    “By 2020, we had matured all the NMR technologies to solve the structure of this kind of alpha-helical bundles in the membrane, so we were able to solve the closed E structure in about six months,” Hong says.

    Once they established the closed structure, the researchers set out to determine the structure of the open state of the channel. To induce the channel to take the open conformation, the researchers exposed it to a more acidic environment, along with higher calcium ion levels. They found that under these conditions, the top opening of the channel (the part that would extend into the ERGIC) became wider and coated with water molecules. That coating of water makes the channel more inviting for ions to enter.

    That pore opening also contains amino acids with hydrophilic side chains that dangle from the channel and help to attract positively charged ions.

    The researchers also found that while the closed channel has a very narrow opening at the top and a broader opening at the bottom, the open state is the opposite: broader at the top and narrower at the bottom. The opening at the bottom also contains hydrophilic amino acids that help draw ions through a narrow “hydrophobic gate” in the middle of the channel, allowing the ions to eventually exit into the cytoplasm.

    Near the hydrophobic gate, the researchers also discovered a tight “belt,” which consists of three copies of phenylalanine, an amino acid with an aromatic side chain. Depending on how these phenylalanines are arranged, the side chains can either extend into the channel to block it or swing open to allow ions to pass through.

    “We think the side chain conformation of these three regularly spaced phenylalanine residues plays an important role in regulating the closed and open state,” Hong says.

    Viral targeting

    Previous research has shown that when SARS-CoV-2 viruses are mutated so that they don’t produce the E channel, the viruses generate much less inflammation and cause less damage to host cells.

    Working with collaborators at the University of California at San Francisco, Hong is now developing molecules that could bind to the E channel and prevent ions from traveling through it, in hopes of generating antiviral drugs that would reduce the inflammation produced by SARS-CoV-2.

    Her lab is also planning to investigate how mutations in subsequent variants of SARS-CoV-2 might affect the structure and function of the E channel. In the Omicron variant, one of the hydrophilic, or polar, amino acids found in the pore opening is mutated to a hydrophobic amino acid called isoleucine.

    “The E variant in Omicron is something we want to study next,” Hong says. “We can make a mutant and see how disruption of that polar network changes the structural and dynamical aspect of this protein.”

    ###

    The research was funded by the National Institutes of Health and the MIT School of Science Sloan Fund.


  • Implant Offers Diabetes Control

    Newswise — CAMBRIDGE, MA — One promising approach to treating Type 1 diabetes is implanting pancreatic islet cells that can produce insulin when needed, which can free patients from giving themselves frequent insulin injections. However, one major obstacle to this approach is that once the cells are implanted, they eventually run out of oxygen and stop producing insulin.

    To overcome that hurdle, MIT engineers have designed a new implantable device that not only carries hundreds of thousands of insulin-producing islet cells, but also has its own on-board oxygen factory, which generates oxygen by splitting water vapor found in the body.

    The researchers showed that when implanted into diabetic mice, this device could keep the mice’s blood glucose levels stable for at least a month. The researchers now hope to create a larger version of the device, about the size of a stick of chewing gum, that could eventually be tested in people with Type 1 diabetes.

    “You can think of this as a living medical device that is made from human cells that secrete insulin, along with an electronic life support-system. We’re excited by the progress so far, and we really are optimistic that this technology could end up helping patients,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study.

    While the researchers’ main focus is on diabetes treatment, they say that this kind of device could also be adapted to treat other diseases that require repeated delivery of therapeutic proteins.

    MIT research scientist Siddharth Krishnan is the lead author of the paper, which appears today in the Proceedings of the National Academy of Sciences. The research team also includes several other researchers from MIT, including Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, as well as researchers from Boston Children’s Hospital.

    Replacing injections

    Most patients with Type 1 diabetes have to monitor their blood glucose levels carefully and inject themselves with insulin at least once a day. However, this process doesn’t replicate the body’s natural ability to control blood glucose levels.

    “The vast majority of diabetics that are insulin-dependent are injecting themselves with insulin, and doing their very best, but they do not have healthy blood sugar levels,” Anderson says. “If you look at their blood sugar levels, even for people that are very dedicated to being careful, they just can’t match what a living pancreas can do.”

    A better alternative would be to transplant cells that produce insulin whenever they detect surges in the patient’s blood glucose levels. Some diabetes patients have received transplanted islet cells from human cadavers, which can achieve long-term control of diabetes; however, these patients have to take immunosuppressive drugs to prevent their body from rejecting the implanted cells.

    More recently, researchers have shown similar success with islet cells derived from stem cells, but patients who receive those cells also need to take immunosuppressive drugs.

    Another possibility, which could prevent the need for immunosuppressive drugs, is to encapsulate the transplanted cells within a flexible device that protects the cells from the immune system. However, finding a reliable oxygen supply for these encapsulated cells has proven challenging.

    Some experimental devices, including one that has been tested in clinical trials, feature an oxygen chamber that can supply the cells, but this chamber needs to be reloaded periodically. Other researchers have developed implants that include chemical reagents that can generate oxygen, but these also run out eventually.

    The MIT team took a different approach that could potentially generate oxygen indefinitely, by splitting water. This is done using a proton-exchange membrane — a technology originally deployed to generate hydrogen in fuel cells — located within the device. This membrane can split water vapor (found abundantly in the body) into hydrogen, which diffuses harmlessly away, and oxygen, which goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.

    A significant advantage of this approach is that it does not require any wires or implanted batteries. Splitting the water vapor requires a small voltage (about 2 volts), which is delivered wirelessly through resonant inductive coupling: a tuned magnetic coil located outside the body transmits power to a small, flexible antenna within the device. The only external hardware is that coil, which the researchers anticipate could be worn as a patch on the patient’s skin.
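    Faraday's laws make it easy to sanity-check what a water-splitting oxygen supply of this kind could deliver. The ~2-volt figure comes from the article; the 1 mA current below is a hypothetical number chosen purely for illustration, not a device specification:

```python
# Back-of-the-envelope electrolysis arithmetic.
FARADAY = 96485.0    # coulombs per mole of electrons
current_A = 1e-3     # assumed 1 mA through the proton-exchange membrane

# Water splitting: 2 H2O -> 2 H2 + O2, i.e. 4 electrons per O2 molecule.
o2_mol_per_s = current_A / (4 * FARADAY)
o2_umol_per_day = o2_mol_per_s * 86400 * 1e6

# Electrical power at the stated ~2 V operating voltage.
power_mW = 2.0 * current_A * 1e3

print(f"{o2_umol_per_day:.0f} umol O2/day at {power_mW:.1f} mW")
```

    At these assumed numbers the device would generate a couple hundred micromoles of oxygen per day for about 2 mW of wirelessly delivered power, which illustrates why such a small induced voltage can plausibly sustain an encapsulated cell population.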

    Drugs on demand

    After building their device, which is about the size of a U.S. quarter, the researchers tested it in diabetic mice. One group of mice received the device with the oxygen-generating, water-splitting membrane, while the other received a device that contained islet cells without any supplemental oxygen. The devices were implanted just under the skin, in mice with fully functional immune systems.

    The researchers found that mice implanted with the oxygen-generating device were able to maintain normal blood glucose levels, comparable to healthy animals. However, mice that received the nonoxygenated device became hyperglycemic (with elevated blood sugar) within about two weeks.

    Typically, when any kind of medical device is implanted in the body, attack by the immune system leads to a buildup of scar tissue called fibrosis, which can reduce the device’s effectiveness. This kind of scar tissue did form around the implants used in this study, but the device’s success in controlling blood glucose levels suggests that insulin was still able to diffuse out of the device, and glucose into it.

    This approach could also be used to deliver cells that produce other types of therapeutic proteins that need to be given over long periods of time. In this study, the researchers showed that the device could also keep alive cells that produce erythropoietin, a protein that stimulates red blood cell production.

    “We’re optimistic that it will be possible to make living medical devices that can reside in the body and produce drugs as needed,” Anderson says. “There are a variety of diseases where patients need to take proteins exogenously, sometimes very frequently. If we can replace the need for infusions every other week with a single implant that can act for a long time, I think that could really help a lot of patients.”

    The researchers now plan to adapt the device for testing in larger animals and eventually humans. For human use, they hope to develop an implant that would be about the size of a stick of chewing gum. They also plan to test whether the device can remain in the body for longer periods of time.

    “The materials we’ve used are inherently stable and long-lived, so I think that kind of long-term operation is within the realm of possibility, and that’s what we’re working on,” Krishnan says.

    “We are very excited about these findings, which we believe could provide a whole new way of someday treating diabetes and possibly other diseases,” Langer adds.

    ###

    The research was funded by JDRF, the Leona M. and Harry B. Helmsley Charitable Trust, and the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health.


  • Cutting-Edge Sensor Replicates Cell Membrane Functionalities

    Newswise — CAMBRIDGE, MA — Drawing inspiration from natural sensory systems, an MIT-led team has designed a novel sensor that could detect the same molecules that naturally occurring cell receptors can identify.

    In work that combines several new technologies, the researchers created a prototype sensor that can detect an immune molecule called CXCL12, down to tens or hundreds of parts per billion. This is an important first step to developing a system that could be used to perform routine screens for hard-to-diagnose cancers or metastatic tumors, or as a highly biomimetic electronic “nose,” the researchers say.

    “Our hope is to develop a simple device that lets you do at-home testing, with high specificity and sensitivity. The earlier you detect cancer, the better the treatment, so early diagnostics for cancer is one important area we want to go in,” says Shuguang Zhang, a principal research scientist in MIT’s Media Lab.

    The device draws inspiration from the membrane that surrounds all cells. Within such membranes are thousands of receptor proteins that detect molecules in the environment. The MIT team modified some of these proteins so that they could survive outside the membrane, and anchored them in a layer of crystallized proteins atop an array of graphene transistors. When the target molecule is detected in a sample, these transistors relay the information to a computer or smartphone.

    This type of sensor could potentially be adapted to analyze any bodily fluid, such as blood, tears, or saliva, the researchers say, and could screen for many different targets simultaneously, depending on the type of receptor proteins used.

    “We identify critical receptors from biological systems and anchor them onto a bioelectronic interface, allowing us to harvest all those biological signals and then transduce them into electrical outputs that can be analyzed and interpreted by machine-learning algorithms,” says Rui Qing, a former MIT research scientist who is now an associate professor at Shanghai Jiao Tong University.

    Qing and Mantian Xue PhD ’23 are the lead authors of the study, which appears today in Science Advances. Along with Zhang, Tomás Palacios, director of MIT’s Microsystems Laboratory and a professor of electrical engineering and computer science, and Uwe Sleytr, an emeritus professor at the Institute of Synthetic Bioarchitectures at the University of Natural Resources and Life Sciences in Vienna, are senior authors of the paper.

    Free from membranes

    Most current diagnostic sensors are based on either antibodies or aptamers (short strands of DNA or RNA) that can capture a particular target molecule from a fluid such as blood. However, both of these approaches have limitations: Aptamers can be easily broken down by body fluids, and manufacturing antibodies so that every batch is identical can be difficult.

    One alternative approach that scientists have explored is building sensors based on the receptor proteins found in cell membranes, which cells use to monitor and respond to their environment. The human genome encodes thousands of such receptors. However, these receptor proteins are difficult to work with because once removed from the cell membrane, they only maintain their structure if they are suspended in a detergent.

    In 2018, Zhang, Qing, and others reported a novel way to transform hydrophobic proteins into water-soluble proteins, by swapping out a few hydrophobic amino acids for hydrophilic amino acids. This approach is called the QTY code, after the letters representing the three hydrophilic amino acids — glutamine, threonine, and tyrosine — that take the place of hydrophobic amino acids leucine, isoleucine, valine, and phenylalanine.  
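    As a sequence operation, the QTY code described above is a simple residue substitution. The pairing below (Q for L, T for I and V, Y for F) follows the published QTY-code assignment of structurally similar hydrophobic/hydrophilic residues; in practice the substitution is applied only to transmembrane segments, and the fragment here is an invented toy, not a real receptor sequence:

```python
# QTY-code substitution: hydrophobic L, I, V, F -> hydrophilic Q, T, T, Y.
QTY_MAP = {"L": "Q", "I": "T", "V": "T", "F": "Y"}

def qty_code(sequence: str) -> str:
    """Replace hydrophobic residues with their QTY-code counterparts."""
    return "".join(QTY_MAP.get(aa, aa) for aa in sequence)

fragment = "MLIVFGAL"        # toy transmembrane-like fragment
print(qty_code(fragment))    # -> "MQTTYGAQ"
```

    The substituted residues have similar shapes and sizes to the originals, which is why the protein can keep its fold while becoming water-soluble.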

    “People have tried to use receptors for sensing for decades, but it is challenging for widespread use because receptors need detergent to keep them stable. The novelty of our approach is that we can make them water-soluble and can produce them in large quantities, inexpensively,” Zhang says.

    Zhang and Sleytr, who are longtime collaborators, decided to team up to try to attach water-soluble versions of receptor proteins to a surface, using bacterial proteins that Sleytr has studied for many years. These proteins, known as S-layer proteins, are found as the outermost surface layer of the cell envelope in many types of bacteria and archaea.

    When S-layer proteins are crystallized, they form coherent monomolecular arrays on a surface. Sleytr had previously shown that these proteins can be fused with other proteins such as antibodies or enzymes. For this study, the researchers, including senior scientist Andreas Breitwieser, also a co-author of the paper, used S-layer proteins to create a very dense, immobilized sheet of a water-soluble version of a receptor protein called CXCR4. This receptor binds to a target molecule called CXCL12, which plays important roles in several human diseases including cancer, and to an HIV coat glycoprotein, which is responsible for virus entry into human cells.

    “We use these S-layer systems to allow all these functional molecules to attach to a surface in a monomolecular array, in a very well-defined distribution and orientation,” Sleytr says. “It’s like a chessboard where you can arrange different pieces in a very precise manner.”

    The researchers named their sensing technology RESENSA (Receptor S-layer Electrical Nano Sensing Array).

    Sensitivity with biomimicry

    These crystallized S-layers can be deposited onto nearly any surface. For this application, the researchers attached the S-layer to a chip with graphene-based transistor arrays that Palacios’ lab had previously developed. The single-atomic thickness of the graphene transistors makes them ideal for the development of highly sensitive detectors.

    Working in Palacios’ lab, Xue adapted the chip so that it could be coated with a dual layer of proteins — crystallized S-layer proteins attached to water-soluble receptor proteins. When a target molecule from the sample binds to a receptor protein, the charge of the target changes the electrical properties of the graphene in a way that can be easily quantified and transmitted to a computer or smartphone connected to the chip.

    “We chose graphene as the transducer material because it has excellent electrical properties, meaning it can better translate those signals. It has the highest surface-to-volume ratio because it’s a sheet of carbon atoms, so every change on the surface, caused by the protein binding events, translates directly to the whole bulk of the material,” Xue says.

    The graphene transistor chip can be coated with S-layer-receptor proteins at a density of 1 trillion receptors per square centimeter, with the receptors oriented upward. This allows the chip to take advantage of the maximum sensitivity offered by the receptor proteins, within the clinically relevant range for target analytes in the human body. The array chip integrates more than 200 devices, providing redundancy in signal detection that helps ensure reliable measurements even for rare molecules, such as those that could reveal the presence of an early-stage tumor or the onset of Alzheimer’s disease, the researchers say.

    Thanks to the QTY code, it is possible to modify naturally occurring receptor proteins that could then be used, the researchers say, to generate an array of sensors on a single chip to screen virtually any molecule that cells can detect. “What we are aiming to do is develop the basic technology to enable a future portable device that we can integrate with cell phones and computers, so that you can do a test at home and quickly find out whether you should go to the doctor,” Qing says.

    ###

    Massachusetts Institute of Technology (MIT)

  • New AI system can create proteins to fit design targets

    Newswise — MIT researchers are using artificial intelligence to design new proteins that go beyond those found in nature.

    They developed machine-learning algorithms that can generate proteins with specific structural features, which could be used to make materials that have certain mechanical properties, like stiffness or elasticity. Such biologically inspired materials could potentially replace materials made from petroleum or ceramics, but with a much smaller carbon footprint.

    The researchers from MIT, the MIT-IBM Watson AI Lab, and Tufts University employed a generative model, which is the same type of machine-learning model architecture used in AI systems like DALL-E 2. But instead of using it to generate realistic images from natural language prompts, like DALL-E 2 does, they adapted the model architecture so it could predict amino acid sequences of proteins that achieve specific structural objectives.

    In a paper to be published in Chem, the researchers demonstrate how these models can generate realistic, yet novel, proteins. The models, which learn biochemical relationships that control how proteins form, can produce new proteins that could enable unique applications, says senior author Markus Buehler, the Jerry McAfee Professor in Engineering and professor of civil and environmental engineering and of mechanical engineering.

    For instance, this tool could be used to develop protein-inspired food coatings, which could keep produce fresh longer while being safe for humans to eat. And the models can generate millions of proteins in a few days, quickly giving scientists a portfolio of new ideas to explore, he adds. 

    “When you think about designing proteins nature has not discovered yet, it is such a huge design space that you can’t just sort it out with a pencil and paper. You have to figure out the language of life, the way amino acids are encoded by DNA and then come together to form protein structures. Before we had deep learning, we really couldn’t do this,” says Buehler, who is also a member of the MIT-IBM Watson AI Lab.

    Joining Buehler on the paper are lead author Bo Ni, a postdoc in Buehler’s Laboratory for Atomistic and Molecular Mechanics; and David Kaplan, the Stern Family Professor of Engineering and professor of bioengineering at Tufts.

    Adapting new tools for the task

    Proteins are formed by chains of amino acids, folded together in 3D patterns. The sequence of amino acids determines the mechanical properties of the protein. While scientists have identified thousands of proteins created through evolution, they estimate that an enormous number of amino acid sequences remain undiscovered.

    To streamline protein discovery, researchers have recently developed deep learning models that can predict the 3D structure of a protein from its amino acid sequence. But the inverse problem — predicting an amino acid sequence that folds into a structure meeting design targets — has proven even more challenging.

    A recent advance in machine learning enabled Buehler and his colleagues to tackle this thorny challenge: attention-based diffusion models.

    Attention-based models can learn very long-range relationships, which is key to developing proteins because one mutation in a long amino acid sequence can make or break the entire design, Buehler says. A diffusion model learns to generate new data through a process that involves adding noise to training data, then learning to recover the data by removing the noise. Diffusion models are often more effective than other models at generating high-quality, realistic data, and their output can be conditioned to meet a set of design targets.

    The researchers used this architecture to build two machine-learning models that can predict a variety of new amino acid sequences which form proteins that meet structural design targets.

    “In the biomedical industry, you might not want a protein that is completely unknown because then you don’t know its properties. But in some applications, you might want a brand-new protein that is similar to one found in nature, but does something different. We can generate a spectrum with these models, which we control by tuning certain knobs,” Buehler says.

    Common folding patterns of amino acids, known as secondary structures, produce different mechanical properties. For instance, proteins with alpha helix structures yield stretchy materials while those with beta sheet structures yield rigid materials. Combining alpha helices and beta sheets can create materials that are stretchy and strong, like silks.

    The researchers developed two models: one that operates on overall structural properties of the protein, and one that operates at the amino acid level. Both generate proteins by combining secondary structures to meet the requested design. For the first model, a user inputs a desired percentage of each structure (40 percent alpha helix and 60 percent beta sheet, for instance), and the model generates sequences that meet those targets. For the second model, the scientist also specifies the order of the secondary structures, which gives much finer-grained control.

    The models are connected to an algorithm that predicts protein folding, which the researchers use to determine the protein’s 3D structure. Then they calculate its resulting properties and check those against the design specifications.
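    The generate-fold-check loop described above can be sketched in a few lines. Everything here is illustrative: the function names, the toy per-residue structure labels, and the acceptance tolerance are assumptions rather than the authors' actual pipeline, and a real workflow would run a folding predictor on each generated amino acid sequence.

    ```python
    # Toy sketch of the generate -> fold -> check loop (illustrative only).

    def secondary_structure_fractions(labels):
        """Fraction of residues labeled helix ('H') and sheet ('E')."""
        n = len(labels)
        return {"helix": labels.count("H") / n, "sheet": labels.count("E") / n}

    def meets_targets(fractions, targets, tol=0.05):
        """Accept a design if every fraction is within `tol` of its target."""
        return all(abs(fractions[k] - targets[k]) <= tol for k in targets)

    # Pretend the generative model proposed these per-residue labels; in
    # practice they would come from a folding predictor run on each
    # generated sequence.
    candidates = [
        "HHHHHHHHHHEEEEEEEEEE",   # 50% helix, 50% sheet
        "HHHHHHHHEEEEEEEEEEEE",   # 40% helix, 60% sheet
    ]

    targets = {"helix": 0.40, "sheet": 0.60}
    accepted = [c for c in candidates
                if meets_targets(secondary_structure_fractions(c), targets)]
    print(accepted)  # only the 40/60 candidate passes
    ```

    The same screen-against-specification step generalizes to mechanical properties: compute them from the predicted structure, then keep only the designs inside tolerance.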

    Realistic yet novel designs

    They tested their models by comparing the new proteins to known proteins with similar structural properties. Many overlapped with existing amino acid sequences, by about 50 to 60 percent in most cases, but the models also produced some entirely new sequences. The level of similarity suggests that many of the generated proteins are synthesizable, Buehler adds.
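    A minimal sketch of the kind of overlap measure quoted above: the fraction of aligned positions where a generated sequence matches a known one. This assumes equal-length, pre-aligned sequences and invented toy data; a real comparison would use a proper alignment tool.

    ```python
    # Percent identity between two pre-aligned sequences (sketch).

    def percent_identity(seq_a, seq_b):
        assert len(seq_a) == len(seq_b), "sketch assumes pre-aligned sequences"
        matches = sum(a == b for a, b in zip(seq_a, seq_b))
        return 100.0 * matches / len(seq_a)

    generated = "MKTAYIAKQR"   # toy generated sequence
    known     = "MKTAYIGKQL"   # toy natural sequence
    print(percent_identity(generated, known))  # 80.0
    ```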

    To ensure the predicted proteins are reasonable, the researchers tried to trick the models by inputting physically impossible design targets. They were impressed to see that, instead of producing improbable proteins, the models generated the closest synthesizable solution.

    “The learning algorithm can pick up the hidden relationships in nature. This gives us confidence to say that whatever comes out of our model is very likely to be realistic,” Ni says.

    Next, the researchers plan to experimentally validate some of the new protein designs by making them in a lab. They also want to continue augmenting and refining the models so they can develop amino acid sequences that meet more criteria, such as biological functions. 

    “For the applications we are interested in, like sustainability, medicine, food, health, and materials design, we are going to need to go beyond what nature has done. Here is a new design tool that we can use to create potential solutions that might help us solve some of the really pressing societal issues we are facing,” Buehler says.

    This research was supported, in part, by the MIT-IBM Watson AI Lab, the U.S. Department of Agriculture, the U.S. Department of Energy, the Army Research Office, the National Institutes of Health, and the Office of Naval Research.

    ###

    Written by Adam Zewe, MIT News Office

    Massachusetts Institute of Technology (MIT)

  • Tackling counterfeit seeds with “unclonable” labels

    Newswise — Average crop yields in Africa are consistently far below their potential, and one significant reason is the prevalence of counterfeit seeds, whose germination rates are far lower than those of genuine ones. The World Bank estimates that as much as half of all seeds sold in some African countries are fake, which could help to account for crop production that is far below potential.

    There have been many attempts to prevent this counterfeiting through tracking labels, but none have proved effective; among other issues, such labels have been vulnerable to hacking because of the deterministic nature of their encoding systems. But now, a team of MIT researchers has come up with a kind of tiny, biodegradable tag that can be applied directly to the seeds themselves, and that provides a unique randomly created code that cannot be duplicated.

    The new system, which uses minuscule dots of silk-based material, each containing a unique combination of different chemical signatures, is described today in the journal Science Advances in a paper by MIT’s dean of engineering Anantha Chandrakasan, professor of civil and environmental engineering Benedetto Marelli, postdoc Hui Sun, and graduate student Saurav Maji.

    The problem of counterfeiting is an enormous one globally, the researchers point out, affecting everything from drugs to luxury goods, and many different systems have been developed to try to combat this. But there has been less attention to the problem in the area of agriculture, even though the consequences can be severe. In sub-Saharan Africa, for example, the World Bank estimates that counterfeit seeds are a significant factor in crop yields that average less than one-fifth of the potential for maize, and less than one-third for rice. 

    Marelli explains that a key to the new system is creating a randomly produced physical object whose exact composition is virtually impossible to duplicate. The labels they create “leverage randomness and uncertainty in the process of application, to generate unique signature features that can be read, and that cannot be replicated,” he says.

    What they’re dealing with, Sun adds, “is the very old job of trying, basically, not to get your stuff stolen. And you can try as much as you can, but eventually somebody is always smart enough to figure out how to do it, so nothing is really unbreakable. But the idea is, it’s almost impossible, if not impossible, to replicate it, or it takes so much effort that it’s not worth it anymore.”

    The idea of an “unclonable” code was originally developed as a way of protecting the authenticity of computer chips, explains Chandrakasan, who is the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In integrated circuits, individual transistors have slightly different properties, known as device variations,” he explains, “and you could then use that variability and combine it with higher-level circuits to create a unique ID for the device. And once you have that, then you can use that unique ID as a part of a security protocol. Something like transistor variability is hard to replicate from device to device, so that’s what gives it its uniqueness, versus storing a particular fixed ID.” The concept is based on what are known as physically unclonable functions, or PUFs.

    The team decided to try to apply that PUF principle to the problem of fake seeds, and the use of silk proteins was a natural choice because the material is not only harmless to the environment but also classified by the Food and Drug Administration in the “generally recognized as safe” category, so it requires no special approval for use on food products.

    “You could coat it on top of seeds,” Maji says, “and if you synthesize silk in a certain way, it will also have natural random variations. So that’s the idea, that every seed or every bag could have a unique signature.”

    Developing effective secure-system solutions has long been one of Chandrakasan’s specialties, while Marelli has spent many years developing systems for applying silk coatings to a variety of fruits, vegetables, and seeds, so their collaboration was a natural fit for developing such a silk-based coding system with enhanced security.

    “The challenge was what type of form factor to give to silk,” Sun says, “so that it can be fabricated very easily.” They developed a simple drop-casting approach that produces tags less than one-tenth of an inch in diameter. The second challenge was to develop “a way where we can read the uniqueness in a very high-throughput and easy way.”

    For the unique silk-based codes, Marelli says, “eventually we found a way to add a color to these microparticles so that they assemble in random structures.” The resulting unique patterns can be read out not only by a spectrograph or a portable microscope, but even by an ordinary cellphone camera with a macro lens. This image can be processed locally to generate the PUF code and then sent to the cloud and compared with a secure database to ensure the authenticity of the product. “It’s random so that people cannot easily replicate it,” says Sun. “People cannot predict it without measuring it.”
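    The read-and-verify flow above can be sketched as follows. The code values, the database layout, and the Hamming-distance tolerance are illustrative assumptions; tolerating a few flipped bits reflects the general practice with noisy PUF readouts, not necessarily the exact protocol of this paper.

    ```python
    # Sketch: compare a locally extracted PUF code against enrolled codes,
    # accepting readings within a small Hamming distance (PUF readouts
    # are noisy, so exact-match comparison would reject genuine tags).

    def hamming(code_a, code_b):
        """Number of differing bit positions between two equal-length codes."""
        return sum(a != b for a, b in zip(code_a, code_b))

    def verify(measured_code, enrolled_codes, max_distance=2):
        """Return the ID of the enrolled tag this reading matches, if any."""
        for tag_id, enrolled in enrolled_codes.items():
            if hamming(measured_code, enrolled) <= max_distance:
                return tag_id
        return None  # unknown tag: possible counterfeit

    database = {"batch-A": "1011001110001011", "batch-B": "0100110001110100"}
    reading = "1011001110001001"  # one noisy bit relative to batch-A
    print(verify(reading, database))  # batch-A
    ```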

    And the number of possible permutations that could result from the way they mix four basic types of colored silk nanoparticles is astronomical. “We were able to show that with a minimal amount of silk, we were able to generate 128 random bits of security,” Maji says. “So this gives rise to 2 to the power 128 possible combinations, which is extremely difficult to crack given the computational capabilities of the state-of-the-art computing systems.”
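    The arithmetic behind the quoted figure: 128 random bits give 2^128 possible codes. The per-particle entropy used below (2 bits for an independent choice among the four colored particle types) is an illustrative assumption about how those bits might accumulate, not a detail stated in the article.

    ```python
    # 128 bits of randomness -> 2**128 possible codes.
    bits_per_particle = 2                      # log2(4 color types), an assumption
    particles_needed = 128 // bits_per_particle
    keyspace = 2 ** 128

    print(particles_needed)  # 64 independent choices already reach 128 bits
    print(keyspace)          # roughly 3.4e38 possible codes
    ```

    Even checking a billion codes per second, exhausting this space would take vastly longer than the age of the universe, which is the sense in which the labels are computationally infeasible to clone by guessing.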

    Marelli says that “for us, it’s a good test bed in order to think out-of-the-box, and how we can have a path that somehow is more democratic.” In this case, that means “something that you can literally read with your phone, and you can fabricate by simply drop casting a solution, without using any advanced manufacturing technique, without going in a clean room.”

    Some additional work will be needed to make this a practical commercial product, Chandrakasan says. “There will have to be a development for at-scale reading” via smartphones. “So, that’s clearly a future opportunity.” But the principle now shows a clear path to the day when “a farmer could at least, maybe not every seed, but could maybe take some random seeds in a particular batch and verify them,” he says.

    The research was partially supported by the U.S. Office of Naval Research and the National Science Foundation, Analog Devices Inc., an EECS Mathworks fellowship, and a Paul M. Cook Career Development Professorship.

    ###

    Massachusetts Institute of Technology (MIT)

  • On social media platforms, more sharing means less caring about accuracy

    Newswise — As a social media user, you may be eager to share content. You may also try to judge whether it is true. But for many people, it is difficult to prioritize both of these things at once.

    That’s the conclusion of a new experiment led by MIT scholars, which finds that even considering whether or not to share news items on social media reduces people’s ability to tell truths from falsehoods.

    The study involved asking people to assess whether various news headlines were accurate. But if participants were first asked whether they would share that content, they were 35 percent worse at telling truths from falsehoods. Participants were also 18 percent less successful at discerning truth when asked about sharing immediately after evaluating a headline’s accuracy.

    “Just asking people whether they want to share things makes them more likely to believe headlines they wouldn’t otherwise have believed, and less likely to believe headlines they would have believed,” says David Rand, a professor at the MIT Sloan School of Management and co-author of a new paper detailing the study’s results. “Thinking about sharing just mixes them up.”

    The results suggest an essential tension between sharing and accuracy in the realm of social media. While people’s willingness to share news content and their ability to judge it accurately can both be bolstered separately, the study suggests the two things do not positively reinforce each other when considered at the same time.

    “The second you ask people about accuracy, you’re prompting them, and the second you ask about sharing, you’re prompting them,” says Ziv Epstein, a PhD student in the Human Dynamics group at the MIT Media Lab and another of the paper’s co-authors. “If you ask about sharing and accuracy at the same time, it can undermine people’s capacity for truth discernment.”

    The paper, “The social media context interferes with truth discernment,” will be published in Science Advances. The authors are Epstein; Nathaniel Sirlin, a research assistant at MIT Sloan; Antonio Arechar, a professor at the Center for Research and Teaching in Economics, in Aguascalientes, Mexico; Gordon Pennycook, an associate professor at the University of Regina; and Rand, who is the Erwin H. Schell Professor, a professor of management science and of brain and cognitive sciences, and the director of MIT’s Applied Cooperation Team.  

    To carry out the study, the researchers conducted two waves of online surveys of 3,157 Americans whose demographic characteristics approximated U.S. averages for age, gender, ethnicity, and geographic distribution. All participants used either Twitter or Facebook. People were shown a series of true and false headlines about politics and the Covid-19 pandemic and were randomly assigned to conditions: some were asked only about accuracy or only about sharing content, while others were asked about both, in differing orders. From this survey design, the scholars could determine the effect that being asked about sharing content has on people’s news accuracy judgments.

    In conducting the survey, the researchers were exploring two hypotheses about sharing and news judgments. One possibility is that being asked about sharing could make people more discerning about content, because they would not want to share misleading news items. The other is that asking people about sharing headlines feeds into the generally distracted state in which consumers view news on social media, and therefore detracts from their ability to tell truth from falsity.

    “Our results are different from saying, ‘If I told you I was going to share it, then I say I believe it because I don’t want to look like I shared something I don’t believe,’” Rand says. “We have evidence that that’s not what is going on. Instead, it’s about more generalized distraction.”

    The research also examined partisan leanings among participants and found that when it came to Covid-19 headlines, being prompted about sharing affected the judgment of Republicans more than Democrats, although there was not a parallel effect for political news headlines.

    “We don’t really have an explanation for that partisan difference,” Rand says, calling the issue “an important direction for future research.”

    As for the overall findings, Rand suggests that, as daunting as the results might sound, they also contain some silver linings. One conclusion of the study is that people’s belief in falsehoods may be more influenced by their patterns of online activity than by an active intent to deceive others.

    “I think there’s in some sense a hopeful take on it, in that a lot of the message is that people aren’t immoral and purposely sharing bad things,” Rand says. “And people aren’t totally hopeless. But more it’s that the social media platforms have created an environment in which people are being distracted.”

    Eventually, the researchers say, those social media platforms could be redesigned to create settings in which people are less likely to share misleading and inaccurate news content.

    “There are ways of broadcasting posts that aren’t just focused on sharing,” Epstein says.

    He adds: “There’s so much room to grow and develop and design these platforms that are consistent with our best theories about how we process information and can make good decisions and form good beliefs. I think this is an exciting opportunity for platform designers to rethink these things as we take a step forward.”

    The project was funded in part by the MIT Sloan Latin America Office; the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation; the William and Flora Hewlett Foundation; the Reset initiative of Luminate; the John Templeton Foundation; the TDF Foundation; the Canadian Institutes of Health Research; the Social Sciences and Humanities Research Council of Canada; the Australian Research Council; Google; and Facebook.

    Massachusetts Institute of Technology (MIT)

  • Study: Preschool gives a big boost to college attendance

    Newswise — Attending preschool at age 4 makes children significantly more likely to go to college, according to an empirical study led by an MIT economist.

    The study examines children who attended public preschools in Boston from 1997 to 2003. It finds that among students of similar backgrounds, attendance at a public preschool raised “on-time” college enrollment — starting right after high school — by 8.3 percentage points, an 18 percent increase. There was also a 5.4 percentage point increase in college attendance at any time.
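    The two figures in this paragraph are easy to conflate: percentage points measure an absolute gain, while percent measures a relative one. Together they also imply the baseline enrollment rate; the back-solved baseline below is an inference from the reported numbers, not a figure stated in the study.

    ```python
    # "8.3 percentage points, an 18 percent increase": back-solving the
    # implied baseline on-time enrollment rate from the two reported figures.
    gain_points = 8.3          # absolute gain, in percentage points
    relative_increase = 0.18   # the same gain as a relative (percent) change

    baseline = gain_points / relative_increase   # implied baseline rate
    print(round(baseline, 1))                # ~46.1 percent without preschool
    print(round(baseline + gain_points, 1))  # ~54.4 percent with preschool
    ```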

    “We find that 4-year-olds who were randomly allocated a seat in a public Boston preschool during this time period, 1997 to 2003, are more likely to attend college, and that it’s a pretty large effect,” says Parag Pathak, a professor in MIT’s Department of Economics and co-author of a newly published paper detailing the study’s results. “They’re also more likely to graduate from high school, and they’re more likely to take the SAT.”

    The study does not find a connection between preschool attendance and higher scores for students on Massachusetts’ standardized tests. But it does find that children who attended preschool had fewer behavioral issues later on, including fewer suspensions, less absenteeism, and fewer legal-system problems.

    “There are many things that influence whether you go to college, and these behavioral outcomes are relevant to that,” says Pathak, who is also a director of Blueprint Labs, an MIT research center that uses advanced empirical methods to examine issues in education, health care, and the workforce.

    The paper, “The Long-Term Effects of Universal Preschool in Boston,” is published in the February issue of the Quarterly Journal of Economics. The authors are Guthrie Gray-Lobe, a research associate at the Becker-Friedman Institute for Economics at the University of Chicago and a research affiliate at MIT’s Blueprint Labs; Pathak, who is the Class of 1922 Professor of Economics at MIT; and Christopher Walters PhD ’13, an associate professor of economics at the University of California at Berkeley.

    Lottery numbers

    Publicly funded preschool programs have become increasingly popular and prevalent in recent decades. Across the U.S., 44 states operated publicly funded preschool programs as of 2019, along with 24 of the 40 biggest U.S. cities. The share of U.S. 4-year-olds in a public preschool program grew from 14 percent in 2002 to 34 percent in 2019.

    To conduct the study, the researchers followed the academic trajectories of over 4,000 students, in seven cohorts from 1997 to 2003, who took part in a lottery the Boston public school system conducted to place students into a limited number of available preschool slots.

    The use of the lottery makes the study rigorous: It creates a natural experiment, allowing the researchers to track the educational outcomes of two groups of students from otherwise similar backgrounds in the same school system. In this case, one group attended preschool, while the other did not. That approach has rarely been applied to studies of preschool programs. 

    “The [method] of this work is to take advantage of the elaborate rationing that happens in big-city school districts in their choice processes. We’ve developed techniques to find the right treatment and control comparisons in data produced by these systems,” Pathak says.

    The study also found a 5.9 percentage point jump in attendance at four-year colleges for students who had attended preschool. Preschool-educated students also were 8.5 percentage points more likely to take the SAT.

    “It’s fairly rare to find school-based interventions that have effects of this magnitude,” says Pathak, who won the 2018 John Bates Clark Medal, awarded annually by the American Economic Association to the best economist under age 40 in the U.S.

    But while the study does find that preschool increases SAT-taking, there was no discernible change on the MCAS, the standardized tests Massachusetts students take in multiple fields in elementary school, middle school, and high school. That stands in contrast to the well-documented link in education between higher test scores and college attendance.

    “It’s not the case that we have an increase in test scores and it corresponds with an increase in college-going,” Pathak says. “That’s very intriguing.” At the same time, he adds, “I don’t think the takeaway here is we shouldn’t have people take tests.”

    On their best behavior?

    Indeed, the study’s findings suggest that preschool may have a long-term beneficial effect that is not strictly or even primarily academic, but has an important behavioral component. Children attending preschool may be gaining important behavioral habits that keep them out of trouble. For instance: Attending preschool lowers juvenile incarceration by 1 percentage point. 

    “If I had to speculate what’s behind these long-term effects for college, this is our leading hypothesis,” Pathak says of the reduction in behavioral problems. “There’s a lot more that needs to be done on this. It’s an intriguing finding. Others have highlighted these sorts of so-called noncognitive sleeper effects of education, and I’ve been quite skeptical about it. But now our own findings suggest there may be something to that story.”

    While academic research about preschool programs dates at least to the 1960s, the current study has a distinctive set of attributes and findings, including the use of the Boston lottery to create a natural experiment; the long-range nature of the effects being found; and the combination of minimal impact on test scores coupled with indications that preschool has lasting behavioral benefits.

    “There are probably two broader lessons,” Pathak says. “We cannot judge the effectiveness of early childhood interventions by just looking at short-run outcomes, stopping by third grade. You’d get a totally misleading picture of Boston’s program if you did that. The second is that I think it’s really critical to measure outcomes beyond test scores, such as these behavioral outcomes, to have a more complete picture of what’s happening to the child.”

    Shedding more light on the subject is possible, Pathak thinks, by further analyzing preschool programs with policies that create natural experiments.

    “We’re really excited because there’s a lot of potential to apply our approach to other settings,” Pathak says.

    The study was supported, in part, by the W.T. Grant Early Career Scholars Program, while the Boston Public Schools and Massachusetts Department of Elementary and Secondary Education helped facilitate the research.

    ###

    Massachusetts Institute of Technology (MIT)

  • New sensor uses MRI to detect light deep in the brain

    Newswise — CAMBRIDGE, MA — Using a specialized MRI sensor, MIT researchers have shown that they can detect light deep within tissues such as the brain.

    Imaging light in deep tissues is extremely difficult because as light travels into tissue, much of it is either absorbed or scattered. The MIT team overcame that obstacle by designing a sensor that converts light into a magnetic signal that can be detected by MRI (magnetic resonance imaging).

    This type of sensor could be used to map light emitted by optical fibers implanted in the brain, such as the fibers used to stimulate neurons during optogenetic experiments. With further development, it could also prove useful for monitoring patients who receive light-based therapies for cancer, the researchers say.

    “We can image the distribution of light in tissue, and that’s important because people who use light to stimulate tissue or to measure from tissue often don’t quite know where the light is going, where they’re stimulating, or where the light is coming from. Our tool can be used to address those unknowns,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

    Jasanoff, who is also an associate investigator at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Biomedical Engineering. Jacob Simon PhD ’21 and MIT postdoc Miriam Schwalm are the paper’s lead authors, and Johannes Morstein and Dirk Trauner of New York University are also authors of the paper.

    A light-sensitive probe

    Scientists have been using light to study living cells for hundreds of years, dating back to the late 1500s, when the light microscope was invented. This kind of microscopy allows researchers to peer inside cells and thin slices of tissue, but not deep inside an organism.

    “One of the persistent problems in using light, especially in the life sciences, is that it doesn’t do a very good job penetrating many materials,” Jasanoff says. “Biological materials absorb light and scatter light, and the combination of those things prevents us from using most types of optical imaging for anything that involves focusing in deep tissue.”

    To overcome that limitation, Jasanoff and his students decided to design a sensor that could transform light into a magnetic signal.

    “We wanted to create a magnetic sensor that responds to light locally, and therefore is not subject to absorbance or scattering. Then this light detector can be imaged using MRI,” he says.

    Jasanoff’s lab has previously developed MRI probes that can interact with a variety of molecules in the brain, including dopamine and calcium. When these probes bind to their targets, it affects the sensors’ magnetic interactions with the surrounding tissue, dimming or brightening the MRI signal.

    To make a light-sensitive MRI probe, the researchers decided to encase magnetic particles in a nanoparticle called a liposome. The liposomes used in this study are made from specialized light-sensitive lipids that Trauner had previously developed. When these lipids are exposed to a certain wavelength of light, the liposomes become more permeable to water, or “leaky.” This allows the magnetic particles inside to interact with water and generate a signal detectable by MRI.

    The particles, which the researchers called liposomal nanoparticle reporters (LisNR), can switch from permeable to impermeable depending on the type of light they’re exposed to. In this study, the researchers created particles that become leaky when exposed to ultraviolet light, and then become impermeable again when exposed to blue light. The researchers also showed that the particles could respond to other wavelengths of light.

    “This paper shows a novel sensor to enable photon detection with MRI through the brain. This illuminating work introduces a new avenue to bridge photon and proton-driven neuroimaging studies,” says Xin Yu, an assistant professor of radiology at Harvard Medical School, who was not involved in the study.

    Mapping light

    The researchers tested the sensors in the brains of rats — specifically, in a part of the brain called the striatum, which is involved in planning movement and responding to reward. After injecting the particles throughout the striatum, the researchers were able to map the distribution of light from an optical fiber implanted nearby.

    The fiber they used is similar to those used for optogenetic stimulation, so this kind of sensing could be useful to researchers who perform optogenetic experiments in the brain, Jasanoff says.

    “We don’t expect that everybody doing optogenetics will use this for every experiment — it’s more something that you would do once in a while, to see whether a paradigm that you’re using is really producing the profile of light that you think it should be,” Jasanoff says.

    In the future, this type of sensor could also be useful for monitoring patients receiving treatments that involve light, such as photodynamic therapy, which uses light from a laser or LED to kill cancer cells.

    The researchers are now working on similar probes that could be used to detect light emitted by luciferases, a family of glowing proteins that are often used in biological experiments. These proteins can be used to reveal whether a particular gene is activated or not, but currently they can only be imaged in superficial tissue or cells grown in a lab dish.

    Jasanoff also hopes to use the strategy used for the LisNR sensor to design MRI probes that can detect stimuli other than light, such as neurochemicals or other molecules found in the brain.

    “We think that the principle that we use to construct these sensors is quite broad and can be used for other purposes too,” he says.

    ###

    The research was funded by the National Institutes of Health, the G. Harold and Leyla Y. Mathers Foundation, a Friends of the McGovern Fellowship from the McGovern Institute for Brain Research, the MIT Neurobiological Engineering Training Program, and a Marie Curie Individual Fellowship from the European Commission.

    Massachusetts Institute of Technology (MIT)

  • Study: Without more data, a black hole’s origins can be “spun” in any direction


    Newswise — Clues to a black hole’s origins can be found in the way it spins. This is especially true for binaries, in which two black holes circle close together before merging. The spin and tilt of the respective black holes just before they merge can reveal whether the invisible giants arose from a quiet galactic disk or a more dynamic cluster of stars.

    Astronomers are hoping to tease out which of these origin stories is more likely by analyzing the 69 confirmed binaries detected to date. But a new study finds that for now, the current catalog of binaries is not enough to reveal anything fundamental about how black holes form.

    In a study appearing in the journal Astronomy and Astrophysics Letters, MIT physicists show that when all the known binaries and their spins are worked into models of black hole formation, the conclusions can look very different, depending on the particular model used to interpret the data.

    A black hole’s origins can therefore be “spun” in different ways, depending on a model’s assumptions of how the universe works.

    “When you change the model and make it more flexible or make different assumptions, you get a different answer about how black holes formed in the universe,” says study co-author Sylvia Biscoveanu, an MIT graduate student working in the LIGO Laboratory. “We show that people need to be careful because we are not yet at the stage with our data where we can believe what the model tells us.”

    The study’s co-authors include Colm Talbot, an MIT postdoc, and Salvatore Vitale, an associate professor of physics and a member of the Kavli Institute for Astrophysics and Space Research at MIT.

    A tale of two origins

    Black holes in binary systems are thought to arise via one of two paths. The first is through “field binary evolution,” in which two stars evolve together and eventually explode in supernovae, leaving behind two black holes that continue circling in a binary system. In this scenario, the black holes should have relatively aligned spins, as they would have had time — first as stars, then black holes — to pull and tug each other into similar orientations. If a binary’s black holes have roughly the same spin, scientists believe they must have evolved in a relatively quiet environment, such as a galactic disk.

    Black hole binaries can also form through “dynamical assembly,” where two black holes evolve separately, each with its own distinct tilt and spin. Through some extreme astrophysical processes, the black holes are eventually brought together, close enough to form a binary system. Such a dynamical pairing would likely occur not in a quiet galactic disk, but in a denser environment, such as a globular cluster, where the interaction of thousands of stars can knock two black holes together. If a binary’s black holes have randomly oriented spins, they likely formed in a globular cluster.
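The two channels imply different statistics for the tilt angle between each black hole’s spin and the orbital axis. A minimal sketch of that contrast, using assumed toy distributions (a half-Gaussian spread near alignment for field binaries, isotropic tilts for dynamical assembly) rather than the astrophysical models actually used in the study:

```python
import random


def sample_cos_tilt(channel, sigma=0.2, rng=random):
    """Toy spin-tilt model. 'field' binaries have tilts clustered near
    alignment (cos(tilt) close to 1, with an assumed spread sigma);
    'dynamical' binaries have isotropic tilts, i.e. cos(tilt) uniform
    on [-1, 1]."""
    if channel == "field":
        # Half-Gaussian in (1 - cos tilt), rejected if outside the
        # physical range [-1, 1].
        while True:
            c = 1.0 - abs(rng.gauss(0.0, sigma))
            if -1.0 <= c <= 1.0:
                return c
    elif channel == "dynamical":
        return rng.uniform(-1.0, 1.0)
    raise ValueError(channel)


def mean_cos_tilt(channel, n=20000, seed=1):
    """Monte Carlo average of cos(tilt) for one formation channel."""
    rng = random.Random(seed)
    return sum(sample_cos_tilt(channel, rng=rng) for _ in range(n)) / n
```

For the field channel the average cosine comes out close to 1, while the dynamical channel averages to roughly zero; real analyses compare the measured tilt distribution of the detected binaries against parameterized mixtures of exactly this kind.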

    But what fraction of binaries form through one channel versus the other? The answer, astronomers believe, should lie in data, and particularly, measurements of black hole spins.

    To date, astronomers have derived the spins of black holes in 69 binaries, which have been discovered by a network of gravitational-wave detectors including LIGO in the U.S., and its Italian counterpart Virgo. Each detector listens for signs of gravitational waves — very subtle reverberations through space-time that are left over from extreme, astrophysical events such as the merging of massive black holes.

    With each binary detection, astronomers have estimated the respective black holes’ properties, including their masses and spins. They have worked the spin measurements into a generally accepted model of black hole formation, and found signs that binaries could have both a preferred, aligned spin, as well as random spins. That is, the universe could produce binaries in both galactic disks and globular clusters.

    “But we wanted to know, do we have enough data to make this distinction?” Biscoveanu says. “And it turns out, things are messy and uncertain, and it’s harder than it looks.”

    Spinning the data

    In their new study, the MIT team tested whether the same data would yield the same conclusions when worked into slightly different theoretical models of how black holes form.

    The team first reproduced LIGO’s spin measurements in a widely used model of black hole formation. This model assumes that a fraction of binaries in the universe prefer to produce black holes with aligned spins, while the rest of the binaries have random spins. They found that the data appeared to agree with this model’s assumptions and showed a peak where the model predicted there should be more black holes with similar spins.

    They then tweaked the model slightly, altering its assumptions such that it predicted a slightly different orientation of preferred black hole spins. When they worked the same data into this tweaked model, they found the data shifted to line up with the new predictions. The data also made similar shifts in 10 other models, each with a different assumption of how black holes prefer to spin.

    “Our paper shows that your result depends entirely on how you model your astrophysics, rather than the data itself,” Biscoveanu says.

    “We need more data than we thought, if we want to make a claim that is independent of the astrophysical assumptions we make,” Vitale adds.

    Just how much more data will astronomers need? Vitale estimates that once the LIGO network starts back up in early 2023, the instruments will detect one new black hole binary every few days. Over the next year, that could add hundreds more measurements to the data.

    “The measurements of the spins we have now are very uncertain,” Vitale says. “But as we build up a lot of them, we can gain better information. Then we can say, no matter the detail of my model, the data always tells me the same story — a story that we could then believe.”

    This research was supported in part by the National Science Foundation.

    ###

     

    Additional background

    Paper: “Spin it as you like: the (lack of a) measurement of the spin tilt distribution with LIGO-Virgo-KAGRA binary black holes”

    https://www.aanda.org/articles/aa/full_html/2022/12/aa45084-22/aa45084-22.html


  • Earth can regulate its own temperature over millennia, new study finds


    Newswise — The Earth’s climate has undergone some big changes, from global volcanism to planet-cooling ice ages and dramatic shifts in solar radiation. And yet life, for the last 3.7 billion years, has kept on beating.

    Now, a study by MIT researchers in Science Advances confirms that the planet harbors a “stabilizing feedback” mechanism that acts over hundreds of thousands of years to pull the climate back from the brink, keeping global temperatures within a steady, habitable range.

    Just how does it accomplish this? A likely mechanism is “silicate weathering” — a geological process by which the slow and steady weathering of silicate rocks involves chemical reactions that ultimately draw carbon dioxide out of the atmosphere and into ocean sediments, trapping the gas in rocks.

    Scientists have long suspected that silicate weathering plays a major role in regulating the Earth’s carbon cycle. The mechanism of silicate weathering could provide a geologically constant force in keeping carbon dioxide — and global temperatures — in check. But there’s never been direct evidence for the continual operation of such a feedback, until now.

    The new findings are based on a study of paleoclimate data that record changes in average global temperatures over the last 66 million years. The MIT team applied a mathematical analysis to see whether the data revealed any patterns characteristic of stabilizing phenomena that reined in global temperatures on a geologic timescale.

    They found that indeed there appears to be a consistent pattern in which the Earth’s temperature swings are dampened over timescales of hundreds of thousands of years. The duration of this effect is similar to the timescales over which silicate weathering is predicted to act.

    The results are the first to use actual data to confirm the existence of a stabilizing feedback, the mechanism of which is likely silicate weathering. This stabilizing feedback would explain how the Earth has remained habitable through dramatic climate events in the geologic past.

    “On the one hand, it’s good because we know that today’s global warming will eventually be canceled out through this stabilizing feedback,” says Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS). “But on the other hand, it will take hundreds of thousands of years to happen, so not fast enough to solve our present-day issues.”

    The study is co-authored by Arnscheidt and Daniel Rothman, professor of geophysics at MIT.

    Stability in data

    Scientists have previously seen hints of a climate-stabilizing effect in the Earth’s carbon cycle: Chemical analyses of ancient rocks have shown that the flux of carbon in and out of Earth’s surface environment has remained relatively balanced, even through dramatic swings in global temperature. Furthermore, models of silicate weathering predict that the process should have some stabilizing effect on the global climate. And finally, the fact of the Earth’s enduring habitability points to some inherent, geologic check on extreme temperature swings.

    “You have a planet whose climate was subjected to so many dramatic external changes. Why did life survive all this time? One argument is that we need some sort of stabilizing mechanism to keep temperatures suitable for life,” Arnscheidt says. “But it’s never been demonstrated from data that such a mechanism has consistently controlled Earth’s climate.”

    Arnscheidt and Rothman sought to confirm whether a stabilizing feedback has indeed been at work by looking at data of global temperature fluctuations through geologic history. They worked with a range of global temperature records compiled by other scientists from the chemical composition of ancient marine fossils and shells, as well as from preserved Antarctic ice cores.

    “This whole study is only possible because there have been great advances in improving the resolution of these deep-sea temperature records,” Arnscheidt notes. “Now we have data going back 66 million years, with data points at most thousands of years apart.”

    Speeding to a stop

    To the data, the team applied the mathematical theory of stochastic differential equations, which is commonly used to reveal patterns in widely fluctuating datasets.

    “We realized this theory makes predictions for what you would expect Earth’s temperature history to look like if there had been feedbacks acting on certain timescales,” Arnscheidt explains.

    Using this approach, the team analyzed the history of average global temperatures over the last 66 million years, considering the entire period over different timescales, such as tens of thousands of years versus hundreds of thousands, to see whether any patterns of stabilizing feedback emerged within each timescale.

    “To some extent, it’s like your car is speeding down the street, and when you put on the brakes, you slide for a long time before you stop,” Rothman says. “There’s a timescale over which frictional resistance, or a stabilizing feedback, kicks in, when the system returns to a steady state.”

    Without stabilizing feedbacks, fluctuations of global temperature should grow with timescale. But the team’s analysis revealed a regime in which fluctuations did not grow, implying that a stabilizing mechanism reined in the climate before fluctuations grew too extreme. The timescale for this stabilizing effect — hundreds of thousands of years — coincides with what scientists predict for silicate weathering.
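The signature the team looked for can be illustrated with a toy stochastic model: a pure random walk (no feedback) has fluctuations that keep growing with timescale, while a process with a damping term saturates. This is only a schematic of the idea with arbitrary parameter values, not the authors’ actual analysis of the temperature records:

```python
import math
import random


def simulate(n_steps, dt=1.0, theta=0.0, sigma=1.0, seed=0):
    """Euler-Maruyama path of dT = -theta*T dt + sigma dW.
    theta=0 gives an unchecked random walk; theta > 0 adds a
    stabilizing feedback that damps large excursions."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        path.append(x)
    return path


def rms_fluctuation(path, lag):
    """Root-mean-square change of the series over a given timescale."""
    diffs = [path[i + lag] - path[i] for i in range(len(path) - lag)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

For the undamped walk, the RMS change grows roughly as the square root of the lag; with damping it plateaus beyond the feedback timescale at a level set by sigma and theta. A plateau of that kind, appearing at lags of hundreds of thousands of years, is the pattern the researchers identified in the paleoclimate data.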

    Interestingly, Arnscheidt and Rothman found that on longer timescales, the data did not reveal any stabilizing feedbacks. That is, there doesn’t appear to be any recurring pull-back of global temperatures on timescales longer than a million years. Over these longer timescales, then, what has kept global temperatures in check?

    “There’s an idea that chance may have played a major role in determining why, after more than 3 billion years, life still exists,” Rothman offers.

    In other words, as the Earth’s temperature fluctuated over these longer stretches, the fluctuations may simply have happened to stay small enough, in the geologic sense, for a stabilizing feedback such as silicate weathering to periodically pull the climate back in check and, more to the point, keep it within a habitable zone.

    “There are two camps: Some say random chance is a good enough explanation, and others say there must be a stabilizing feedback,” Arnscheidt says. “We’re able to show, directly from data, that the answer is probably somewhere in between. In other words, there was some stabilization, but pure luck likely also played a role in keeping Earth continuously habitable.”

    This research was supported in part by a MathWorks fellowship and the National Science Foundation.


  • Building with Nanoparticles, From the Bottom Up


    Newswise — Researchers at MIT have developed a technique for precisely controlling the arrangement and placement of nanoparticles on a material, like the silicon used for computer chips, in a way that does not damage or contaminate the surface of the material.

    The technique, which combines chemistry and directed assembly processes with conventional fabrication techniques, enables the efficient formation of high-resolution, nanoscale features integrated with nanoparticles for devices like sensors, lasers, and LEDs, which could boost their performance.

    Transistors and other nanoscale devices are typically fabricated from the top down — materials are etched away to reach the desired arrangement of nanostructures. But creating the smallest nanostructures, which can enable the highest performance and new functionalities, requires expensive equipment and remains difficult to do at scale and with the desired resolution.

    A more precise way to assemble nanoscale devices is from the bottom up. In one scheme, engineers have used chemistry to “grow” nanoparticles in solution, drop that solution onto a template, arrange the nanoparticles, and then transfer them to a surface. However, this technique also involves steep challenges. First, thousands of nanoparticles must be arranged on the template efficiently. And transferring them to a surface typically requires a chemical glue, large pressure, or high temperatures, which could damage the surfaces and the resulting device.

    The MIT researchers developed a new approach to overcome these limitations. They used the powerful forces that exist at the nanoscale to efficiently arrange particles in a desired pattern and then transfer them to a surface without any chemicals or high pressures, and at lower temperatures. Because the surface material remains pristine, these nanoscale structures can be incorporated into components for electronic and optical devices, where even minuscule imperfections can hamper performance.

    “This approach allows you, through engineering of forces, to place the nanoparticles, despite their very small size, in deterministic arrangements with single-particle resolution and on diverse surfaces, to create libraries of nanoscale building blocks that can have very unique properties, whether it is their light-matter interactions, electronic properties, mechanical performance, etc.,” says Farnaz Niroui, the EE Landsman Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the MIT Research Laboratory of Electronics, and senior author on a new paper describing the work. “By integrating these building blocks with other nanostructures and materials we can then achieve devices with unique functionalities that would not be readily feasible to make if we were to use the conventional top-down fabrication strategies alone.”

    The research is published in Science Advances. Niroui’s co-authors are lead author Weikun “Spencer” Zhu, a graduate student in the Department of Chemical Engineering, as well as EECS graduate students Peter F. Satterthwaite, Patricia Jastrzebska-Perfect, and Roberto Brenes.

    Use the forces

    To begin their fabrication method, known as nanoparticle contact printing, the researchers use chemistry to create nanoparticles with a defined size and shape in a solution. To the naked eye, this looks like a vial of colored liquid, but zooming in with an electron microscope would reveal millions of cubes, each just 50 nanometers in size. (A human hair is about 80,000 nanometers wide.)

    The researchers then make a template in the form of a flexible surface covered with nanoparticle-sized guides, or traps, that are arranged in the shape they want the nanoparticles to take. After adding a drop of nanoparticle solution to the template, they use two nanoscale forces to move the particles into the right position. The nanoparticles are then transferred onto arbitrary surfaces.

    At the nanoscale, different forces become dominant (just like gravity is a dominant force at the macroscale). Capillary forces are dominant when the nanoparticles are in liquid and van der Waals forces are dominant at the interface between the nanoparticles and the solid surface they are in contact with. When the researchers add a drop of liquid and drag it across the template, capillary forces move the nanoparticles into the desired trap, placing them precisely in the right spot. Once the liquid dries, van der Waals forces hold those nanoparticles in position.
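A back-of-envelope comparison shows just how lopsided the force balance is at this scale. The numbers below (Hamaker constant, contact gap, density) are assumed, textbook-typical values for illustration, not measurements from the study:

```python
# Rough comparison of van der Waals adhesion versus gravity for a
# 50-nanometer particle, using assumed, order-of-magnitude inputs.
HAMAKER = 1e-19   # J, typical Hamaker constant for solids (assumed)
RADIUS = 25e-9    # m, effective particle radius
GAP = 0.4e-9      # m, typical contact separation (assumed)
DENSITY = 5000.0  # kg/m^3, assumed particle density
SIDE = 50e-9      # m, cube edge length
G = 9.81          # m/s^2


def vdw_force():
    """Sphere-plate van der Waals adhesion, F = A*R / (6*D^2)."""
    return HAMAKER * RADIUS / (6 * GAP ** 2)


def weight():
    """Gravitational force on the cube."""
    return DENSITY * SIDE ** 3 * G


ratio = vdw_force() / weight()
```

With these inputs the adhesion force exceeds the particle’s weight by roughly eight orders of magnitude, which is why the engineered capillary and van der Waals forces, not gravity, decide where a 50-nanometer cube ends up.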

    “These forces are ubiquitous and can often be detrimental when it comes to the fabrication of nanoscale objects as they can cause the collapse of the structures. But we are able to come up with ways to control these forces very precisely to use them to control how things are manipulated at the nanoscale,” says Zhu.

    They design the template guides to be the right size and shape, and in precisely the proper arrangement, so the forces work together to arrange the particles. The nanoparticles are then printed onto surfaces without a need for any solvents, surface treatments, or high temperatures. This keeps the surfaces pristine and properties intact while allowing yields of more than 95 percent. To enable this transfer, the surface forces need to be engineered so that the van der Waals forces are strong enough to consistently release the particles from the template and attach them to the receiving surface when the two are placed in contact.

    Unique shapes, diverse materials, scalable processing

    The team used this technique to arrange nanoparticles into arbitrary shapes, such as letters of the alphabet, and then transferred them to silicon with very high position accuracy. The method also works with nanoparticles that have other shapes, such as spheres, and with diverse material types. And it can transfer nanoparticles effectively onto different surfaces, like gold or even flexible substrates for next-generation electrical and optical structures and devices.

    Their approach is also scalable, so it can be extended toward the fabrication of real-world devices.

    Niroui and her colleagues are now working to leverage this approach to create even more complex structures and integrate it with other nanoscale materials to develop new types of electronic and optical devices.

    This work was supported, in part, by the National Science Foundation (NSF) and the NSF Graduate Research Fellowship Program.

    ###


  • Method for decoding asteroid interiors could help aim asteroid-deflecting missions


    Newswise — NASA hit a bullseye in late September with DART, the Double Asteroid Redirection Test, which flew a spacecraft straight at the heart of a nearby asteroid. The one-way kamikaze mission smashed into the stadium-sized space rock and successfully reset the asteroid’s orbit. DART was the first test of a planetary defense strategy, demonstrating that scientists could potentially deflect an asteroid headed for Earth.

    Now MIT researchers have a tool that may improve the aim of future asteroid-targeting missions. The team has developed a method to map an asteroid’s interior structure, or density distribution, based on how the asteroid’s spin changes as it makes a close encounter with more massive objects like the Earth.

    Knowing how the density is distributed inside an asteroid could help scientists plan the most effective defense. For instance, if an asteroid were made of relatively light and uniform matter, a DART-like spacecraft could be aimed differently than if it were deflecting an asteroid with a denser, less balanced interior.

    “If you know the density distribution of the asteroid, you could hit it at just the right spot so it actually moves away,” says Jack Dinsmore ’22, who developed the new asteroid-mapping technique as an MIT undergraduate majoring in physics.  

    The team is eager to apply the method to Apophis, a near-Earth asteroid that is estimated to pose a significant hazard if it were ever to strike Earth. Scientists have ruled out the likelihood of a collision during Apophis’ next flybys for at least a century. Beyond that, their forecasts grow fuzzy.

    “Apophis will miss Earth in 2029, and scientists have cleared it for its next few encounters, but we can’t clear it forever,” says Dinsmore, who is now a graduate student at Stanford University. “So, it’s good to understand the nature of this particular asteroid, because if we ever need to redirect it, it’s important to understand what it’s made of.”

    Dinsmore and Julien de Wit, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), detail their new method in a study appearing today in the Monthly Notices of the Royal Astronomical Society.

    Spinning boiled versus raw

    The seeds of the team’s asteroid-mapping method grew out of an MIT class Dinsmore took last year, taught by de Wit. The class, 12.401 (Essentials of Planetary Sciences), introduces the basic principles and formation mechanisms of planets, asteroids, and other objects in the solar system. As a final project, Dinsmore explored how an asteroid behaves during a close encounter.

    In class, he wrote a code to simulate various shapes and sizes of asteroids as well as how their orbital and spin dynamics change when influenced by the gravitational pull of a more massive object like the Earth.

    “I initially just tried to ask, what happens when an asteroid passes by Earth? Does it respond at all? Because I wasn’t sure,” Dinsmore recalls. “And the answer is, it does, in a way that depends very strongly on the shape and physical properties of the asteroid.”

    That initial realization prompted another question: Could the dynamics of an asteroid’s close encounter be used to predict not just its shape and size, but also its internal makeup? To get at an answer, Dinsmore continued the project with de Wit, through the MIT Undergraduate Research Opportunities Program (UROP), which enables students to perform original research with a faculty member.

    He and de Wit took a deeper dive into the dynamics of a close encounter, writing out a more complex code, which they used to simulate a zoo of different asteroids, each with a different size, shape, and internal composition, or distribution of density. They then ran the simulation forward to see how each asteroid’s spin should wobble or shift as it passes close to an object of a certain mass and gravitational pull.  
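A heavily simplified version of such a simulation conveys the core idea: during a flyby, the gravity gradient exerts a torque proportional to the difference of the body’s principal moments of inertia, so two asteroids with identical shapes but different interior density distributions pick up different spin changes. The sketch below uses a straight-line (rather than hyperbolic) trajectory and invented parameters; it is an illustration of the physics, not the team’s code:

```python
import math

MU_EARTH = 3.986e14  # m^3/s^2, Earth's gravitational parameter


def spin_change(inertia_ratio, b=2.0e7, v=7000.0,
                omega0=2 * math.pi / (8 * 3600.0),
                t_span=2.0e5, dt=10.0):
    """Net change in spin rate (rad/s) for a planar rigid body flying
    past Earth on a straight line with impact parameter b and speed v.
    The gravity-gradient torque gives an angular acceleration
    (3*mu / (2*r^3)) * inertia_ratio * sin(2*(phi - theta)), where
    inertia_ratio = (I1 - I2)/I3 encodes the interior mass
    distribution (0 for a symmetric interior)."""
    theta, omega = 0.0, omega0
    steps = int(t_span / dt)
    for i in range(steps):
        t = -t_span / 2 + i * dt
        x = v * t                 # position along the flyby line
        r = math.hypot(x, b)      # distance to Earth
        phi = math.atan2(-b, -x)  # direction from body to Earth
        accel = (3 * MU_EARTH / (2 * r ** 3)) * inertia_ratio \
            * math.sin(2 * (phi - theta))
        omega += accel * dt       # explicit Euler integration
        theta += omega * dt
    return omega - omega0
```

A body with a perfectly symmetric interior comes through with its spin untouched, while lopsided interiors are torqued by different amounts. Inverting that relationship, from observed spin changes back to the interior structure that produced them, is the harder problem the team’s method tackles.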

    “It’s similar to how you can tell the difference between a raw and boiled egg,” de Wit offers. “If you spin the egg, the egg responds and spins differently depending on its interior properties. The same goes for an asteroid during a close encounter: You can get a grasp of what’s happening on the inside just by looking at how it responds to the strong gravitational forces it experiences during a flyby.”

    A close match

    The team is presenting their results in a new software “toolkit,” which they name AIME, for Asteroid Interior Mapping from Encounters (the acronym also translates as “love” in French). The software can be used to reconstruct the internal density distribution of an asteroid, from observations of its spin change during a close encounter.

    The researchers say that, if scientists can take more detailed measurements of asteroids and their spin dynamics during close encounters, these measurements could be used to improve AIME’s reconstructions of asteroid interiors.

    Their best chance, they say, may come with Apophis. When the asteroid makes its forthcoming close encounters, de Wit and Dinsmore hope astronomers will point their telescopes at the space rock to measure its size, shape, and spin evolution as it streaks past. They could then feed these measurements into AIME to find a match — a simulated asteroid with the same size, shape, and spin dynamics as Apophis, that also relates to a particular interior density distribution.

    “Then, with AIME, you could publish a density map that most likely represents Apophis’ interior,” Dinsmore says.

    “Understanding the interior properties of asteroids helps us understand the extent to which close encounters could be of concern, and how to deal with them, as well as where they formed and how they got here,” de Wit adds. “Now with this framework, there’s a new way of getting a look inside an asteroid.”

    This research was supported, in part, by the MIT UROP office.

    ###
