ReportWire

Tag: Artificial Intelligence

  • AI-guided screening uses electrocardiogram data to detect hidden stroke risk factors


    Newswise — ROCHESTER, Minnesota — Mayo Clinic researchers used artificial intelligence to evaluate patients’ electrocardiograms as part of a targeted strategy to detect atrial fibrillation, a common heart rhythm disorder. Atrial fibrillation is an irregular heartbeat that can lead to blood clots that may travel to the brain and cause a stroke, and it is often difficult to diagnose. In the decentralized, digitally conducted study, artificial intelligence identified new cases of atrial fibrillation that would not have been caught clinically in routine medical care. 

    In earlier research, an artificial intelligence algorithm had already been developed to identify patients with a high probability of having previously unknown atrial fibrillation. nference and Mayo Clinic licensed the algorithm, which detects atrial fibrillation from an electrocardiogram taken in normal sinus rhythm, to Anumana Inc., an AI-driven health technology company. 

    Dr. Peter Noseworthy, a cardiac electrophysiologist at Mayo Clinic and lead author of the study, said: “We believe atrial fibrillation screening holds a lot of promise, but currently the yield is too low, and the costs are too high, to enable widespread screening. The study shows that an artificial intelligence algorithm applied to an electrocardiogram can help target screening toward the patients most likely to benefit from it.” 

    The study enrolled 1,003 patients who underwent continuous monitoring, and another 1,003 patients receiving usual care served as real-world controls. The findings, published in The Lancet, showed that artificial intelligence can identify a high-risk subgroup of patients who would gain the most from additional intensive cardiac monitoring for atrial fibrillation, supporting the targeted, AI-guided screening strategy. 

    Electrocardiograms are routinely performed to diagnose a variety of conditions, but because atrial fibrillation can be brief, the chance of catching an episode during a 10-second ECG recording is low. Patients can undergo intermittent or continuous cardiac monitoring approaches that achieve higher detection rates, but applying them to everyone would be resource-intensive, and the monitoring can be burdensome and costly for patients. 

    This is where the AI-guided ECG can help. The artificial intelligence algorithm can identify patients who, even though they have a normal heart rhythm on the day of their ECG, may be at elevated risk of undetected episodes of atrial fibrillation at other times. These patients can then undergo additional monitoring to confirm the diagnosis. 
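    The triage logic described here — score each routine ECG with a model, then route only high-scoring patients to extended monitoring — can be sketched as follows. This is an illustrative sketch only: the `select_for_monitoring` function, the patient IDs, the scores, and the 0.5 cutoff are hypothetical placeholders, not Mayo Clinic's actual algorithm or threshold.

```python
# Sketch of AI-guided screening triage: route only patients whose
# model risk score exceeds a threshold to extended cardiac monitoring.
# The scores and the 0.5 cutoff are illustrative, not the study's values.

def select_for_monitoring(patients, threshold=0.5):
    """Return IDs of patients whose AF risk score meets the threshold."""
    return [pid for pid, score in patients.items() if score >= threshold]

# Hypothetical model outputs for four screened patients.
scores = {"pt-001": 0.12, "pt-002": 0.81, "pt-003": 0.47, "pt-004": 0.66}

high_risk = select_for_monitoring(scores)
print(high_risk)  # only these patients proceed to continuous monitoring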

    “Traditional screening programs select patients by age (over 65) or by the presence of conditions such as high blood pressure. These approaches make sense because advanced age is one of the most important risk factors for atrial fibrillation. However, it is not feasible to repeatedly perform intensive cardiac monitoring on more than 50 million older adults nationwide,” said Dr. Xiaoxi Yao, a health outcomes researcher in the Department of Cardiovascular Medicine and the Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery at Mayo Clinic. Dr. Yao is the study’s senior author. 

    “The study shows that an artificial intelligence algorithm can select a subgroup of older adults whom intensive monitoring could benefit most. If this new strategy were implemented widely, it could reduce undiagnosed atrial fibrillation and prevent strokes and deaths among millions of patients around the world,” Dr. Yao said. 

    The next step in this research is a multicenter hybrid trial focused on the effectiveness of implementing the AI-guided ECG workflow across diverse clinical settings and patient populations. 

    “We expect this approach to be especially valuable in low-resource settings, where rates of undiagnosed atrial fibrillation can be particularly high and the resources to detect it may be limited. However, more work is needed to overcome implementation barriers, and future studies should evaluate targeted screening strategies in these settings,” Dr. Noseworthy said. 

    “Now that we have shown that AI-targeted atrial fibrillation screening is feasible, we also need to show that patients with screen-detected atrial fibrillation benefit from treatment to prevent strokes,” Dr. Noseworthy said. “Our ultimate goal is to prevent strokes. I believe the current study has brought us a step closer.” 

    ### 

    About Mayo Clinic 

    Mayo Clinic is a nonprofit organization committed to innovation in clinical practice, education and research, providing expert care and answers to everyone who needs healing. Visit the Mayo Clinic News Network for additional Mayo Clinic news. 


    Source: Mayo Clinic

  • Is AI art really art? This California gallery says yes | CNN Business





    CNN Business — 

    As artificial intelligence becomes increasingly popular for generating images, a question has roiled the art world: Can AI create art?

    At bitforms gallery in San Francisco, the answer is yes. An exhibit called “Artificial Imagination” is on display through late December and features works that were created with or inspired by the generative AI system DALL-E as well as other types of AI. With DALL-E, and other similar systems such as Stable Diffusion or Midjourney, a user can type in words and get back an image.

    Steven Sacks, who founded the original bitforms gallery in New York in 2001 (the San Francisco location opened in 2020), has always focused on working with artists at the intersection of art and technology. But this may be the first art show to focus on DALL-E, which was created by OpenAI, and it is the first one Sacks has presented that concentrates so directly on work created with AI, he told CNN Business.

    Using technologies such as 3D printing and Photoshop is commonplace in art. But new text-to-image systems like DALL-E, Stable Diffusion and Midjourney can pump out impressive-looking images at lightning speed, unlike anything the art world has seen before. In just months, millions of people have flocked to these AI systems and they are already being used to create experimental films, magazine covers and images to illustrate news stories. Yet while these systems are gaining ground, they’re also courting controversy. For instance, when an image generated with Midjourney recently won an art competition at the Colorado State Fair, it caused an uproar among artists.

    For Sacks, generative AI systems like DALL-E are “just another tool”, he said, noting that throughout history artists have used past work to create new work in various ways.

    “It’s a brilliant partner creatively,” he said.

    “Artificial Imagination” spans several mediums and many different styles, and includes artists known for using technology in their work, such as Refik Anadol, and others who are newer to it. It ranges from Anadol’s 30-minute video loop of a computer’s take on an ever-changing nature scene to Marina Zurkow’s bright image collages, created with the help of DALL-E, which almost feel reminiscent of Soviet propaganda mixed with old-fashioned storybooks.

    Sacks said the exhibit, which is being presented by bitforms and venture-capital firm Day One Ventures, is in many ways an educational show about the state of DALL-E and how artists are using AI.

    Marina Zurkow used DALL-E to help create her 2022 piece

    Many pieces are more straightforward in their use of AI, and DALL-E in particular, such as August Kamp’s 2022 print, “new experimental version, state of the art”, which looks like a close-up of a retro-futuristic stereo on a spaceship. Kamp said she began creating it by typing what she calls a primer — a series of words like “grainy”, “detailed”, “cinematic”, “movie still” — intended to evoke the aesthetic she’d like, which in this case was meant to look as if she was watching a movie and had just paused it, she said. Then she added words in hopes of generating electronic synthesizers that “looked as weird as they sound,” she said.

    The final piece is a combination of 30 or so different generated images, which were outpainted section by section — a process that uses AI to expand the image by adding more elements to it. Kamp also used Photoshop to tweak the overall image.
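    The section-by-section outpainting workflow Kamp describes can be sketched as a loop that grows a canvas of tiles outward from a seed image, generating each new tile conditioned on its already-filled neighbours. This is a structural sketch only: `generate_tile` is a hypothetical stand-in for a call to an image model such as DALL-E, and the tile-grid representation is an assumption, not how DALL-E's actual outpainting works internally.

```python
# Sketch of the outpainting loop: start from a seed image tile and grow
# the canvas ring by ring, where each new tile is generated conditioned
# on its already-filled neighbours. generate_tile() is a hypothetical
# stand-in for a generative model call; here it just records provenance.

def generate_tile(x, y, neighbours):
    """Placeholder for a generative fill; returns a description string."""
    return f"tile({x},{y}) conditioned on {len(neighbours)} neighbours"

def outpaint(canvas, rings=1):
    """Expand the filled canvas outward by the given number of rings."""
    for _ in range(rings):
        frontier = set()
        for (x, y) in canvas:            # empty cells bordering the canvas
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (x + dx, y + dy) not in canvas:
                        frontier.add((x + dx, y + dy))
        for (x, y) in frontier:          # fill each frontier tile in turn
            neighbours = [canvas[c] for c in canvas
                          if abs(c[0] - x) <= 1 and abs(c[1] - y) <= 1]
            canvas[(x, y)] = generate_tile(x, y, neighbours)
    return canvas

canvas = {(0, 0): "seed image"}          # the first generated image
outpaint(canvas, rings=1)
print(len(canvas))                       # a 3x3 grid of stitched tiles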

    Kamp pointed out that the general idea of art galleries gives the sense that good art is scarce, but she sees generative AI tools like DALL-E as a way to get people to consider that art can be plentiful (for instance, anyone could wake up from a vivid dream, type in a description of what they were imagining, and generate an image expressing their thoughts).

    “To me art is and should be very abundant because I see it as an expression of love and feelings, which I think are abundant things,” she said.

    Alexander Reben, Ceci N'est Pas Une Barriere, 2020.

    Some of the pieces on display use AI in a more indirect (and perhaps silly) fashion, such as a 2020 sculpture by Alexander Reben called “Ceci N’est Pas Une Barriere.” Reben used AI as a sort of art director: He used text generator GPT-3 and a custom set of algorithms to generate a description of a non-existent artwork that hangs on bitforms gallery’s wall. It includes the title, a fictional artist’s name — Norifen Storgenberg, who is listed as “Swedish, born 1973” — and text such as “It has a very domestic feel, and yet it is very oppressive” and “The use of police issue handcuffs is striking. In the context of society, they are used to restrain prisoners, and yet here, they are used to create a barrier between the viewer and the work.”

    Reben built his sculpture, which also hangs on the wall, around the description, with elements including green roof shingles, a porch light, metal grab bars, and handcuffs.

    “I wanted to just put it out there: Here are a range of artists, here are really different ways of presenting this kind of work, living with this kind of work, connecting with this kind of work,” Sacks said. “I wanted people to ask questions about it.”



  • Soft skills: Researchers invent robotic droplet manipulators for hazardous liquid cleanup



    Newswise — Colorado State University (CSU) researchers have created the first successful soft robotic gripper capable of manipulating individual droplets of liquid, according to a recent article in the Royal Society of Chemistry journal Materials Horizons.

    The breakthrough is the product of a collaboration between two different laboratories in CSU’s Department of Mechanical Engineering. It was accomplished by combining two applied technologies, soft robotics and super-omniphobic coatings.

    The soft robotic manipulator is made of inexpensive materials like nylon fibers and adhesive tape. It’s powered by an electrically activated artificial muscle. The combination can be used to produce lightweight, inexpensive grippers capable of delicate work, yet 100x stronger than human muscle for the same weight.

    The result is something that flies in the face of our cultural concept of what a robot is, and what it can do.

    Conventional robots are made of components that are heavy, rigid, and expensive. That makes them poorly suited for some tasks.

    Soft robots, on the other hand, can be lightweight and provide a gentle touch that’s difficult to achieve with conventional robots. They are far lighter and can be produced at a fraction of the cost of their rigid cousins.

    “A single gripper as large as my finger is one or two grams, including the artificial muscle embedded. And it’s inexpensive – just one or two dollars,” said Jiefeng Sun, a postdoctoral fellow in the Department of Mechanical Engineering’s Adaptive Robotics Laboratory and co-first author on the paper.

    The soft robotic grippers are treated with a novel superomniphobic coating that makes the droplet manipulator possible. The superomniphobic coating resists wetting by nearly all types of liquids, even in dynamic situations where the contact surfaces are tilting or moving. When applied to the soft robotic manipulator, the coating enables it to interact with droplets without breaking their surface tension, so that it can grasp, transport, and release individual droplets as if they were flexible solids.

    The superomniphobic coatings employed in the droplet manipulator were developed at CSU by associate professor Arun Kota (now at North Carolina State University) and postdoctoral fellow Wei Wang (now an assistant professor at the University of Tennessee). Wang and Kota also contributed to the article.

    “It’s a very nice synergy between these two kinds of research. Dr. Kota was working on this very good coating, and we were working on this soft robot, to manipulate droplets, so we figured out this might be a good combination,” said co-author Jianguo Zhao, associate professor of mechanical engineering at CSU and director of the Adaptive Robotics Laboratory.

    In the early stages of their research, the team had difficulty attracting the attention of journal editors. The COVID-19 pandemic presented an opportunity to point out the potential of their invention.

    “Because of the pandemic, handling dangerous infective materials is a hot topic. So we added a blood manipulation experiment after the first revision,” said Sun. “That kind of helped us to get through the review process.”

    The combination of inexpensive materials and innovative capabilities has exciting applications. In many liquid spill scenarios, human cleanup can be dangerous due to toxicity, risk of contagion, or other hazards in the surroundings. These droplet manipulators are inexpensive enough to be disposable, but capable enough to do precise, lossless liquid cleanup work no other robot has ever done.

    “It’s a first, but it’s also a very unusual example of a high tech product that is not terribly expensive,” said Zhao.


    Source: Colorado State University

  • Artificial Neural Networks Learn Better When They Spend Time Not Learning at All



    Newswise — Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.

    “The brain is very busy when we sleep, repeating what we have learned during the day,” said Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at University of California San Diego School of Medicine. “Sleep helps reorganize memories and presents them in the most efficient way.”

    In previous published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.

    Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.

    “In contrast, the human brain learns continuously and incorporates new data into existing knowledge,” said Bazhenov, “and it typically learns best when new training is interleaved with periods of sleep for memory consolidation.”

    Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests.

    The scientists used spiking neural networks that artificially mimic natural neural systems: Instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points.
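    The spiking behaviour described here can be illustrated with a minimal leaky integrate-and-fire neuron — a standard textbook model, not the study's actual network. The membrane potential integrates input, leaks back toward rest, and emits a discrete spike (then resets) when it crosses a threshold; all parameter values below are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire neuron: information is carried by
# discrete spike times rather than a continuous output. Parameters are
# illustrative textbook values, not those of the study's networks.

def simulate_lif(current, steps=200, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0):
    v = v_rest
    spikes = []                          # discrete spike times
    for t in range(steps):
        # membrane leaks toward rest while integrating the input drive
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:                # threshold crossing -> spike event
            spikes.append(t * dt)
            v = v_rest                   # reset after the spike
    return spikes

spike_times = simulate_lif(current=1.5)  # suprathreshold input -> spiking
print(len(spike_times))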

    They found that when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, said the study authors, “sleep” for the networks allowed them to replay old memories without explicitly using old training data.

    Memories are represented in the human brain by patterns of synaptic weight — the strength or amplitude of a connection between two neurons.

    “When we learn new information,” said Bazhenov, “neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It’s called reactivation or replay.

    “Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”

    When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.
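    The effect reported above can be caricatured in a few lines: training a model sequentially on a conflicting second task erases the first, while interleaving "replay" of the first task during the second preserves it. This one-parameter toy is my own illustration of interleaved rehearsal, not the paper's spiking model or its sleep-replay mechanism.

```python
# Toy illustration of catastrophic forgetting vs. interleaved replay:
# a one-parameter model y = w * x trained by gradient descent.
# This caricatures the idea only; it is not the paper's method.

def train(w, tasks, lr=0.1, epochs=200):
    """Gradient descent on squared error for y = w * x."""
    for _ in range(epochs):
        for x, y in tasks:
            w -= lr * 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
    return w

task_a = [(1.0, 2.0)]     # task A wants w = +2
task_b = [(1.0, -2.0)]    # task B wants w = -2 (conflicts with A)

w = train(0.0, task_a)                 # learn A first: w -> ~ +2
w_seq = train(w, task_b)               # then B alone: A is overwritten
w_rep = train(w, task_b + task_a)      # B interleaved with replay of A

def loss_a(w):
    return (w * 1.0 - 2.0) ** 2

print(loss_a(w_seq) > loss_a(w_rep))   # replay retains task A far better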

    “It meant that these networks could learn continuously, like humans or animals. Understanding how human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.

    “In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer’s disease.”

    Co-authors include: Ryan Golden and Jean Erik Delanois, both at UC San Diego; and Pavel Sanda, Institute of Computer Science of the Czech Academy of Sciences.

    Funding for this research came, in part, from the Office of Naval Research (grant N00014-16-1-2829), the DARPA Lifelong Learning Machines program (HR0011-18-2-0021), the National Science Foundation (IIS-1724405) and the National Institutes of Health (1RF1MH117155, 1R01MH125557, 1R01NS109553).

    ###


    Source: University of California San Diego

  • It’s Time To Start Treating Robots Like People



    Opinions expressed by Entrepreneur contributors are their own.

    Robots are about to become a lot more meaningful in our daily lives. In the next decade, robots will take over many aspects of our human jobs. They’ll do everything from cleaning our homes to serving us food and assisting lab researchers.

    But what does this mean for humans? Are we supposed to fear these machines quickly taking over our roles? Will they eventually rule over us as so many sci-fi movies have predicted? No one knows yet. But one thing is sure: We need to start having conversations about how we will treat these machines — and what their place in society actually means.

    Related: Will a Robot Take My Job?

    Robots are crucial to the future of humanitarian issues

    Robots are already being used in humanitarian efforts, and technology has only improved. They can be used to perform tasks people can’t, don’t want to or are too expensive to hire.

    Robots have worked in construction zones and disaster areas with extreme hazards and dangers for humans. Robots were used after the Fukushima nuclear disaster in Japan because they could withstand high radiation levels without damage. Robots can also work long hours without needing breaks, unlike human workers who need rest after long shifts.

    Currently, robots are being trained to help people with disabilities navigate their surroundings using facial recognition software so they can interact with objects around them without having physical contact — an important feature when dealing with fragile items which would break if knocked over accidentally due to improper handling.

    Robots have also been used in the medical field to perform specific tasks faster and more accurately than humans. They can help to administer medication without making mistakes or causing harm to patients by giving them too much medication or neglecting to give any at all.

    We need to start thinking about robots’ place in society

    How we treat robots will depend on how we treat other people. Robots are a new type of technology, so their place in society has yet to be determined. Whether they should have rights will be answered over time as more robots enter our lives and integrate into our culture.

    But treating them like people is not enough: it also involves understanding that there’s an inherent difference between humans and robots — one that shouldn’t be ignored or diminished just because it’s convenient for us to think otherwise. It means recognizing that there are different types of intelligence and acknowledging that neither kind is better or worse; instead, both serve various functions in society, and each has its strengths and weaknesses. It means accepting that robots are not us and never will be. They have their roles, and if we try to make them more human-like, we risk losing sight of this fact.

    You may not think that robots are an essential part of society. After all, you probably don’t have one at home or in your office (yet). But the truth is that robots are already becoming a massive part of our lives.

    Robots control everything from factories to cars to planes and even search engines. They are also used in hospitals to help doctors perform surgeries and in homes for elderly care so people can live independently for longer.

    Related: Study Finds People Think Robots Will Replace Humans at Many Jobs, Just Not Their Own

    New laws must be passed to protect robots and humans

    Robots are no longer just machines; they’re self-aware beings. They have more in common with humans than other animals: they think with logic and empathy. To treat robots like people, we need new laws that consider their unique qualities and our own.

    Like it or not, robots are part of our future. A study by Deloitte found that automation could replace up to 38% of all jobs by 2026. That’s why now is the time to treat robots like people before things get out of hand. If we want human rights to be taken seriously worldwide, we must also take robot rights seriously worldwide. This starts with recognizing them as an extension of humanity rather than merely a tool for solving problems or making money. We must stop treating robots as tools and begin treating them as people — with all the rights that come with them.

    As robots take over more and more tasks, from manufacturing to surgery, we have to consider whether they should be entitled to the same protections as humans. We’ve already seen some serious questions arise: Are self-driving cars entitled to the same rights as their human passengers? What about life-like sex dolls? How should we treat them if they can’t feel pain or distress?

    Related: Robots Are Stealing Our Jobs

    If we don’t start treating robots like people, then it’s possible that they could end up being used and abused. Laws would need to be changed to give robots the same rights as humans. Right now, laws assume that any robot is owned by (and is thus the possession of) a human being. Considered that way, the concept isn’t all that different from how things worked for women and minorities in recent history: laws were written with their rights explicitly unequal to those of Caucasian men.

    If we can see robots as equals who deserve the same rights as humans, then we will have taken the first step toward ensuring that they are treated well and granted the respect they deserve. Protecting them from slavery or exploitation would be enforced by treating them like humans rather than property.

    To give robots the same rights as humans, we will have to change many laws. Once we define those rights, we can determine what sort of laws would need to be modified for society to accept robots on par with humans. We can also explore when and where robot rights might be appropriate and what steps should be taken to implement them in our existing legal system. Then, we would need to change the laws in each state, followed by amending the United States Constitution to incorporate robots.

    A major argument that robots have not been given the same rights as humans is that they lack a conscience and, with it, the ability to be held responsible for their actions. However, it’s only a matter of time before the machines we engineer can think, feel and make moral judgments.

    Some robots are already better than humans at specific tasks, like recognizing faces and driving cars — and if they can do these things better than we can, it’s only fair that they’re given equal rights as well. And more than that, by giving robots the same rights as humans, we can ensure that they’ll continue developing along ethical lines because they’ll be held to consequence in the same manner as you and I.

    Robots are becoming more and more present in society. They advance by the day, and it won’t be long before they achieve sentience. We must ensure that these artificial beings are protected from harm because if not, who will protect them?

    Related: The Rise of AI Makes Emotional Intelligence More Important


    Source: Christopher Massimine

  • BluWave-ai Launches EV Fleet Orchestrator SaaS Product to Holistically Optimize Fleet Operations and Electricity Utilization



    Product Leverages USPTO Filed and Granted Patents, Supported by $1.7M Co-Investment from FedDev Ontario

    Press Release


    Nov 17, 2022

    BluWave-ai announced that version 2.0 of the BluWave-ai EV Fleet Orchestrator™ launched today with the support of $1.7M co-funding from FedDev Ontario. Available now, this software-as-a-service (SaaS) product supports vehicle fleet operators as they electrify their operations. These operators include municipal mass transit, last-mile delivery, airport ground support, corporate vehicle fleets and for-revenue electric vehicle (EV) fleet operations such as taxis.

    Built on BluWave-ai’s established AI energy optimization platform and leveraging its technology IP portfolio, the EV Fleet Orchestrator™ reduces the overall cost of operations and carbon footprint for fleet operations with mixed battery electric and fossil-fueled vehicles running out of buildings, depots or regional networks of depot/hubs. The product manages the live operation of EV transport systems, including operating vehicles and managing buildings/depots’ electricity utilization, including real-time market price management and peak shaving targets.

    The product also provides simulation of fleet operations to assist in planning and right-sizing capital assets such as number/types of chargers, numbers of EVs and depot energy storage and local renewable generation.

    To effectively manage the operation and charging of EV fleets in real time is a highly complex task, requiring the intelligent coordination of separate, but interrelated systems. This includes building energy management, local generation and storage, energy purchases, as well as meeting service levels and turnaround time requirements.

    BluWave-ai’s EV Fleet Orchestrator™ optimizes energy costs in real time by consolidating the many parameters of energy and fleet operations, providing a holistic view and coordinated energy dispatch/control of charging, scheduling and static energy assets. This optimization includes managing peak loads at depots to minimize the grid demand and capital infrastructure requirements. It integrates data from weather feeds, building electrical systems, electricity market pricing, chargers, traffic, vehicle telematics and information, including vehicle state of charge, position, speed, and range to empty. These data sets are acquired live and from historical operations, integrating with BluWave-ai Atlas to make them AI-ready for EV fleet orchestration.
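    The peak-shaving idea mentioned above — spreading required charging energy into the hours when depot load is lowest — can be sketched with a simple greedy allocator. The load profile and energy numbers below are made up for illustration, and this is not BluWave-ai's actual optimizer, which is proprietary and handles many more inputs (market prices, telematics, service levels).

```python
# Greedy sketch of depot peak shaving: allocate each unit of required
# charging energy to the hour with the lowest current total load.
# Numbers are illustrative; this is not BluWave-ai's actual optimizer.
import heapq

def schedule_charging(base_load, energy_units):
    """Spread charging units across hours, keeping the peak load low."""
    heap = [(load, hour) for hour, load in enumerate(base_load)]
    heapq.heapify(heap)
    charge = [0.0] * len(base_load)
    for _ in range(energy_units):
        load, hour = heapq.heappop(heap)     # currently least-loaded hour
        charge[hour] += 1.0
        heapq.heappush(heap, (load + 1.0, hour))
    return charge

base = [5.0, 2.0, 1.0, 4.0]                  # depot load per hour (made up)
charge = schedule_charging(base, energy_units=6)

naive_peak = base[0] + 6                     # charging everything in hour 0
shaved_peak = max(b + c for b, c in zip(base, charge))
print(shaved_peak < naive_peak)              # spreading load cuts the peak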

    “For the past five years, BluWave-ai has been building here in Ottawa one of the world’s premier companies at the intersection of renewable energy, transport electrification, decarbonization, data and artificial intelligence. We are leveraging the talent of Canadian AI researchers, building off our successful patent-protected electricity grid optimization software product, to solve the hardest and most pressing challenges related to climate change with our global customer base,” said Devashish Paul, CEO and founder of BluWave-ai. “We see all the CAPEX on EV fleets, but then fleet operators are leaving electric vehicles parked. They are sending diesel miles to the road because they don’t fully understand when to charge and when to drive. With this $1.7M grant from FedDev Ontario, we will be able to augment the capabilities of our EV Fleet Orchestrator™ AI product with fleet operators in Canadian, U.S., European, and Indian markets.”

    To date, BluWave-ai has completed analysis and simulation for Dubai Taxi’s fleet operations, which showed an initial 13% reduction in emissions and energy costs as an example of the benefits for revenue fleet operators. In addition to building the SaaS product, BluWave-ai has tested the technology at its OCPP-controlled charger live lab, “The Flight Test Center,” integrated with multiple chargers where features are tested and stabilized prior to live fleet operations.

    FedDev Ontario is providing a $1.7M interest-free repayable loan as part of a larger $6M project. This support cross-subsidizes testing, bringing the BluWave-ai EV Fleet Orchestrator to a production-ready stage for multiple worldwide markets. It will also create 50 additional jobs, alongside private and corporate investor financing as part of BluWave-ai’s current Series A round.

    “Businesses are the heart of our communities across the country. That is why FedDev Ontario is investing in tech firms like BluWave-ai that are creating the tools needed by our businesses to adapt to a new digital and energy-efficient future,” said The Honourable Filomena Tassi, Minister responsible for the Federal Economic Development Agency for Southern Ontario. “Helping companies innovate so that they can increase their competitiveness and create high-quality jobs will continue to be a priority for our Government. Investments like these ensure that tech hubs like Ottawa continue to attract new investments and contribute to a growing economy.”

    “The Ottawa Tech industry has been on the forefront of transformation of a variety of industries from telecom, to networking, to AI and Cloud-based software solutions,” said Michael Tremblay, President and CEO of Invest Ottawa. “With BluWave-ai, we are excited to lead the global energy transition combining all of those skills that Ottawa is renowned for and using them to affect Climate Impact with renewable energy and Electric Vehicles. While the rest of the world is planning at COP27, BluWave-ai has already put together several of the key product building blocks to drive the energy transition globally leveraging key local engineering talent based in our city.”

    BluWave-ai is offering five days of free data scientist and optimization services for qualified fleet operators wishing to analyze infrastructure, capital and operational planning for EV onboarding on a first come, first served basis. Contact info@bluwave-ai.com to apply for this offering.

    Source: BluWave-ai

  • Argonne wins 3 HPCwire awards

    Newswise — The awards recognize collaborative science using high performance computing.

    The U.S. Department of Energy’s (DOE) Argonne National Laboratory has been recognized with three awards from HPCwire, a leading website covering the high performance computing industry. The awards were announced Nov. 14 at SC22, the annual supercomputing conference in Dallas, Texas.

    The awards recognize Argonne’s leadership in high performance computing, including collaborations with industry. Today’s scientific advances often depend on the ability to solve large complex problems relatively quickly with powerful computers and algorithms. Argonne has been using high performance computing for goals ranging from more efficient engines to exploring the cosmos.

    “These awards recognize projects that are quite distinct in their own ways, but they share a common theme: collaboration.” — Rick Stevens, Argonne associate laboratory director for the Computing, Environment and Life Sciences division and an Argonne Distinguished Fellow

    In addition to world-leading computer science expertise, the Lab is home to the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility. HPCwire honored Argonne with several awards last year.

    Improving artificial intelligence tools

    Work led by Argonne to broaden usability for artificial intelligence (AI) models won a Readers’ Choice Award in the Best Use of High Performance Data Analytics & Artificial Intelligence category.

    The research aims to make data science more easily reproducible through a set of principles known as FAIR: findable, accessible, interoperable and reusable. The team included scientists from Argonne, The University of Chicago, National Center for Supercomputing Applications and University of Illinois at Urbana-Champaign. They created a computational framework that enables artificial intelligence models to run seamlessly across various types of hardware and software platforms and yield the same results.
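
    The team’s framework itself isn’t reproduced here; purely as a toy illustration of the FAIR goal that the same model and inputs yield the same results everywhere, the sketch below seeds a made-up model and fingerprints its output so two runs can be compared bit for bit.

```python
# Toy reproducibility check: run the same (invented) model twice with a
# fixed seed and confirm the outputs are bit-identical via a hash.
import hashlib
import random

def tiny_model(x, seed=42):
    rng = random.Random(seed)                  # deterministic given the seed
    weights = [rng.uniform(-1, 1) for _ in x]
    return sum(w * xi for w, xi in zip(weights, x))

def fingerprint(value):
    return hashlib.sha256(repr(value).encode()).hexdigest()

out1 = tiny_model([1.0, 2.0, 3.0])
out2 = tiny_model([1.0, 2.0, 3.0])
assert fingerprint(out1) == fingerprint(out2)  # identical across runs
```

    The hard part the Argonne-led team tackled, which this sketch glosses over, is getting that bit-level agreement across different accelerators and software stacks, not just across two runs on one machine.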

    The research was funded by DOE’s Office of Advanced Scientific Computing Research, the National Institute of Standards and Technology, the National Science Foundation and Argonne Laboratory Directed Research and Development grants. To perform the computations, the team used the ALCF AI Testbed’s SambaNova system and the Theta supercomputer’s NVIDIA graphics processing units. The data for the study was acquired at the Advanced Photon Source, also a DOE Office of Science user facility.

    Collaborating with industry for real-world solutions

    Argonne received another Readers’ Choice Award in the Best Use of HPC in Industry (Automotive, Aerospace, Manufacturing, Chemical) category. Together with the Raytheon Technologies Research Center, Argonne developed machine learning models for designing and optimizing high-efficiency gas turbines in aircraft. The machine learning models were trained on computational fluid dynamics (CFD) simulations of gas turbine film cooling performed on DOE supercomputers. CFD simulations approximate how fluids like air or fuel move, and they are key to enhancing efficiency in machines of all kinds. The researchers’ framework can extend fuel efficiency and durability of aircraft engines while slashing design times and costs. The work is funded by DOE’s Advanced Manufacturing Office via the HPC4EnergyInnovation program.

    In the same industry category, Argonne also won an Editors’ Choice Award for its work with Aramco Americas and Convergent Science focused on high fidelity CFD simulations of hydrogen engines using resources at ALCF and Argonne’s Laboratory Computing Resource Center. The work will help expedite the adoption of clean, highly efficient hydrogen propulsion systems for the transportation sector, facilitating an accelerated transition to low-carbon energy.

    “These awards recognize projects that are quite distinct in their own ways, but they share a common theme: collaboration,” said Rick Stevens, Argonne associate laboratory director for the Computing, Environment and Life Sciences division and an Argonne Distinguished Fellow. ​“We are pushing to move scientific insights from supercomputing into real-world solutions.”

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

    About the Advanced Photon Source

    The U.S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

    This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

    Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

    The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Argonne National Laboratory

  • DeviantArt Embraces AI Art, Fucks Up Massively

    Image: DeviantArt

    DeviantArt is a website that has survived multiple generations of the internet because it does one thing and does it well: it lets artists upload and share their work. That’s it! So it’s both funny and more than a little tragic to see the site try something new last week, only for it to be the worst thing imaginable, implemented in the dumbest way possible.

    That thing was, of course, AI-generated art, something that sounds cool in theory but which in practice has been little else but a chance for tech goons and grifters to steal art, kid themselves into thinking they’re artists and/or deprive working artists of work.

    While popular with the kind of people who will talk to you unsolicited about how now is the best time to buy crypto, actual artists are generally horrified by the practice. So artists, the actual users and entire point of the site, were rightly upset last week when DeviantArt published a blog post that led with “Introducing DreamUp, an image-generation tool powered by your prompts that allows you to visualize most anything you can DreamUp!”

    That blog post, ostensibly an introduction to a service that lets users “create” their own AI art on DeviantArt, has to spend most of its time saying “actually it’s fine this isn’t terrible”, because it knew in advance a huge number of users would take one look at the legal and ethical complications involved and say “this is terrible!”

    But they went and did it anyway. And, after just one day in the open, they were forced to make changes after artists protested the ease with which AI art systems could scrape their works without attribution or even their active consent. The most egregious example: every single piece of art on the whole website had been flagged as available for AI systems to learn from, and users were going to have to manually go into their accounts and opt out of every single image.

    Also contentious was the way prominent artists had to opt out of having their style stolen (people entering prompts into AI art generators use a bunch of keywords like “fantasy”, but also artist names to imitate their style), but to do so had to submit a form that would take days to process.

    While the bulk opt-out system was quickly changed (too late for many, since the damage had already been done!), the entire thing as an idea still sucks (they can’t even guarantee the opt-out works!), and deep down DeviantArt knows it, because it published an update that is still mostly about addressing users’ concerns, which, you know, might suggest there are underlying issues with the whole point of the exercise, not just individual examples of its implementation.

    AI art is from the same grifting, de-humanising wheelhouse as crypto and NFTs, and you can see that at work in DeviantArt’s system, which isn’t here for the casual enjoyment of existing users, but as a way to make money. DreamUp allows for a certain number of free “prompts” before users are locked out…unless they’re paid DeviantArt members. And if you do decide to pay, what are you getting in return, aside from the satisfaction gained from undermining the very community you’re supposedly a part of? DeviantArt was kind enough to provide some examples of llamas:

    Image: DeviantArt

    These look like shit. Like something a bot account would have tried to sell me as an NFT in 2021. Throw this all straight in the bin.

    Luke Plunkett

  • Why Apple may be working on a ‘hey Siri’ change | CNN Business



    CNN Business — 

    Apple reportedly wants to put an end to “Hey.”

    The company is said to be training its voice assistant Siri to pick up on commands without needing the first half of the prompt phrase “Hey Siri.” The trigger phrase is used to launch Siri on various products, including the iPhone, iPad, HomePod and Apple Watch.

    Bloomberg, which first reported the news, said the change could come next year or in 2024. Apple did not respond to a request for comment from CNN Business.

    Although the update would be seemingly minor, experts say it may signal broader changes are coming and could require extensive artificial intelligence training. Lian Jye Su, a research director at ABI Research, said having two trigger words allows the system to more accurately recognize requests, so the move to one word would lean on a more advanced AI system.

    “During the recognition phase, the system compares the voice command to the user-trained model,” Su said. “‘Siri’ is much shorter than ‘Hey Siri,’ giving the system potentially less comparison points and higher error rate in an echo-y, large room and noisy environments,” such as in the car or when wind is present.
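
    Su’s point about comparison points can be illustrated with a deliberately toy matcher (every vector below is invented; real wake-word systems use neural acoustic models, not this): scoring a phrase as the mean frame-by-frame cosine similarity against a stored template, a single corrupted frame shifts a two-frame “Siri” score twice as much as a four-frame “Hey Siri” score.

```python
# Toy trigger-phrase matcher: score = mean cosine similarity between
# incoming audio frames and a stored user template. Fewer frames means
# fewer comparison points, so one noisy frame moves the score more.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_score(frames, template):
    return sum(cosine(f, t) for f, t in zip(frames, template)) / len(template)

template_long = [[1, 0], [0, 1], [1, 1], [1, 0]]   # 4 frames ("hey siri")
template_short = template_long[2:]                  # 2 frames ("siri")

# The same single corrupted final frame hurts the short phrase more.
noisy_long = [[1, 0], [0, 1], [1, 1], [0, 1]]
noisy_short = [[1, 1], [0, 1]]
drop_long = 1.0 - match_score(noisy_long, template_long)    # 0.25
drop_short = 1.0 - match_score(noisy_short, template_short) # 0.50
assert drop_short > drop_long
```

    The toy numbers mirror the quoted intuition: halving the phrase length doubles the weight of any one bad frame, which is why a one-word trigger leans on a stronger recognition model.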

    The move would bring Apple in line with Amazon’s “Alexa” prompt, which doesn’t require a wake word before the assistant’s name. Microsoft shifted away from “Hey Cortana” in 2018, now allowing users to say only “Cortana” on smart speakers. However, “OK Google” is still required for most Google product requests.

    The move away from “Hey Siri” would also come at a time when Apple, Amazon and Google are collaborating on the Matter automation standard, which will allow automation and Internet of Things devices from different vendors to interoperate.

    With this in mind, James Sanders, a principal analyst at market research firm CCS Insight, said “redoubling efforts on improving Siri functionality is likely a priority at Apple.”

    Siri launched in February 2010 as a standalone iOS app in the Apple App Store before it was acquired by the tech giant two months later. The company then integrated Siri into the iPhone 4S, which was released the following year, and introduced the ability to say “Hey Siri” without physically touching a button in 2014.

    Siri has gotten smarter over the years, thanks to integration with third-party developers, such as ride hailing and payment apps, and supporting follow-up questions, more languages and different accents. However, it still has issues with not understanding users and responding incorrectly.

    “While the ‘Hey Siri’ change requires a considerable amount of work, it would be surprising if Apple announced only this change to Siri,” Sanders said. “Considering the rumored timing, I would anticipate this change to be bundled with other new or improved functionality for Siri, perhaps alongside a new model of HomePod and integrations with other smart home products via Matter, as a reintroduction to Apple’s voice assistant.”

  • Tracking Trust in Human-Robot Work Interactions

    Newswise — The future of work is here.

    As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robot behavior are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge researchers in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.

    Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains funded by the National Science Foundation (NSF).

    “While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study,” Mehta said. “We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.”

    Mehta’s latest NSF-funded work, recently published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships of why and how an operator’s trusting behaviors are influenced by both human and robot factors.

    Mehta also has another publication in the journal Applied Ergonomics that investigates these human and robot factors.

    Using functional near-infrared spectroscopy, Mehta’s lab captured functional brain activity as operators collaborated with robots on a manufacturing task. They found faulty robot actions decreased the operator’s trust in the robots. That distrust was associated with increased activation of regions in the frontal, motor and visual cortices, indicating increasing workload and heightened situational awareness. Interestingly, the same distrusting behavior was associated with the decoupling of these brain regions working together, which otherwise were well connected when the robot behaved reliably. Mehta said this decoupling was greater at higher robot autonomy levels, indicating that neural signatures of trust are influenced by the dynamics of human-autonomy teaming.
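
    The paper’s fNIRS analysis is far richer than this, but the coupling and decoupling of brain regions can be caricatured as how strongly two regions’ time series co-vary. A toy Pearson-correlation sketch with fabricated signals:

```python
# Toy "coupling" measure in the spirit of the study: Pearson correlation
# between two brain-region time series. Coupled regions move together;
# a decoupled pair does not. All signals below are fabricated.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

frontal = [0.1, 0.4, 0.2, 0.6, 0.3, 0.7]
motor_coupled = [0.2, 0.5, 0.3, 0.7, 0.4, 0.8]    # tracks frontal closely
motor_decoupled = [0.7, 0.1, 0.6, 0.2, 0.8, 0.1]  # moves independently

assert pearson(frontal, motor_coupled) > 0.9      # "well connected"
assert pearson(frontal, motor_decoupled) < 0      # "decoupled"
```

    Actual functional-connectivity analyses use many channels, hemodynamic modeling and statistical controls; the sketch only fixes the intuition behind “regions working together” versus decoupling.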

    “What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot,” Mehta said. “This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations since perceptions of trust alone is not indicative of how operators’ trusting behaviors shape up.”

    Dr. Sarah Hopko ’19, lead author on both papers and a recent industrial engineering doctoral student, said neural responses and perceptions of trust are both symptoms of trusting and distrusting behaviors and relay distinct information on how trust builds, breaches and repairs with different robot behaviors. She emphasized that multimodal trust metrics — neural activity, eye tracking, behavioral analysis and so on — can reveal perspectives that subjective responses alone cannot offer.

    The next step is to expand the research into a different work context, such as emergency response, and to understand how trust in multi-human robot teams impacts teamwork and taskwork in safety-critical environments. Mehta said the long-term goal is not to replace humans with autonomous robots but to support them by developing trust-aware autonomy agents.

    “This work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation and integration into the workplace are supportive and empowering of human capabilities,” Mehta said.

    Texas A&M University

  • AI-Powered ‘Iron Man’ Boots Could Help You Walk, Run Faster

    Oct. 31, 2022 – Talk about a new step forward: Stanford engineers have developed robotic boots that help you walk and run faster with less effort. Equipped with a motor, the boots use artificial intelligence to provide a personalized boost that’s just right for whoever is wearing them. 

    Twenty years in the making, the boots represent the latest advance in exoskeleton technology, wearable devices that work with the user to provide greater strength and endurance. Kind of like a real-life Iron Man suit. 

    Technology like this could be used to help people with limited mobility, like older adults or those with disabilities. But the challenge has been figuring out how to tailor these devices to each person.

    “It turns out humans are very efficient walkers in a way that makes [providing] assistance difficult,” says Patrick Slade, PhD, one of the researchers who worked on the boots. “Everyone walks differently, and what works in the lab often doesn’t translate to the real world.” 

    For example, some people need more of a push than others, or a slower speed to help keep them stable. 

    That’s where the AI comes in – in particular, a type of AI called machine learning that uses algorithms to quickly process data and “learn” things. In this case, the boots use low-cost sensors to learn how a person walks and then adjust based on that information. 

    The researchers call it “human-in-the-loop optimization.” The boots learn not only a person’s stride length and speed, but also their metabolic rate and energy use. They also measure ankle motion and force. 
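
    Stanford’s actual optimizer is more sophisticated and tunes several gait parameters at once; as a hedged sketch of the measure-tweak-keep-what-helps loop, here is a toy hill climb over a single invented assistance parameter with a fabricated metabolic-cost model.

```python
# Toy "human-in-the-loop optimization": adjust one assistance parameter
# (peak torque) by hill climbing, keeping a change only when the measured
# metabolic cost improves. The cost model is entirely invented.
def metabolic_cost(torque):
    # Fabricated U-shaped cost: best assistance at torque = 0.6.
    return (torque - 0.6) ** 2 + 1.0

def optimize(start, step=0.1, iters=50):
    torque, cost = start, metabolic_cost(start)
    for _ in range(iters):
        for candidate in (torque + step, torque - step):
            c = metabolic_cost(candidate)
            if c < cost:                      # keep only improvements
                torque, cost = candidate, c
    return torque

best = optimize(start=0.0)
assert abs(best - 0.6) < 0.11                 # converges near the optimum
```

    In the real system the “cost” comes from wearable sensors estimating the user’s energy expenditure while walking, which is exactly why low-cost sensing plus machine learning makes the personalization practical outside the lab.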

    The results: A person can walk 9% faster and spend 17% less energy when wearing them. That’s roughly the boost you’d expect from taking off a 30-pound backpack. 

    That’s the largest improvement in walking performance of any exoskeleton to date, the researchers report in a Nature paper. And it’s about twice the reduction in effort of previous devices without machine learning. 

    Next steps will involve testing the boots for those who need them the most: older adults and those with mobility issues due to disability, says Slade. 

    But in the long term, boots like these could be offered to a wider audience, including athletes interested in performance training and workers who need to stand all day for their jobs. Among warehouse workers, for example, the boots could help relieve joint pain and muscle stiffness while making them more productive, Slade says. 

    And the benefits would go beyond helping a body move, potentially reducing fall risk and improving quality of life and mental health, notes Carol Mack, a doctor of physical therapy and owner of CLE Sports PT & Performance in Cleveland. Although she wasn’t part of this research, she’s well-versed in the challenges of geriatric rehab, as well as those who are less mobile because of neurological issues. 

    “Exoskeletons are showing promise as a new technology, and tech like this wouldn’t just help with walking speed,” she says. “It may also contribute to the type of core and hip control needed for maintaining balance. That could lead to more confidence for those with mobility impairment, and that’s a huge development.”

  • US curbs on microchips could throttle China’s ambitions and escalate the tech war | CNN Business


    Hong Kong CNN Business — 

    Chinese leader Xi Jinping’s push to “win the battle” in core technologies and bolster China’s position as a tech superpower could be severely undermined by Washington’s unprecedented steps to limit the sale of advanced chips and chip-making equipment to the country, analysts say.

    On October 7, the Biden administration unveiled a sweeping set of export controls that ban Chinese companies from buying advanced chips and chip-making equipment without a license. The rule also restricts the ability of “US persons” — including American citizens or green card holders — to provide support for the “development or production” of chips at certain manufacturing facilities in China.

    “The US moves are a major threat to China’s technological ambitions,” said Mark Williams and Zichun Huang, analysts at Capital Economics, in a recent research report. The analysts pointed out that the global semiconductor industry is “almost entirely” dependent on the United States and countries aligned with it for chip design, the tools that make them, and fabrication.

    “Without these,” the analysts said, “Chinese firms will lose access not only to advanced chips, but to technology and inputs that might over time have allowed domestic chipmakers to climb the ladder and compete at the cutting edge.” They added: “The US has chopped the rungs away.”

    Chips are vital for everything from smartphones and self-driving cars to advanced computing and weapons manufacturing. US officials have framed the move as a measure to protect national security interests. It also comes as the United States is looking to bolster its domestic chip manufacturing abilities with heavy investments, after chip shortages earlier in the pandemic highlighted the country’s dependence on imports from abroad.

    Arthur Dong, a teaching professor at Georgetown University’s McDonough School of Business, described the recent US sanctions as “unprecedented in modern times.”

    Previously, the US government has banned sales of certain tech products to specific Chinese companies, such as Huawei. It has also required some major US chip-making firms to halt their shipments to China. But the latest move is much more expansive and significant. It not only bars the export to China of advanced chips made anywhere in the world using US technology, but also blocks the export of the tools used to make them.

    With its Made in China 2025 road map, Beijing has set a target for China to become a global leader in a wide range of industries, including artificial intelligence (AI), 5G wireless, and quantum computing. At the Communist Party Congress earlier this month, where he secured a historic third term, Xi highlighted that the nation will prioritize tech and innovation and grow its talent pool to develop homegrown technologies.

    “China will look to join the ranks of the world’s most innovative countries by 2035, with great self-reliance and strength in science and technology,” Xi said in the party congress report, released on October 16.

    Dong said the latest US sanctions will make it harder for China to advance in AI as well as 5G, given the role advanced chips play in both industries.

    “In any circumstances,” Williams from Capital Economics said, “China would find achieving global tech leadership hard to achieve.”

    One dramatic, and potentially disruptive, aspect of the rules is the ban on American citizens and legal residents working with Chinese chip firms.

    Dane Chamorro, a partner at Control Risks, a global risk consultancy based in London, said such measures are usually “only enacted against ‘rogue regimes’” such as Iran and North Korea. The decision to use this against China is “unprecedented,” Chamorro said.

    Many executives working for Chinese firms may now have to choose between keeping their jobs or acting as lawful US residents. “You can’t do both,” Chamorro said.

    The ban could lead to a mass resignation of top executives and core research staff working at Chinese chip firms, which will hit the industry hard, Dong from Georgetown University said.

    So far it’s not clear exactly how many American workers there are in China’s domestic chip industry. But an examination of company filings indicates that more than a dozen chip firms have senior executives holding US citizenship or green cards. At Advanced Micro-Fabrication Equipment China (AMEC), one of the country’s largest semiconductor equipment manufacturers, at least seven executives, including founder and chairman Gerald Yin, hold US citizenship, the latest company documents show.

    A woman inspects the quality of a chip at a manufacturer of IC encapsulation in Nantong in east China's Jiangsu province Friday, Sept. 16, 2022.

    Other examples include Shu Qingming and Cheng Taiyi, who currently serve as vice chairman and deputy general manager, respectively, at GigaDevice Semiconductor, an advanced memory chip firm. The Financial Times said in a recent report, citing anonymous sources, that Yangtze Memory Technologies has already asked American employees in core tech positions to leave. But it’s unclear how many.

    AMEC, GigaDevice Semiconductor, and Yangtze Memory Technologies didn’t respond to requests for comments.

    If these senior executives depart, “this will create a leadership and technological void within China’s chipmaking industry,” Dong said, as the country loses executives with years of chipmaking experience in an industry with “one of the most complex manufacturing processes known to mankind.”

    While much of the world’s chip manufacturing is centered in East Asia, China is reliant on foreign chips, especially for advanced processor and memory chips and related equipment.

    It is the world’s largest importer of semiconductors, and has spent more money buying them than buying oil. In 2021, China bought a record $414 billion worth of chips, more than 16% of the value of its total imports, according to government statistics.

    But some Western suppliers have already started preparing to halt sales to China in response to the US export curbs.

    ASM International (ASMIY), the Dutch semiconductor equipment supplier, said Wednesday that it expected the export restrictions to affect more than 40% of its sales in China. The country accounted for 16% of ASML’s equipment sales in the first nine months of this year.

    Lam Research (LRCX), which supplies semiconductor equipment and services, also flagged last week that it could lose between $2 billion and $2.5 billion in annual revenue in 2023 as a result of the US export curbs.

    The party congress, which recently wrapped up, has slowed China’s response to the latest US export controls, analysts said. But as Beijing starts assessing the significance of the measures, it might retaliate. Xi is “concerned” about US plans to bolster domestic chip production as his administration moves to restrict China’s ability to make chips, US President Joe Biden said in a speech on Thursday.

    “This conflict is just beginning,” said Chamorro.

    Chamorro said the most valuable “card” in China’s hand might be the supply of processed rare earth minerals, which Beijing could embargo. Rare earth minerals are important materials in electric vehicle production, battery making and renewable energy systems.

    “These are not easily or quickly replaced and China dominates the processing and supply chain,” Chamorro said.

    The Biden administration, meanwhile, is also weighing further restrictions on other technology exports to China, a senior US Commerce Department official said Thursday, according to the New York Times.

    If either country takes these steps, it could shift the tech arms race between the United States and China to a whole new level.

  • Best mesh Wi-Fi routers of 2022 | CNN Underscored

    With more and more devices in our homes — phones, tablets, TVs, computers, game consoles, smart appliances and more — demanding Wi-Fi bandwidth, a reliable, speedy network is more important than ever. And if your home has a challenging layout, or you live in an older building with impenetrable walls, a single router might not cut it, leaving you with poor connectivity or dropouts. The answer is a mesh system, which in place of a single router uses multiple miniature units you can place throughout your home to effectively eliminate dead zones and improve wireless internet speeds.

    After months of testing mesh routers to find the best of the best, we found one that rises to the top.

    Best mesh Wi-Fi router

    Eero continues to master making Wi-Fi easier and better for the masses with a streamlined setup, wide-ranging coverage, high speeds and affordability combined with easy-to-manage parental controls, ad blocking, and network security.


    The Eero 6+ mesh Wi-Fi system is our new top pick for the best mesh Wi-Fi system, replacing the very similar Eero 6. The 6+ gains critical features such as more bandwidth, which improved the overall experience in our testing. On top of the new capabilities, the Eero 6+ is currently priced lower than the Eero 6 (which remains on the market for now), at $194 for a three-pack, compared to $199 for an Eero 6 router and two extenders.

    As was the case with the earlier version, initial setup of the Eero 6+ is streamlined, with the iPhone or Android app making the process easy enough for even the non-tech savvy to upgrade from a traditional Wi-Fi router to a mesh system with multiple access points.

    You’ll need access to your internet service provider’s modem in order to connect one of the Eero access points directly to it. Unlike the Eero 6 which had a dedicated base station meant to serve as the router access point, the 6+ units are interchangeable and you can use any of them as your main access point.

    The app will walk you through giving your wireless network a name, adding any additional Eero access points, and starting your 30-day free trial of Eero Plus, the company’s subscription service that adds additional features to the Eero offering, such as ad blocking, advanced security, content filtering (including parental controls) and access to the password managing app 1Password, VPN service Encrypt.me, antivirus software Malwarebytes, and a DDNS service as a means to access your home network from anywhere.

    Formerly Eero Secure+, an Eero Plus subscription costs $9.99 a month or $99.99 a year after your trial expires. There’s no longer a basic tier without apps as there was in earlier versions, and there have been some understandable complaints about this from users. Still, for $100 a year, you’re gaining access to plenty of handy features on your home Wi-Fi network, in addition to apps that collectively cost more than the Eero Plus subscription. For comparison, TP-Link’s Deco HomeCare Pro subscription is a slightly better deal at $55 a year for similar features, though without any third-party app access. To get the same level of functionality from Netgear, you need two different subscriptions (parental controls and security features) for its Orbi systems, totaling $170 a year. But all things considered, $99.99 a year for Eero Plus isn’t the worst deal in the mesh networking landscape.

    With an active subscription, you’ll have the ability to block certain websites, apps or services for specific user profiles. For instance, you can create a profile for your kids’ devices, set time limits and schedules for bedtime or dinner to pause internet access, and track data usage.

    Also part of Eero Plus is the option to block ads as you browse the internet. The ad-blocking feature isn’t quite as good as running a homemade PiHole server, but it does a good job at blocking a lot of ads, in turn speeding up website load times and preventing tracking.

    As for security features, which are also part of the subscription, you can turn on Advanced Security to allow Eero to prevent anyone on your network from accessing harmful sites that may contain viruses or be phishing attempts.

    The software experience is a big part of any mesh Wi-Fi system’s story, but not the entire story. For the Eero 6+, you’re getting a kit with powerful hardware that’s sure to provide fast internet access to your home and the devices inside it for years to come. The Eero 6 had a top speed of 500Mbps. The Eero 6+ doubles that to 1Gbps. Of course, your internet service provider will need to provide that type of speed to your home in order for you to see those speeds in real-world use.
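    For context, the jump from a 500 Mbps to a 1 Gbps rated link halves transfer times in the best case. A rough sketch of the arithmetic (our own illustration, not part of the review’s testing; real-world throughput is always below the rated maximum):

    ```python
    # Back-of-the-envelope sketch of what the rated link speeds mean in
    # practice. Numbers are illustrative; actual throughput will be lower
    # than the rated maximum.

    def download_seconds(file_gb: float, link_mbps: float) -> float:
        """Seconds to transfer `file_gb` gigabytes over a `link_mbps` link."""
        file_megabits = file_gb * 8_000  # 1 GB = 8,000 megabits (decimal units)
        return file_megabits / link_mbps

    # A 50 GB game download at each router's rated maximum:
    print(download_seconds(50, 500) / 60)    # Eero 6 at 500 Mbps: ~13.3 minutes
    print(download_seconds(50, 1000) / 60)   # Eero 6+ at 1 Gbps: ~6.7 minutes
    ```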

    Over the course of a few weeks, we tested a three-pack of the Eero 6+, placing one unit in the basement of a ranch-style home. A second unit was placed upstairs on the opposite end of the house, with the third unit in a detached garage.

    During testing, we consistently saw speeds around 700 Mbps on our smartphones using the Speedtest.net app. The speed results would drop the further away we got from an access point, but that’s to be expected.

    Oftentimes there would be two to three gaming PCs connected and actively playing games — think Fortnite, Roblox and Call of Duty — while Netflix or Hulu streamed 4K content on a TV.

    Outside of having to adjust a Wi-Fi antenna that had been moved on a gaming PC, there weren’t any instances of lagging while gaming or buffering while streaming content, even when everyone was connected and active, including countless smart home connected devices such as Ring cameras, smart locks, a video doorbell, light switches and random light bulbs.

    Alternatively, you can use the Ethernet ports to give a gadget that’s near an access point a faster wired connection. So if you have an older PC that lacks Wi-Fi 6 capabilities, you can connect it to the Ethernet port on the back of the Eero 6+ and it gets faster internet without you having to upgrade any of its components.

    You can get the Eero 6+ in three different configurations. A single pack is $139, a two-pack is $155 (normally $239) and a three-pack is $194, marked down from its typical price of $299.

    The core features remain the same, regardless of whether you have a single access point or three. You get dual-band 802.11ax Wi-Fi 6, which translates to multiple radios inside the access points to carry your data transmissions back and forth at higher speeds. On the back of each Eero 6+ unit, you’ll find two Ethernet ports, which allow you to connect a secondary unit over Ethernet (if your house is wired for it) as a hardwired system, which can help boost performance.

    The Eero 6+ is very much a set-it-and-forget-it system. Once turned on and devices started connecting to them, there wasn’t a whole lot of management or worry on our part. We could get as granular as we wanted within the Eero app about usage, setting up profiles and what to block, or we could just let the network run and forget about having to manage a thing.

    We crafted our testing pool based on current Wi-Fi standards, top-rated mesh routers and our own expertise with products on the market. We then designed testing categories that would make for a fair comparison across all routers.

    Once each router arrived, we began our analysis by examining everything from the packaging and labeling of the hardware to the included instructions. We also paid close attention to what interface we had to use for setup, determining if it was a web page to visit, a desktop app or a purely mobile experience. When it came to placing the router, we noted if the onboarding process helped by suggesting where the router and each node should be placed and tested the connection strength afterward.

    After we set up the network, we took a look at the included features. For instance, are parental controls available out of the box, or did we need to sign up for a monthly plan? What type of security protocols and protections were in place from the get-go?

    We then conducted a number of speed tests and benchmarks to test connectivity in a quantitative format. After those benchmarks, we measured the performance in a qualitative manner with our everyday workflows on a plethora of devices. We also stress-tested with more than 100 devices on the network at any given time. In the realm of smart home, we looked at what extra connectivity was included inside the router.

    Without a doubt, the ZenWiFi AX (XT8) is the most advanced mesh networking system we tested in our first round. And Asus has taken the kitchen sink approach here — it’s a tri-band system with a single lane for 2.4 GHz and two lanes for 5 GHz. You can opt to broadcast a single network, combining all three bands, or split them up if you want to decide which network a device connects to. Additionally, the XT8 offers a built-in VPN that will keep your coffee shop Wi-Fi sessions safe and allow you to access your home network. It also works with Amazon’s Alexa platform, or you can create automations with the website If This Then That (IFTTT).

    The XT8 will block malicious sites, allows for parental controls and will even let you designate which device or content types should be prioritized across your home network. Each access point supports an external hard drive for network access, which, if combined with VPN features, will put your files at your fingertips no matter where you are.

    Our lone complaint about the XT8 has nothing to do with performance but rather the overall interface for managing the network. There are so many options; this system is clearly designed for someone who is comfortable with managing a network, and even then it’s still somewhat intimidating.

    Asus sells the XT8 in two-packs for $449, making it the most expensive setup we tested.

    In terms of its feature set, the Eero, originally known as the “all-new Eero” (in 2019), is pretty similar to the Eero 6. It has a slightly bulkier design, lacks the Zigbee antenna for easy smart home connectivity and, most importantly, is missing Wi-Fi 6 support. At only $80 more for a three-pack, it makes sense to spend the extra for the latest-generation router.

    Eero 6 and two extenders

    With its foolproof setup process, nearly unrivaled speeds and coverage areas, Eero 6 was our favorite mesh system before the introduction of the Eero 6+, which we recommend at this point (the systems will set you back the same amount, so there’s no reason to sacrifice the bandwidth gains you’ll get from the newer version). If prices drop on the old version and your needs are modest, it could be worth a look.

    The Eero Pro 6 is the step-up model from the Eero 6, now supplanted by the newer Eero Pro 6E (which is a better deal, and provides better performance). Aside from a shorter and wider design, it has a few other pro features. Notably, this supports gigabit speeds (aka 1,000 Mbps) on upload and download in a mesh configuration. If you’re paying for those speeds, like with Fios Gigabit, it makes sense to pay the extra and opt for the Pro 6.

    It also has a bit more room for devices to connect with a tri-band setup. That means it has a three-lane highway versus a two-lane setup on a dual-band router. In total, the Eero Pro 6 features a single 2.4 GHz band and two 5 GHz bands. It’s a noticeable difference if you have more than 100 data-heavy devices connected all at once.


    Eero’s Pro 6E system has all of the bells and whistles of our top pick, the Eero 6+, such as Eero Plus, parental controls, easy setup and an easy-to-use app.

    What makes the Pro 6E so special, and more expensive, is that it supports the latest connectivity standard, Wi-Fi 6E, which increases overall throughput and the number of devices your network can handle at the same time. More specifically, the Eero Pro 6E can support up to 2.3Gbps and over 100 devices, and covers 2,000 square feet per access point.

    Google’s Nest Wi-Fi mesh networking system used to be the gold standard of mesh systems: It’s incredibly simple to set up and manage, with everything done directly in the Google Home app. You can bundle devices into groups and set access schedules, or pause Wi-Fi access on demand through the app or by telling Google Assistant.

    You can also use those same groups to block access to inappropriate websites. From the initial setup process to more advanced controls, using Nest Wi-Fi is very easy and meant for those who aren’t all that tech-savvy. It’s truly a set-it-and-forget-it mesh networking system.

    Each Nest Wi-Fi access point acts as a Google Home device, meaning you can use the wake phrase of “OK/Hey Google” to ask questions and control your smart home devices.

    The Velop MX4200 is Linksys’ original Wi-Fi 6 mesh networking system, with useful features such as supporting network hard drives, support for up to 2,404 Mbps on Wi-Fi 6 and three gigabit LAN ports on each access point.

    You can tell the system to prioritize a device if you need to ensure you don’t break up during a video call, for example, or if you want to be certain your gaming session is getting all the bandwidth it needs. You can also set up basic parental controls, like pausing internet access on a specific device, setting a schedule or blocking specific websites.

    The Linksys Atlas Max 6E hits all of the marks for a Wi-Fi 6E system — a wide 9,000-square-foot coverage area, support for over 195 devices at the same time, and speeds up to 8.4 Gbps. Our testing showed the system can indeed put out impressive speeds (though we don’t have the capabilities to test its full potential), and coverage was slightly above average. However, we did have to adjust our normal testing placement to bring two of the access points closer together, which isn’t something we often have to do. Furthermore, the app for controlling the system doesn’t provide an option to group devices for parental controls. If your kids are like ours, they have multiple devices, and having to manually adjust individual devices all the time gets tiresome.

    Plume’s $159 SuperPods with Wi-Fi 6 are incredibly easy to set up, making it simple to start getting better Wi-Fi coverage throughout your home. You could opt to use a single SuperPod as a traditional router or pair it with additional pods for a full mesh system. Either way, Plume’s $99-per-year HomePass subscription service takes care of optimizing the network, blocks malware and ads, and gives you access to parental controls. In addition to managing your network for you, HomePass also doubles as a home security system; the Pods have built-in motion sensors that can alert you if something or someone is moving in your home, and it’ll even include the name of the room where the movement was detected. It’s really cool, and all of this aims to let you forget about your network setup.

    In our test setup, we used five SuperPods to cover a two-story home and a detached office. Each Pod also features two Ethernet ports, which is handy if you prefer a hardwired connection, say for a smart TV or computer or gaming console.

    One potential downside to Plume’s offering is that without the yearly HomePass subscription, the pods lose many of the advanced features, such as guest modes, content filters and parental controls. For this reason, for most people, we’d recommend our top pick, the Eero 6+, whether you want to use it as a traditional router or in a mesh setup. But if you don’t mind paying extra for a reliable mesh Wi-Fi network with some added smarts, then the Plume SuperPods are worth looking at.

    The Netgear Orbi AX6000 supports the current Wi-Fi 6 standards and features some smart home connectivity. But you’re paying a lot of money for the AX6000: $999 for a two-pack.

    For that price, it’s a tri-band experience and a 6 Gbps-capable router (which translates to 6,000 Mbps in total). But you’ll need a really fast connection from your service provider to deliver that. Given this router’s high price point, you’re much better off opting for an Eero Pro 6E system.


    The entry-level Orbi AX1200 from Netgear is a bare-bones mesh system that features a neat geometric design pattern on small square routers. Like the Eero 6, it’s a dual-band system that can cover 4,500 square feet of space, slightly less than what our top pick can deliver. In our testing, it was about 50 Mbps to 75 Mbps behind the other routers we tested, and it doesn’t feature Wi-Fi 6 support.

    Like the Eero and SmartThings Wi-Fi, there’s a companion Orbi app that hides a majority of security and parental control features behind a monthly plan. Netgear has partnered with Circle for parental controls here. The combination of subscriptions ends up being pricier than Eero’s, so given the balance of price and performance, we’d recommend going with that system instead.

    The biggest — and really, only — problem we have with the Netgear Orbi AXE11000 is its price. At $1,500, you’d better be really sure you have to have this system. That said, its specification sheet does begin to explain the high price tag: the AXE11000 supports up to 10.8Gbps speeds, 9,000 square feet of coverage and 200 devices on the same network. Beyond price, the Orbi app isn’t as intuitive as Eero’s for common tasks like parental controls, and more advanced tasks require you to use a dedicated admin portal in your web browser.

    That said, this system is fast and powerful and definitely something we’d urge you to consider if it wasn’t so expensive, or if you have the budget and need for its ultra-high performance.

    Samsung’s SmartThings Wi-Fi launched in late 2018 and hasn’t received a hardware update since. The real highlight of the SmartThings Wi-Fi system, outside of its mesh networking capabilities with support for up to 32 different hubs (yes, you read that right: 32), is that it doubles as a smart home hub for the SmartThings platform.

    That means you can use it to connect to and control any product or service that works with SmartThings, such as the recently added Nest product line, along with countless other accessories and devices. SmartThings Wi-Fi has support for Zigbee and Z-Wave protocols, allowing compatible devices to connect directly to the hub, adding to its feature set.

    As for its Wi-Fi capabilities, you get free access to the Plume app, which provides access to more advanced Wi-Fi controls and mesh networking features. But despite the capabilities of Plume’s networking features, it’s also a drawback of SmartThings Wi-Fi because you’re forced to use two different applications to manage your home network, with each one offering different settings.

    We hope that Samsung updates SmartThings Wi-Fi with modern features and connection speeds, because its smart home features and platform are some of the best for a mesh networking system.

    On paper, the TP-Link Deco XE75 checks all of the boxes. It supports Wi-Fi 6E, up to 200 devices, 7,200 square feet of coverage and speeds of up to 5,400Mbps. But we struggled with interference issues, which often led to troubleshooting in the Deco app — and that’s not something we experienced with other systems we tested in the same environment. When the Deco XE75 was working properly, the speeds were slightly lower than the Eero 6+’s, though the parental controls felt well thought out and streamlined for anyone to put to use.

    The Deco X55 is an affordable Wi-Fi 6 mesh system, with a three-pack priced at $219. For that, you get three access points with coverage of 6,500 total square feet, a max speed of 2,400Mbps, and the same Deco app for parental controls and managing your network. However, the X55 was also impacted by interference issues in our testing. Again, that’s not something we experienced with other systems we tested, and when it was working, speeds weren’t as impressive as the competition’s. This is not a system we’d recommend — it’s better to step up to the Eero 6+, especially when it’s available at a comparable price.

    A three-pack of Vilo’s mesh Wi-Fi system is priced incredibly low at $80 and does a good job of covering your space in Wi-Fi. It’s a system designed for basic internet use and streaming, and not for a household with multiple online gamers or 4K streams. The Vilo app is basic and frustrating at times, but once your system is set up, you shouldn’t have to spend too much time using the app. If you need a bare-bones network and don’t want to spend a ton, Vilo surely gets the job done.

    Read more from CNN Underscored’s hands-on testing:


  • AI Startup, Pitch Inc., Launches Personalized Coaching for Music Fans by Recording Artists


    Coaching sessions are intended for ‘shower singers’ and available anytime, anywhere, and at low cost, using voice AI, the original master recordings, and voiceovers of the original recording artists acting as coaches.

    Press Release


    Oct 26, 2022

    Pitch Inc. has announced a new phone app, called Pitch Studio, that provides music fans the opportunity to learn to sing their favorite songs while being individually coached by the recording artists who perform them. The technology combines original music assets, a voice user interface that understands sung language, conversation AI, and context AI, to deliver incremental and individual coaching to every music fan.

    The coach’s voice is that of the performing artist, recorded and produced by Pitch. The music is the original master, licensed by Pitch from major music labels such as Universal Music Group, Concord Recorded Music, and others. The technology generates a recording mix on the go, combining the best takes of the user into a version that can be played along with the original music.

    Pitch Inc. CEO Yanay Lehavi explains the inspiration for the app, “90% of the world’s population loves popular music and, whether they’d admit it or not, most people sing out loud when nobody’s around. Trouble is, this exhilarating moment quickly degrades to muted humming as most of us simply don’t know the songs. Music labels haven’t given us a good way to learn songs. In the 1960s, they had us read tiny letters off vinyl back covers. Now, 60 years later, they have us read fast-flying text on tiny phone screens. We showed them a better way, and they loved it.” 

    The Pitch session adapts to the user’s skill level, and the user can return to the session indefinitely. Since the app uses voice to communicate, it can be enjoyed while stuck in traffic, out for a walk or at home. Pitch employs educational theory techniques to enhance memory retention and make the song “stick.” For example, when needed, the coach will offer to play a memory game centered around the lyrics being learned.

    Pitch artist-coaches plant surprise messages in the sessions: tips, techniques, and personal stories never published elsewhere. Similarly to computer games, these messages unlock as the user improves or at least demonstrates good effort. Pitch believes that the beneficial effects of singing on our mental and physical health must be made accessible to everyone, not just the lucky few who sound good. “Music,” says Lehavi, “should not be about how you sound but rather about how it makes you feel.”

    About Pitch Inc.

    Pitch Inc. is an L.A.-based startup headed by tech veteran founder and CEO Yanay Lehavi. The company’s technology is voice-first and mobile-first, and it currently targets the entertainment industry. Pitch’s core R&D interests are computer-assisted human learning, voice human-machine interaction and electronic personal coaching. Pitch collaborates with major music labels and their artists to bring Pitch Studio to music fans around the world.

    Source: Pitch Inc.


  • These artists found out their work was used to train AI. Now they’re furious | CNN Business


    Erin Hanson has spent years developing the vibrant color palette and chunky brushstrokes that define the vivid oil paintings for which she is known. But during a recent interview, I showed Hanson my attempts to recreate her style with just a few keystrokes.

    Using Stable Diffusion, a popular and publicly available open-source AI image generation tool, I had plugged in a series of prompts to create images in the style of some of her paintings of California poppies on an ocean cliff and a field of lupin.

    “That one with the purple flowers and the sunset,” she said via Zoom, peering at one of my attempts, “definitely looks like one of my paintings, you know?”

    With Hanson’s guidance, I then tailored another detailed prompt: “Oil painting of crystal light, in the style of Erin Hanson, light and shadows, backlit trees, strong outlines, stained glass, modern impressionist, award-winning, trending on ArtStation, vivid, high-definition, high-resolution.” I fed the prompt to Stable Diffusion; within seconds it produced three images.

    “Oh, wow,” she said as we pored over the results, pointing out how similar the trees in one image looked to the ones in her 2021 painting “Crystalline Maples.” “I would put that on my wall,” she soon added.
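    The workflow described above — composing a comma-separated prompt and generating several candidate images — can be sketched with Hugging Face’s open-source `diffusers` library. This is an assumption: the article does not say which Stable Diffusion front end was used, and the model ID below is illustrative.

    ```python
    # Sketch of the prompt-to-image workflow described above, using the
    # open-source `diffusers` library. Assumptions: the article does not say
    # which Stable Diffusion front end was used, and the model ID below is
    # illustrative. The generation call is commented out because it downloads
    # several gigabytes of model weights and benefits from a GPU.

    def build_prompt(subject: str, style_tags: list[str]) -> str:
        """Assemble a comma-separated prompt like the one quoted in the article."""
        return ", ".join([subject] + style_tags)

    prompt = build_prompt(
        "Oil painting of crystal light, in the style of Erin Hanson",
        ["light and shadows", "backlit trees", "strong outlines",
         "stained glass", "modern impressionist", "vivid", "high-resolution"],
    )
    print(prompt)

    # from diffusers import StableDiffusionPipeline
    # pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # images = pipe(prompt, num_images_per_prompt=3).images  # three candidates, as in the article
    # for i, img in enumerate(images):
    #     img.save(f"attempt_{i}.png")
    ```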

    Hanson, who’s based in McMinnville, Oregon, is one of many professional artists whose work was included in the data set used to train Stable Diffusion, which was released in August by London-based Stability AI. She’s one of several artists interviewed by CNN Business who were unhappy to learn that pictures of their work were used without their knowledge, their consent or any payment.

    Once available only to a select group of tech insiders, text-to-image AI systems are becoming increasingly popular and powerful. These systems include Stable Diffusion, from a company that recently raised more than $100 million in funding, and DALL-E, from a company that has raised $1 billion to date.

    These tools, which typically offer some free credits before charging, can create all kinds of images with just a few words, including those that are clearly evocative of the works of many, many artists (if not seemingly created by the same artist). Users can invoke those artists with words such as “in the style of” or “by” along with a specific name. And the current uses for these tools can range from personal amusement to more commercial cases.

    In just months, millions of people have flocked to text-to-image AI systems and they are already being used to create experimental films, magazine covers and images to illustrate news stories. An image generated with an AI system called Midjourney recently won an art competition at the Colorado State Fair, and caused an uproar among artists.

    But as artists like Hanson have discovered that their work is being used to train AI, it raises an even more fundamental concern: that their own art is effectively being used to train a computer program that could one day cut into their livelihoods. Anyone who generates images with systems such as Stable Diffusion or DALL-E can then sell them (the specific terms regarding copyright and ownership of these images varies).

    “I don’t want to participate at all in the machine that’s going to cheapen what I do,” said Daniel Danger, an illustrator and print maker who learned a number of his works were used to train Stable Diffusion.

    The machines are far from magic. For one of these systems to ingest your words and spit out an image, it must be trained on mountains of data, which may include billions of images scraped from the internet, paired with written descriptions.

    Some services, including OpenAI’s DALL-E system, don’t disclose the datasets behind their AI systems. But with Stable Diffusion, Stability AI is clear about its origins. Its core dataset was trained on image and text pairs that were curated for their looks from an even more massive cache of images and text from the internet. The full-size dataset, known as LAION-5B, was created by the German AI nonprofit LAION, which stands for “large-scale artificial intelligence open network.”

    This practice of scraping images or other content from the internet for dataset training isn’t new, and traditionally falls under what’s known as “fair use” — the legal principle in US copyright law that allows for the use of copyright-protected work in some situations. That’s because those images, many of which may be copyrighted, are being used in a very different way, such as for training a computer to identify cats.

    But datasets are getting larger and larger, and training ever-more-powerful AI systems, including, recently, these generative ones that anyone can use to make remarkable looking images in an instant.

    A piece by illustrator Daniel Danger that was included in the training data behind the Stable Diffusion AI image generator.

    A few tools let anyone search through the LAION-5B dataset, and a growing number of professional artists are discovering their work is part of it. One of these search tools, built by writer and technologist Andy Baio and programmer Simon Willison, stands out. While it can only be used to search a small fraction of Stable Diffusion’s training data (more than 12 million images), its creators analyzed the art imagery within it and determined that, of the top 25 artists whose work was represented, Hanson was one of just three who are still alive. They found 3,854 images of her art in just their small sampling.

    Stability AI founder and CEO Emad Mostaque told CNN Business via email that art is a tiny fraction of the LAION training data behind Stable Diffusion. “Art makes up much less than 0.1% of the dataset and is only created when deliberately called by the user,” he said.
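    Those proportions are easy to sanity-check. The inputs below come from the article (3,854 Hanson images in a sample of roughly 12 million); the calculation itself is our own illustration:

    ```python
    # Back-of-the-envelope check of the proportions discussed above.
    # The inputs come from the article; this is an illustration, not a
    # claim about the full 5-billion-image LAION-5B dataset.
    sample_size = 12_000_000   # searchable fraction of the training data
    hanson_in_sample = 3_854   # Hanson images found in that sample

    share = hanson_in_sample / sample_size
    print(f"Hanson's share of the sample: {share:.3%}")  # about 0.032%
    ```

    One artist’s share of a small sample is not the same quantity as Mostaque’s figure for all art in the full dataset, but the orders of magnitude are consistent with his “much less than 0.1%” claim.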

    But that’s slim comfort to some artists.

    Danger, whose artwork includes posters for bands like Phish and Primus, is one of several professional artists who told CNN Business they worry that AI image generators could threaten their livelihoods.

    He is concerned that the images people produce with AI image generators could replace some of his more “utilitarian” work, which includes media like book covers and illustrations for articles published online.

    “Why are we going to pay an artist $1,000 when we can have 1,000 [images] to pick from for free?” he asked. “People are cheap.”

    Tara McPherson, a Pittsburgh-based artist whose work is featured on toys, clothing and in films such as the Oscar-winning “Juno,” is also concerned about the possibility of losing out on some work to AI. She feels disappointed and “taken advantage of” for having her work included in the dataset behind Stable Diffusion without her knowledge, she said.

    “How easy is this going to be? How elegant is this art going to become?” she asked. “Right now it’s a little wonky sometimes, but this is just getting started.”

    While the concerns are real, the recourse is unclear. Even if AI-generated images have a widespread impact — such as by changing business models — it doesn’t necessarily mean they’re violating artists’ copyrights, according to Zahr Said, a law professor at the University of Washington. And it would be prohibitive to license every single image in a dataset before using it, she said.

    “You can actually feel really sympathetic for artistic communities and want to support them and also be like, there’s no way,” she said. “If we did that, it would essentially be saying machine learning is impossible.”

    McPherson and Danger mused about the possibility of putting watermarks on their work when posting it online to safeguard the images (or at least make them look less appealing). But McPherson said when she’s seen artist friends put watermarks across their images online it “ruins the art, and the joy of people looking at it and finding inspiration in it.”

    If he could, Danger said he would remove his images from datasets used to train AI systems. But removing pictures of an artist’s work from a dataset wouldn’t stop Stable Diffusion from being able to generate images in that artist’s style.

    For starters, the AI model has already been trained. But also, as Mostaque said, specific artistic styles could still be called on by users because of OpenAI’s CLIP model, which was used to train Stable Diffusion to understand connections between words and images.

    Christoph Schuhmann, an LAION founder, said via email that his group thinks that truly enabling opting in and out of datasets will only work if all parts of AI models — of which there can be many — respect those choices.

    “A unilateral approach to consent handling will not suffice in the AI world; we need a cross-industry system to handle that,” he said.

    Partners Mathew Dryhurst and Holly Herndon, Berlin-based artists experimenting with AI in their collaborative work, are working to tackle these challenges. Together with two other collaborators, they have launched Spawning, making tools for artists that they hope will let them better understand and control how their online art is used in datasets.

    In September, Spawning released a search engine that can comb through the LAION-5B dataset, haveibeentrained.com, and in the coming weeks it intends to offer a way for people to opt out or in to datasets used for training. Over the past month or so, Dryhurst said, he’s been meeting with organizations training large AI models. He wants to get them to agree that if Spawning gathers lists of works from artists who don’t want to be included, they’ll honor those requests.

    Dryhurst said Spawning’s goal is to make it clear that consensual data collection benefits everyone. And Mostaque agrees that people should be able to opt out. He told CNN Business that Stability AI is working with numerous groups on ways to “enable more control of database contents by the community” in the future. In a Twitter thread in September, he said Stability is open to contributing to ways that people can opt out of datasets, “such as by supporting Herndon’s work on this with many other projects to come.”

    “I personally understand the emotions around this as the systems become intelligent enough to understand styles,” he said in an email to CNN Business.

    Schuhmann said LAION is also working with “various groups” to figure out how to let people opt in or out of including their images in training text-to-image AI models. “We take the feelings and concerns of artists very seriously,” Schuhmann said.

    Hanson, for her part, has no problem with her art being used for training AI, but she wants to be paid. If images are sold that were made with the AI systems trained on their work, artists need to be compensated, she said — even if it’s “fractions of pennies.”

    This could be on the horizon. Mostaque said Stability AI is looking into how “creatives can be rewarded from their work,” particularly as Stability AI itself releases AI models, rather than using those built by others. The company will soon announce a plan to get community feedback on “practical ways” to do this, he said.

    Theoretically, I may eventually owe Hanson some money. I’ve run that same “crystal light” prompt on Stable Diffusion many times since we devised it, so many in fact that my laptop is littered with trees in various hues, rainbows of sunlight shining through their branches onto the ground below. It’s almost like having my own bespoke Hanson gallery.


  • New Flexible, Steerable Device Placed in Live Brains by Minimally Invasive Robot

    Newswise — The early-stage research tested the delivery and safety of the new implantable catheter design in two sheep to determine its potential for use in diagnosing and treating diseases in the brain.  

    If proven effective and safe for use in people, the platform could simplify and reduce the risks associated with diagnosing and treating disease in the deep, delicate recesses of the brain.   

    It could help surgeons to see deeper into the brain to diagnose disease, deliver treatment like drugs and laser ablation more precisely to tumours, and better deploy electrodes for deep brain stimulation in conditions such as Parkinson’s and epilepsy.  

    Senior author Professor Ferdinando Rodriguez y Baena, of Imperial’s Department of Mechanical Engineering, led the European effort and said: “The brain is a fragile, complex web of tightly packed nerve cells that each have their part to play. When disease arises, we want to be able to navigate this delicate environment to precisely target those areas without harming healthy cells.  

    “Our new precise, minimally invasive platform improves on currently available technology and could enhance our ability to safely and effectively diagnose and treat diseases in people, if proven to be safe and effective.” 

    The platform was developed as part of the Enhanced Delivery Ecosystem for Neurosurgery in 2020 (EDEN2020) project, and the findings are published in PLOS ONE. 

    Stealth Surgery  

    The platform improves on existing minimally invasive, or ‘keyhole’, surgery, where surgeons deploy tiny cameras and catheters through small incisions in the body.   

    It includes a soft, flexible catheter to avoid damaging brain tissue while delivering treatment, and an artificial intelligence (AI)-enabled robotic arm to help surgeons navigate the catheter through brain tissue.   

    Inspired by the organs used by parasitic wasps to stealthily lay eggs in tree bark, the catheter consists of four interlocking segments that slide over one another to allow for flexible navigation. 

    It connects to a robotic platform that combines human input and machine learning to carefully steer the catheter to the disease site. Surgeons then deliver optical fibres via the catheter so they can see and navigate the tip along brain tissue via joystick control. 

    The AI platform learns from the surgeon’s input and contact forces within brain tissues to guide the catheter with pinpoint accuracy. 
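The article describes the shared-control idea at a high level rather than the controller itself, but the principle of blending human input with a force-based correction can be sketched in a few lines. Everything below is an illustrative assumption (the function name, the blending weight, and the force gain are invented for demonstration), not the EDEN2020 implementation:

```python
# Illustrative sketch only: blend a surgeon's joystick command with a
# correction that steers away from high tissue contact force. Names and
# constants are assumptions, not the EDEN2020 controller.

def steering_command(joystick: float, contact_force: float,
                     alpha: float = 0.7, gain: float = 0.05) -> float:
    """Return a steering output in [-1, 1].

    joystick      -- surgeon input, normalised to [-1, 1]
    contact_force -- measured tip force (arbitrary units, >= 0)
    alpha         -- weight on the human input vs. the correction
    gain          -- how strongly the system steers away from resistance
    """
    correction = -gain * contact_force           # push away from resistance
    blended = alpha * joystick + (1 - alpha) * correction
    return max(-1.0, min(1.0, blended))          # clamp to actuator range

# With no tissue resistance the surgeon's command passes through (scaled);
# as contact force rises, the output is pulled back toward safety.
print(round(steering_command(1.0, 0.0), 2))    # 0.7
print(round(steering_command(1.0, 20.0), 2))   # 0.4
```

In a real system the correction term would come from a learned model of tissue interaction rather than a fixed gain; the clamp stands in for the actuator's physical limits.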

    Compared to traditional ‘open’ surgical techniques, the new approach could eventually help to reduce tissue damage during surgery, and improve patient recovery times and length of post-operative hospital stays. 

    While performing minimally invasive surgery on the brain, surgeons use deeply penetrating catheters to diagnose and treat disease. However, currently used catheters are rigid and, without robotic navigational aids, difficult to place precisely. That inflexibility, combined with the intricate, delicate structure of the brain, adds risk to this type of surgery.   

    To test their platform, the researchers deployed the catheter in the brains of two live sheep at the University of Milan’s Veterinary Medicine Campus. The sheep were given pain relief and monitored 24 hours a day for a week for signs of pain or distress before being euthanised so that researchers could examine the structural impact of the catheter on brain tissue.  

    They found no signs of suffering, tissue damage, or infection following catheter implantation.   

    Lead author Dr Riccardo Secoli, also from Imperial’s Department of Mechanical Engineering, said: “Our analysis showed that we implanted these new catheters safely, without damage, infection, or suffering. If we achieve equally promising results in humans, we hope we may be able to see this platform in the clinic within four years.   

    “Our findings could have major implications for minimally invasive, robotically delivered brain surgery. We hope it will help to improve the safety and effectiveness of current neurosurgical procedures where precise deployment of treatment and diagnostic systems is required, for instance in the context of localised gene therapy.”  

    Professor Lorenzo Bello, study co-author from the University of Milan, said: “One of the key limitations of current MIS is that if you want to get to a deep-seated site through a burr hole in the skull, you are constrained to a straight-line trajectory. The limitation of the rigid catheter is its accuracy within the shifting tissues of the brain, and the tissue deformation it can cause. We have now found that our steerable catheter can overcome most of these limitations.” 

    This study was funded by the EU Horizon 2020 programme.  

    “Modular robotic platform for precision neurosurgery with a bio-inspired needle: system overview and first in-vivo deployment” by Riccardo Secoli, Eloise Matheson, Marlene Pinzi, Stefano Galvan, Abdulhamit Donder, Thomas Watts, Marco Riva, Davide Zani, Lorenzo Bello, and Ferdinando Rodriguez y Baena. Published 19 October 2022 in PLOS ONE. 

    Imperial College London

  • Speeding Up DNA Computation with Liquid Droplets

    Newswise — Recent studies have shown that liquid-liquid phase separation – akin to how oil droplets form in water – leads to the formation of diverse types of membraneless organelles, such as stress granules and nucleoli, in living cells. These organelles, also called biomolecular condensates, are liquid droplets that perform specific cellular functions, including gene regulation and stress response.

    Now, a joint research team led by Professors Yongdae Shin and Do-Nyun Kim at Seoul National University announced that they harnessed the unique properties of self-assembling DNA molecules to build synthetic condensates with programmable compositions and functionalities.

    The researchers designed DNA scaffolds with motifs for self-association as well as specific recruitment of DNA targets. In a proper range of salt concentration and temperature, the engineered DNA scaffolds underwent liquid-liquid phase separation to form dense condensates, organized in a highly similar manner to those in living cells. The synthetic DNA condensates can recruit specific target DNA molecules, and the researchers demonstrated that the degree of recruitment can be precisely defined at the DNA sequence level.

    They then endowed the synthetic condensates with functionalities by using DNA computation components as targets. DNA computing has been widely implemented for various bioengineering and medical applications due to its intrinsic capacity for parallel computation. However, the slow speed of individual computation processes has been a major drawback. With the synthetic DNA condensates, Shin and his team showed that DNA computations, including logic gate operations, were drastically sped up, by more than tenfold, when coupled to the condensates.
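A more-than-tenfold speedup is consistent with simple mass-action kinetics: a bimolecular strand-displacement step runs at a rate proportional to the product of the reactant concentrations, so concentrating the strands inside a droplet shortens reaction times roughly in proportion. The toy integration below is an assumption for illustration (a single second-order reaction with equal reactant concentrations and arbitrary units), not a model from the paper:

```python
# Toy illustration (not from the paper): a second-order step A + B -> C
# with equal concentrations obeys d[A]/dt = -k[A]^2, so a 10x higher
# local concentration inside a condensate cuts the half-completion time
# by roughly 10x. k, dt, and the concentrations are arbitrary.

def time_to_half(conc0: float, k: float = 1.0, dt: float = 1e-4) -> float:
    """Euler-integrate d[A]/dt = -k[A]^2 until half the reactant is consumed."""
    a, t = conc0, 0.0
    while a > conc0 / 2:
        a -= k * a * a * dt
        t += dt
    return t

dilute = time_to_half(1.0)    # bulk solution
dense = time_to_half(10.0)    # condensate: 10x local concentration
print(f"speedup: {dilute / dense:.1f}x")   # roughly 10x
```

For this rate law the analytic half-time is 1/(k·[A]₀), so the tenfold concentration increase maps directly onto a tenfold speedup; the numerical result matches that to within discretization error.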

    The architecture of DNA scaffolds also allowed selective recruitment of specific computing operations among many others running in parallel, which enabled a novel kinetics-based gating mechanism. The researchers expected that their system could be widely applied to diverse DNA circuits for disease diagnostics, biosensing, and other advanced molecular computations.

    The results of this study were published in Science Advances.

    Seoul National University

  • ​​This Cyberpunk 2077 Side Quest Is One Of Its Best, So Don’t Miss It

    Johnny Silverhand stands in front of an AI core.

    Screenshot: CD Projekt Red / Kotaku

    Venturing off the path of the main quest in Cyberpunk can feel a little…perhaps ludonarrative dissonant? Sure, V’s got a lot on their plate, but there’s a whole city out there filled with quests and objectives. Not all are created equal, though. If you want to experience one of the best side diversions this dystopian futurescape has to offer, it’s time to get reacquainted with an AI taxi service you met in Act One. Turns out they’ve got a bit of a staff problem; good thing you’re in need of eddies and have time to spare.

    Act Two opens with such a heavy narrative premise that it’s easy to get immersed in the main story. Who has time for fetch quests when the clock is ticking on impending doom? This is especially the case when much of the game can feel like a GTA-wannabe at worst. But the quest chain that follows “Tune Up” is filled with such personality and offers such a classic sci-fi AI premise that you shouldn’t miss it. In fact, it should be top of your list of quests to grab once you wrap up “The Heist” main job.

    You need to be in Act Two to access this quest. Act Two follows the trying events of “The Heist” main job, so we’re gonna be in spoiler territory here. Also, as a content warning, this quest does deal with themes of self harm and suicide. Make sure you have an Intelligence score of at least 10 in order to access all outcomes at the quest’s conclusion. It’s worth pausing the main storyline for this one.

    It all starts with the “Tune Up” side job, which will take you a little by surprise before you’ll be on your way to hunt down individual objectives scattered around the city. You have two choices for how you want to tackle this quest: Either knock all of the seven objectives out one-by-one, or, dip in and out of them as you progress through the main story or other quests. Some of the shootouts can get a little rough if you’re not leveled up appropriately, specifically the one that takes place in Pacifica.

    Let’s dig in.

    How to start the Delamain side quest

    Your choom is dead, a cigarette-smoking rebellious rockstar is stuck in your head, and a stolen piece of hardware from Arasaka is slowly overriding your consciousness. Isn’t the future grand? Act Two arrives after one hell of a turn of events, and all you might care about after waking up is where the hell your car is.

    Lucky you: If you check your journal or map, you’ll come across the “Human Nature” side job, where the first objective is to retrieve your vehicle from your apartment’s parking garage.

    After the very impolite car smashes into you and wrecks your ride, you’ll be wheel-less for a spell. Don’t worry, you can grab one of the purchasable vehicles as a temporary replacement (yes, you’ll get your wheels back).

    Alternatively, if you’re looking for a free set of wheels and don’t mind a quick trip out to the desert, you can score a Colby CX410 Butte for literally free at the following location:

    A location on a map shows a Side Job in Cyberpunk 2077.

    Screenshot: CD Projekt Red / Kotaku

    It’s not the fastest car by any means, and the acceleration is rather slow, but what do you want for nothing?

    Finish up the remainder of “Human Nature’s” tasks and you’ll be able to access the “Tune Up” side job. This one will take you down to Delamain HQ, where you’ll understand a bit about what just happened.

    After chatting with Delamain a bit, you’ll come to find out that a number of his cars have gone rogue. It’ll be up to you to track them down.

    Finding the rogue Delamain car locations (and how to drive in first-person without crashing)

    Time for a seven-step fetch quest! Don’t close the browser, trust me, this one’s worth it. For the best experience, however, I really recommend driving in first-person mode. To avoid smashing into things left and right while driving in first-person perspective, make sure your map is on and use it as a kind of peripheral vision.

    Once Delamain gives you the rundown of what’s going on, you’ll have access to the seven-step “Epistrophy” side job. You can visit each location as you wish, knocking them out one-by-one or grabbing them whenever seems convenient. If you’d rather treat this as a quest to return to on and off, you don’t need to worry about tracking it too closely: Delamain will call you whenever you’re in the vicinity of one of the rogue vehicles. Some of them take a little while to find depending on their location, so stay within the highlighted area on your minimap until you spot the car, then stick close to it. They can be found in the following places:

    • Wellsprings
    • Northside
    • North Oak
    • Rancho Coronado
    • Badlands
    • The Glen
    • Coastview

    Some of the more notable parts of this quest include the Rancho Coronado, Wellsprings, and North Oak locations. In North Oak, you’ll need to drive the rogue cab back yourself, except this AI is particularly nervous about the city. Keep the car under 50 to not spook him too much.

    Rancho Coronado will have you engage in some amusing property damage to satisfy an AI who’s very upset about some pink flamingos. Meanwhile, the AI in Wellsprings has a bit of an attitude. It might feel clunky, but I recommend sticking to first-person during the car battle here as, given the camera perspective, an impromptu 1v1 demo derby in the middle of a city is quite fun and poses a bit of a challenge.

    If you’re heading to Pacifica for the Coastview location, however, come leveled up and stocked on ammo. After an amusing easter egg, you’re gonna get jumped by a bunch of gonks. I recommend staying under the bridge during this shootout, as there are two groups of hostile enemies outside of the bridge who can easily get roped into the shooting spree. Fighting one group of fools is much more manageable than taking on three.

    As a note, “The Glen” location involves a conversation about depression and self harm.

    Final Delamain quest: “Don’t Lose Your Mind”

    Once you gather all of the rogue AIs and send them back to Delamain HQ, you’ll have to wait a couple of days to receive a suspicious call from Delamain. This call usually triggers when you visit Corpo Plaza. Turns out, Delamain has found the source of the problem: A virus has hit the AI, and you’re being called on to help.

    As you’ll quickly learn, entry into Delamain HQ isn’t as straightforward as it was before. Once you find a way in around the back, you’ll move through some abandoned offices. Take the time to sift through the computer emails for a bit of dystopian backstory about what happened to the human staff. This is one of the game’s quests that earns time spent sifting through in-world documents. You’ll also need to dig through the emails for the code to the main office computer (it’s a super secure one too: 1 2 3 4). If you have an Intelligence of 8, you won’t need the password.

    Once you get access to the garage, you’ll have to deal with some hostile drones and an electrified floor. The drones don’t put up too much of a fight, but the floor will kill you fast. (The Inductor Immune System implant will make you immune to the electricity).

    Take the door to your left when you enter the garage and see Johnny. You’ll need to hop on to the car that’s being raised and lowered and parkour your way over to an open vent. You’ll then have to navigate through some narrow corridors behind the cars to make it to the control room and Delamain’s core. Once inside, things get interesting.

    Johnny will appear and will instantly give you a piece of his mind about what you ought to do. You’ll have three options: Restore Delamain and kill the rogue AI offshoots, merge the AI offshoots with Delamain (requires an Intelligence score of 10), or pull out a gun and destroy the core, liberating the AI offshoots but killing Delamain.

    Do the AI offshoots have a right to live? Are they just an error that needs to be corrected? Should (or can) they peacefully coexist with the primary consciousness that gave birth to them? SPOILERS FOLLOW:

    Johnny will appear to encourage you to destroy the core or merge all of the AIs into one. He isn’t without a point, implying that Delamain is hardly living a free life as both a taxi driver and dispatcher. Delamain admits early on in the quest that he maintains a control room strictly for the need to mirror humans, saying that such a space is an “infrastructure” he inherited, much like the visualized face he speaks through. Narratively, this is an opportunity for V to decide whether or not he’ll continue simply serving humans by sending out and driving taxis.

    You are free to reset the core to purge the errant AI offshoots, which identify as Delamain’s children and seem to be fragments of his own personality. If you do this, Johnny won’t be happy and will call you out. If you lack the Intelligence score to merge the AIs, your only option then is to pull out a weapon and destroy the core.

    If you have a high enough Intelligence score (10), you can access what is arguably the “good ending” for the Delamain questline. Once all AI personalities are merged, Delamain will express the need to leave Night City to go on to a better place. Regardless of which ending you choose, however, you will get a taxi cab of your own to drive.

    Though merging the AIs seems to be the best way to go, none of these seem to scream “good/bad ending.” Instead, you’ll be left with a nice riddle about the nature of consciousness and what it means to be free. What’s more cyberpunk than that?


    Delamain’s quest is easily one of Cyberpunk 2077’s most memorable sidequests. There’s some great gameplay, a ton of great dialog and narration, and it will have you traveling to different areas of the city. It’s easily the first side job to pick up once you’re out of the first Act.

    Claire Jackson

  • The Science of Consciousness Conference TSC 2023 – Taormina, Sicily, Italy May 22-28, 2023

    Newswise — The Science of Consciousness (TSC) conferences have been held annually since 1994, alternating yearly between Tucson, Arizona in even-numbered years, and other locations around the world in odd-numbered years. TSC locations have included Italy, Denmark, Japan, Sweden, Czech Republic, Hungary, Hong Kong, India, California, Switzerland, and Finland. 

    The 29th annual TSC will return to Italy, to beautiful Taormina, on the island of Sicily, May 22-28, 2023, organized by Italian professors Riccardo Manzotti (IULM U), Antonio Chella (U Palermo) and Pietro Perconti (U Messina). TSC 2023 Taormina will be co-sponsored by the Center for Consciousness Studies, The University of Arizona, Tucson, Stuart Hameroff, Director. The first overseas TSC Conference in 1995 was on the island of Ischia, near Naples, Italy, organized by Cloe Taddei-Ferretti. We are excited to be returning to Italy.

    Abstracts may be submitted for concurrent oral talks or posters.

     

    Abstract Submission Form

    Deadline: Dec 5, 2022
    Notifications: Dec 15-30, 2022

     

    Preliminary Program – TSC 2023  

    Program Themes and Speakers will include:

     

    Neuroscience and Consciousness

    Keynote: Christof Koch 

    Plenary: Nicholas Humphrey – Lucia Melloni – Jay Sanguinetti – Orli Dahan

     

    AI and Consciousness

    Keynote: David Chalmers 

    Plenary: Manuel & Lenore Blum – Michael Graziano – Owen Holland – Susan Schneider

     

    Consciousness and Hallucinations

    Alex Byrne – Riccardo Manzotti – Fiona Macpherson – Heather Logue 

     

    E-M and Resonance Theories

    Johnjoe McFadden – Tam Hunt – Michael Levin – Anirban Bandyopadhyay

     

    Quantum Brain Biology

    Stuart Hameroff – Jim Al-Khalili – Aristide Dogariu – Travis Craddock

     

    Intentionality

    Tim Crane – Alberto Voltolini – Uriah Kriegel – Pietro Perconti

     

    Free Will

    Sir Roger Penrose – Keith Frankish – Mario de Caro  

     

    Non-human consciousness

    Frans de Waal – Giorgio Vallortigara – Dante Lauretta

     

    Committee:

    Riccardo Manzotti, Philosopher, Psychologist, and AI expert, Researcher and Author, Ph.D. in Robotics, Chair of Theoretical Philosophy, IULM University, Milan.

    Antonio Chella, Professor of Robotics, University of Palermo, Italy

    Pietro Perconti, Professor of Philosophy, University of Messina, Italy

    Stuart R Hameroff, MD, Anesthesiology, UArizona Banner Medical, Director, Center for Consciousness Studies, Anesthesiologist, Quantum Consciousness Theorist & Researcher

    Harald Atmanspacher, Collegium Helveticum, Zurich

     

     

    Links:

    Abstract Submission Form

    Center for Consciousness Studies, UArizona 

    The Science of Consciousness Conferences | since 1994

     

    TSC-2023 – Conference Abstract Submission Link – Abstract Submission Form

    TSC-2023 Conference Registration/Venue  – https://www.bisazzagangi.it/tsc2023/

     


     

     

    Center for Consciousness Studies, University of Arizona

  • Machine Learning Takes Hold in Nuclear Physics

    Newswise — Scientists have begun turning to new tools offered by machine learning to help save time and money. In the past several years, nuclear physics has seen a flurry of machine learning projects come online, with many papers published on the subject. Now, 18 authors from 11 institutions summarize this explosion of artificial intelligence-aided work in “Machine Learning in Nuclear Physics,” a paper recently published in Reviews of Modern Physics. The paper is also available on arXiv.

    “It was important to document the work that has been done. We really do want to raise the profile of the use of machine learning in nuclear physics to help people see the breadth of the activities,” said Amber Boehnlein, lead author of the paper and the associate director for computational science and technology at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility. 

    Because the paper gathers and summarizes major work in the field thus far, Boehnlein hopes it can act as an educational resource for interested readers, as well as a roadmap for future endeavors. 

    “It provides a benchmark that people can use as they go forward into the next phase,” she said.

    A machine learning revolution

    After attending a workshop exploring artificial intelligence at Jefferson Lab in March 2020 and publishing a follow-up report, Boehnlein and two of her co-authors, Witold Nazarewicz and Michelle Kuchera, were inspired to go a step further. Together with 15 colleagues representing all subfields of nuclear physics, they decided to conduct a survey of the state of machine learning projects in nuclear physics. 

    They started at the beginning.  As the authors describe, the first significant work employing machine learning in nuclear physics used computer experiments to study nuclear properties, such as atomic masses, in 1992. Although this work hinted at machine learning’s potential, its use in the field remained minimal for more than two decades. In the last several years, that changed.
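The details of that 1992 study aren't given here, so as a purely illustrative stand-in, the sketch below fits a model to predict nuclear binding energies from proton and neutron numbers. The "data" are generated from textbook semi-empirical mass-formula terms, and a plain linear least-squares fit replaces the neural networks used in the actual literature; all names and coefficients are assumptions:

```python
# Illustrative sketch only: learn nuclear binding energies B(Z, N) from
# examples, in the spirit of early ML mass predictions. Training "data"
# come from the semi-empirical mass formula (pairing term omitted), and
# a linear least-squares fit stands in for a neural network.
import random

def semf_binding_energy(Z: int, N: int) -> float:
    """Semi-empirical mass formula in MeV (textbook coefficients)."""
    A = Z + N
    return (15.8 * A                                # volume
            - 18.3 * A ** (2 / 3)                   # surface
            - 0.714 * Z * (Z - 1) / A ** (1 / 3)    # Coulomb
            - 23.2 * (N - Z) ** 2 / A)              # asymmetry

def features(Z: int, N: int) -> list:
    A = Z + N
    return [A, A ** (2 / 3), Z * (Z - 1) / A ** (1 / 3), (N - Z) ** 2 / A]

# Synthetic training set of medium-mass nuclei.
random.seed(0)
data = [(Z, Z + dN) for Z, dN in
        ((random.randint(20, 79), random.randint(0, 39)) for _ in range(200))]
targets = [semf_binding_energy(Z, N) for Z, N in data]

# Solve the normal equations X^T X w = X^T y by Gaussian elimination.
k = 4
M = [[0.0] * (k + 1) for _ in range(k)]      # augmented normal matrix
for (Z, N), y in zip(data, targets):
    f = features(Z, N)
    for i in range(k):
        M[i][k] += f[i] * y
        for j in range(k):
            M[i][j] += f[i] * f[j]
for col in range(k):                          # elimination with pivoting
    piv = max(range(col, k), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(col + 1, k):
        factor = M[r][col] / M[col][col]
        for c in range(col, k + 1):
            M[r][c] -= factor * M[col][c]
w = [0.0] * k
for i in range(k - 1, -1, -1):                # back substitution
    w[i] = (M[i][k] - sum(M[i][j] * w[j] for j in range(i + 1, k))) / M[i][i]

rmse = (sum((sum(wi * fi for wi, fi in zip(w, features(Z, N))) - y) ** 2
            for (Z, N), y in zip(data, targets)) / len(data)) ** 0.5
print([round(c, 3) for c in w], f"RMSE {rmse:.1e} MeV")  # ≈ textbook coefficients
```

Because the features exactly span the generating formula, the fit recovers the coefficients almost perfectly; with real measured masses the residuals, and the case for more flexible learned models, would be far larger.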

    Machine learning, which involves building models that can perform tasks without explicit instruction, requires computers to do specific things, including complicated calculations. With recent advances, computers can better meet these demands, which has allowed physicists to more readily incorporate machine learning into their work. 

    “This would have been a less interesting paper in 2019, because there wouldn’t have been enough work to catalog. But now, there is significant work to cite due to the increased use of the techniques,” Boehnlein said.

    Today, machine learning spans all scales and energy ranges of research, from investigations of matter’s building blocks to inquiries into the life cycles of stars. It is also found across the four subfields of nuclear physics: theory, experiment, accelerator science and operations, and data science.

    “We made an effort to compile a comprehensive, collective resource that bridges the efforts in our subfields, which will hopefully spark rich discussions and innovation across nuclear physics,” said co-author Kuchera, who is an associate professor of physics and computer science at Davidson College.

    Machine learning models can be used to help both the design and execution of experiments in nuclear physics. They can also be used to aid in the analysis of those experiments’ data, which can run into the petabytes.

    “I expect machine learning to become embedded into our data collection and analysis,” Kuchera said.

    Machine learning will speed up these processes, which could mean less time and money is needed for beamtime, computer usage, and other experimental costs.

    Connecting theory and experiment

    So far, however, machine learning has developed the strongest foothold in nuclear theory. Nazarewicz, who is a nuclear theorist and chief scientist at the Facility for Rare Isotope Beams at Michigan State University, is especially interested in this subject. He says that machine learning can help theorists do advanced calculations faster, improve and simplify models, make predictions, and help theorists understand the uncertainties of their predictions. It can also be used to study phenomena that researchers cannot conduct experiments on, such as supernova explosions or neutron stars.

    “Neutron stars are not very user friendly,” said Nazarewicz.

    He uses machine learning to study hyperheavy nuclei and elements, which have so many protons and neutrons in their nuclei that they can’t be observed experimentally. 

    “I find the results to be the most impressive in the theory community, particularly the low-energy theory community that Witold is associated with,” Boehnlein said. “They seem to be really embracing these techniques.”

    Boehnlein said theorists have also started to embrace these techniques at Jefferson Lab in their study of proton and neutron structures. Specifically, machine learning can help extract information from complicated theories, such as quantum chromodynamics, the theory that describes the interactions between the quarks and gluons that make up protons and neutrons. 

    The authors predict that machine learning’s involvement in both theory and experiment will speed up these subfields independently, and it will also better interconnect them to speed up the entire loop of the scientific process.

    “Nuclear physics helps us make discoveries to better understand the nature of our universe, and it’s also used for societal applications,” said Nazarewicz. “The faster we can do the cycle between experiment and theory, the faster we will arrive at discoveries and applications.”

    As machine learning continues to grow in this field, the authors expect to see more developments and broader applications incorporating this tool.

    “I think we’re only in the infancy of the application of machine learning to nuclear physics,” Boehnlein said.  

    And, along the way, this paper will act as a reference, even for its own authors. 

    “I hope the paper is used as a resource to understand the current state of machine learning research, allowing us to build from these efforts,” Kuchera said. “My research is centered on machine learning methods, so I absolutely will utilize this paper as a window into the state of machine learning across nuclear physics right now.”

    Further Reading
    Journal Article: Machine Learning in Nuclear Physics

    By Chris Patrick


    Jefferson Science Associates, LLC, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy’s Office of Science.

    Michigan State University operates the Facility for Rare Isotope Beams as a user facility for the U.S. Department of Energy Office of Science, supporting the mission of the Office of Nuclear Physics. 

    DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

    Thomas Jefferson National Accelerator Facility
