ReportWire

Tag: Artificial Intelligence

  • Video: ‘Swifties’ take on Ticketmaster, new AI chatbot coming for your job and Apple sued for AirTag stalking on CNN Nightcap | CNN Business


    The AI chatbot coming for your job, ‘Swifties’ take on Ticketmaster, and Apple sued for AirTag stalking

    Nightcap’s Jon Sarlin talks to futurist Amy Webb about the implications for ChatGPT, the next-gen AI tool that’s blowing everyone’s minds. Plus, Morgan Harper of the American Economic Liberties Project on whether Ticketmaster has met its match in Taylor Swift and her legion of devoted fans. And CNN’s Sam Kelly on the lawsuit filed against Apple by two women alleging their exes used AirTags to stalk them. To get the day’s business headlines sent directly to your inbox, sign up for the Nightcap newsletter.


    Source: CNN


  • UPDATE: New AI Service Summarizes Content: Introducing Notedly


    Press Release


    Dec 8, 2022

    Say goodbye to long reads.

    A new artificial intelligence service by Syntak, LLC is making waves for automatically condensing documents into bullet points. Notedly (https://notedly.ai) prides itself on its ability to process – and understand – documents of any kind, cutting reading time by at least 50% and improving how much readers retain.

    The Boston-based startup was originally designed for students to help with schoolwork, and quickly became popular thanks to its promotional merchandise – including its iconic bucket hats. Since then, it has evolved into a powerful piece of enterprise software aimed at cutting hours of document analysis down to mere minutes.

    So what distinguishes it from the slew of other artificial intelligence tools? A difference in goal.

    Whereas other AI tools aim to automate copywriting, Notedly focuses on making content easier to consume. According to founder Amaan Ali, “We wanted to clear up one of the biggest bottlenecks for businesses, and we think that lies in time spent reading.”

    After several months developing the enterprise experience, Notedly has now launched for companies across the country. Businesses that are interested in learning more can visit https://notedly.ai/enterprise.

    Source: Syntak, LLC


  • How Land O’Lakes convinced its farmers to embrace A.I.


    Beyond the technical challenges of incorporating artificial intelligence into their internal systems, companies face another quandary: how to get employees to buy into the changes that A.I. can bring.

    At Fortune‘s Brainstorm A.I. conference in San Francisco on Monday, Teddy Bekele, CTO of agricultural cooperative Land O’Lakes, and Fiona Tan, CTO of online furniture retailer Wayfair LLC, compared and contrasted how workers at their companies have embraced—or raised an eyebrow at—efforts to introduce A.I. into the supply chain.

    Land O’Lakes is using A.I. to forecast the supply and demand of different products at different times of year. The technology has become a tool used directly by the company’s farmers. Farmers see it as assisting their decisions, not replacing their expertise, Bekele says. Yet getting the farmers fully on board takes some convincing, he says, since planting and harvesting are high-stakes decisions. “Farmers will always try things, they’re entrepreneurs at heart,” Bekele explained. “However, to fully adopt it in their operation, they want to make sure the solution really works.”

    Some A.I. models can seem counterintuitive to farmers at first. Bekele brought up the example of using A.I. models to determine the best locations to plant crops based on climate, topography, and soil. At times, the A.I. suggestion differs from where farmers have planted crops in the past. “On paper, [the A.I. model] doesn’t sound right,” Bekele says. But with some explanation, farmers come around to the idea.

    A.I. can also serve as a sort of second opinion for farmers. They input their own data into the A.I. tools and use the system to confirm their own instincts.

    Wayfair is a digitally native company, so its employees are fairly open to adopting new tech. Yet Tan says a tech-savvy workforce can become frustrated that A.I. doesn’t move faster. “Sometimes there’s impatience for the models to work immediately,” Tan said. “It’s not like it’s deployed today and it’s all going to work magically.”

    When Wayfair adds A.I. to internal processes, it starts with low-stakes tasks to mitigate the risk of errors and ensures humans are still checking the technology’s work, Tan says. “For example, in marketing, the worst that can happen is you pay too much for a bid, so that’s something we can tolerate,” she said. “Yet other areas, like when looking at images or text for the product to ascertain the quality of the furniture, we’ll have models give a suggestion or recommendation, and humans can go back and make sure it looks good,” she said. 



    Lucy Brewster


  • Bank card fraud exploded during the pandemic. Then came the bot hiring boom


    Banks getting bigger is nothing new. The 2019 merger between two big banks, BB&T and SunTrust, which created Truist, was the largest in roughly a decade for the sector, and as big deals do, it led to a review of inefficiencies and opportunities, including in the back office.

    As Jarel Hawkins, former senior vice president of enterprise intelligent automation at Truist, recently told CNBC at its Technology Executive Council Summit, the infrastructure and architecture were about five years behind and in need of modernization. But it was the pandemic as much as the deal itself that led to one important change in how the company looked at the combination of human workforce and technology.

    The use of digital banking boomed during the pandemic lockdowns, and that led to exponential growth in fraud. This led Truist to bring in robotic process automation company UiPath to scale up its use of bots in the fraud process, and to scale it down to the level of low-dollar transactions it previously would not have scrutinized as closely. Fraud was costing the bank a significant amount of money, but before the pandemic, employing a human workforce for every charge was not an efficient or economical way to solve the problem. The costs of the fraud were being refunded to consumers, but the bank was not claiming the costs back from payment processors.

    But once the two banks combined, “it became really valuable,” Hawkins said, just as the pandemic was leading to more fraud at low transaction values. “We went from 37,000 [claims] annually to 26,000 a month,” he said. Now, Hawkins says, it’s “upwards of eight figures” in new money coming back to the bank balance sheet as a result of the automation. 

    Robotic process automation allows a firm to learn where human errors are taking place in existing processes and teach automation to follow processes exactly as intended, while also identifying where human intervention is still needed. “Many cases [AI] can do 99% of the things correct,” Robert Enslin, co-CEO of UiPath, told CNBC, “and then there’s one or two things that a human needs to look at. And by automating so much of the process, you move the process through fast and you allow the humans to actually interact with the system in the areas that [they] absolutely must.” 

    Low-dollar chargebacks ranging from $50 to $100 can now proceed through a back-end process employing “digital workers” and existing business logic to connect with the payment processor and get a claim reimbursed to a customer. Bots also can be scaled up and down. In this case, as the cases of fraud rose, rather than trying to train employees to be able to handle a high volume process, a digital workforce could be scaled up immediately. “You can scale it up and down based on the economics of what’s happening around,” Enslin said.
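The triage step described above can be pictured as a simple rule-based router: claims below a threshold go to automated “digital workers,” larger ones to human analysts. The thresholds, categories, and function name below are illustrative assumptions, not Truist’s or UiPath’s actual implementation.

```python
# Illustrative sketch of rule-based chargeback triage. The dollar thresholds
# and routing categories are hypothetical, chosen only to mirror the
# low-dollar range mentioned in the article.

def route_claim(amount: float, low: float = 50.0, high: float = 100.0) -> str:
    """Decide who handles a disputed charge based on its dollar value."""
    if amount < low:
        return "auto-refund"     # too small to scrutinize individually
    if amount <= high:
        return "bot"             # digital worker files the processor claim
    return "human-review"        # analysts handle higher-value disputes

claims = [12.99, 75.00, 99.50, 250.00]
print([route_claim(c) for c in claims])
# ['auto-refund', 'bot', 'bot', 'human-review']
```

The point of such a scheme is elasticity: when fraud volume spikes, the “bot” lane scales up immediately, and the thresholds can be widened (as Truist later did toward $200–$300 claims) without retraining staff.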

    “Our consumers didn’t see any change in their experience. But we were able to drive that. And one of the great pieces of it is we were able to scale and go after higher value claims, $100, $200, $300, to expand and leverage that process,” Hawkins said.

    Truist may take similar automation to its commercial business and its high-net-worth business next. “Fraud doesn’t care where it is,” Hawkins said.

    The bots, meanwhile, don’t take much time to get up to speed on a new market, and can work overnight.

    Watch the video above to learn more about how UiPath and Truist partnered with each other and the bots to solve an evolving fraud problem.


  • How bots are being deployed inside banks after a pandemic boom in fraud


    UiPath co-CEO Robert Enslin and Jarel Hawkins, former head of enterprise intelligence automation at Truist, join CNBC’s Frank Holland for a CNBC Technology Executive Council Summit conversation about how bots are helping banks fight fraud.


    Tue, Dec 6 2022, 2:25 PM EST


  • Finding the right AI for you


    Newswise — The human genome is three billion letters of code, and each person has millions of variations. While no human can realistically sift through all that code, computers can. Artificial intelligence (AI) programs can find patterns in the genome related to disease much faster than humans can. They also spot things that humans miss. Someday, AI-powered genome readers may even be able to predict the incidence of diseases from cancer to the common cold. Unfortunately, AI’s recent popularity surge has led to a bottleneck in innovation.

    “It’s like the Wild West right now. Everyone’s just doing whatever the hell they want,” says Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo. Just like Frankenstein’s monster was a mix of different parts, AI researchers are constantly building new algorithms from various sources. And it’s difficult to judge whether their creations will be good or bad. After all, how can scientists judge “good” and “bad” when dealing with computations that are beyond human capabilities?

    That’s where GOPHER, the Koo lab’s newest invention, comes in. GOPHER (short for GenOmic Profile-model compreHensive EvaluatoR) is a new method that helps researchers identify the most efficient AI programs to analyze the genome. “We created a framework where you can compare the algorithms more systematically,” explains Ziqi Tang, a graduate student in Koo’s laboratory.

    GOPHER judges AI programs on several criteria: how well they learn the biology of our genome, how accurately they predict important patterns and features, their ability to handle background noise, and how interpretable their decisions are. “AI are these powerful algorithms that are solving questions for us,” says Tang. But, she notes: “One of the major issues with them is that we don’t know how they came up with these answers.”
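A benchmarking framework of this kind can be thought of as scoring every candidate model on the same battery of criteria and ranking the aggregate. The sketch below is a generic illustration of that idea; the model names, criteria, and scores are invented, and GOPHER’s real metrics and API differ.

```python
# Generic sketch of multi-criteria model benchmarking, in the spirit of the
# framework described above. All names and numbers are invented for
# illustration; this is not GOPHER's actual code.

def rank_models(scores: dict[str, dict[str, float]]) -> list[str]:
    """Rank models by their mean score across all evaluation criteria."""
    mean = {name: sum(crit.values()) / len(crit) for name, crit in scores.items()}
    return sorted(mean, key=mean.get, reverse=True)

scores = {
    "model_a": {"accuracy": 0.91, "noise_robustness": 0.62, "interpretability": 0.55},
    "model_b": {"accuracy": 0.88, "noise_robustness": 0.81, "interpretability": 0.74},
}
print(rank_models(scores))  # ['model_b', 'model_a']
```

Evaluating on a fixed, shared battery is what makes comparisons systematic: a model that wins on raw accuracy alone (like `model_a` here) can still rank lower once noise handling and interpretability are weighed in.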

    GOPHER helped Koo and his team dig up the parts of AI algorithms that drive reliability, performance, and accuracy. The findings help define the key building blocks for constructing the most efficient AI algorithms going forward. “We hope this will help people in the future who are new to the field,” says Shushan Toneyan, another graduate student at the Koo lab.

    Imagine feeling unwell and being able to determine exactly what’s wrong at the push of a button. AI could someday turn this science-fiction trope into a feature of every doctor’s office. Similar to video-streaming algorithms that learn users’ preferences based on their viewing history, AI programs may identify unique features of our genome that lead to individualized medicine and treatments. The Koo team hopes GOPHER will help optimize such AI algorithms so that we can trust they’re learning the right things for the right reasons. Toneyan says:  “If the algorithm is making predictions for the wrong reasons, they’re not going to be helpful.”


    Cold Spring Harbor Laboratory


  • Checking blood pressure in a heartbeat, using artificial intelligence and a camera


    Newswise — Australian and Iraqi engineers have designed a system to remotely measure blood pressure by filming a person’s forehead and extracting cardiac signals using artificial intelligence algorithms.

    Using the same remote-health technology they pioneered to monitor vital health signs from a distance, engineers from the University of South Australia and Baghdad’s Middle Technical University have designed a non-contact system to accurately measure systolic and diastolic pressure.

    It could replace the existing uncomfortable and cumbersome method of strapping an inflatable cuff to a patient’s arm or wrist, the researchers claim.

    In a new paper published in Inventions, the researchers describe the technique, which involves filming a person from a short distance for 10 seconds and extracting cardiac signals from two regions in the forehead, using artificial intelligence algorithms.

    The systolic and diastolic readings were around 90 per cent accurate compared to the existing instrument used to measure blood pressure, a digital sphygmomanometer, which is itself subject to errors.

    Experiments were performed on 25 people with different skin tones and under changing light conditions, overcoming the limitations reported in previous studies.
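The core signal-extraction step, recovering a pulse from subtle frame-to-frame colour changes in the skin, can be sketched as below. This is a generic remote-photoplethysmography illustration using a dominant-frequency estimate, not the authors’ actual algorithm; in the study, AI models sit on top of such cardiac signals to estimate blood pressure.

```python
import numpy as np

# Rough sketch of remote photoplethysmography: average the pixel brightness
# over a forehead region in each frame, then take the dominant frequency in a
# plausible heart-rate band as the pulse. Illustrative only; the published
# pipeline extracts signals from two forehead regions and applies AI models.

def estimate_heart_rate(trace: np.ndarray, fps: float) -> float:
    """Return beats per minute from a per-frame brightness trace."""
    trace = trace - trace.mean()                       # remove DC offset
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    power = np.abs(np.fft.rfft(trace))
    band = (freqs >= 0.7) & (freqs <= 4.0)             # ~42-240 bpm
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0

# Synthetic 10-second trace at 30 fps with a 1.2 Hz (72 bpm) pulse component.
rng = np.random.default_rng(0)
fps, seconds = 30.0, 10.0
t = np.arange(int(fps * seconds)) / fps
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(len(t))
print(round(estimate_heart_rate(trace, fps)))  # 72
```

The 10-second window matches the filming duration described above; a longer window gives finer frequency resolution, which is one reason contact-free vital-sign methods trade off capture time against accuracy.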

    “Monitoring blood pressure is essential to detect and manage cardiovascular diseases, the leading cause of global mortality, responsible for almost 18 million deaths in 2019,” UniSA remote sensing engineer Professor Javaan Chahl says.

    “Furthermore, in the past 30 years, the number of adults with hypertension has risen from 650 million to 1.28 billion worldwide.

    “The health sector needs a system that can accurately measure blood pressure and assess cardiovascular risks when physical contact with patients is unsafe or difficult, such as during the recent COVID outbreak.

    “If we can perfect this technique, it will help manage one of the most serious health challenges facing the world today,” Prof Chahl says.

    The cutting-edge technology has come a long way since 2017, when the UniSA and Iraqi research team demonstrated image-processing algorithms that could extract a human’s heart rate from drone video.

    In the past five years the researchers have developed algorithms to measure other vital signs, including breathing rates from 50 metres away, oxygen saturation, temperature, and jaundice in newborns.

    Their non-contact technology was also deployed in the United States during the pandemic to monitor for signs of COVID-19 from a distance.

    Notes for editors

    “Contactless blood pressure estimation system using a computer vision system” is published in Inventions. It is authored by Professor Javaan Chahl from the University of South Australia, and Dr Ali Al-Naji, Ahmed Bashar Fakhri and Mustafa F. Mahmood from Middle Technical University, Baghdad.

    University of South Australia


  • Loving a Fake Person: Redefining Romance for the Virtual Age


    Dec. 2, 2022 – When a young man finds himself falling for a 300-year-old cyborg in the 2019 sci-fi film Alita: Battle Angel, they share the following exchange:   

    “Does it bother you,” the cyborg (Alita) asks, “that I’m not completely human?” 

    “You are the most human person I have ever met,” the young man (Hugo) replies. 

    Cinema is filled with examples like this, of humans hitting it off with non-humans. See also the 2013 film Her, in which Joaquin Phoenix falls for a virtual assistant voiced by Scarlett Johansson, and the 2014 sci-fi flick Ex Machina, where a young programmer (Domhnall Gleeson) grows close to an AI robot that happens to resemble a beautiful woman (Alicia Vikander).

    But for many, the concept goes beyond the silver screen. In Japan, a whole subculture is devoted to romantic video games (RVGs), where players flirt with a computer-generated person and develop a relationship that some players describe as feeling genuine. RVGs are played worldwide but are especially popular among Japanese women (though there are several games for men as well). 

    Bizarre? Maybe even unhealthy? No doubt plenty of people would agree. But psychologist Mayu Koike, PhD, takes a different view. She and her colleagues at Hiroshima University are exploring whether such “virtual romantic relationships” could improve psychological well-being or even help people cope with the stress of real-world romance. So far, the answer to both questions is a tentative yes. 

    “People want to love and be loved, desires which can now be potentially fulfilled by virtual agents,” says Koike, who hopes to “cultivate a new field named ‘romantic anthropomorphism,’ bridging the gap between anthropomorphism and relationship science.” 

    Anthropomorphism – attributing human traits to nonhuman beings – is not new in psychology, but Koike aims to apply the concept to help us understand “virtual romance,” a romantic relationship between a human and a virtual partner.

    Generally speaking, Koike says, her studies showed that if a person felt a connection with a “virtual agent,” their mood lifted – what psychologists call a “positive affect.” 

    “People think playing RVGs can improve their social skills,” Koike says, “and our ongoing study also shows that players want to practice a romantic relationship with a virtual agent before they commit to human-to-human relationships.”

    Her most recent paper, “Virtually in love: The role of anthropomorphism in virtual romantic relationships,” published in the British Journal of Social Psychology, describes three experiments examining the effects of “anthropomorphizing” the virtual partner.

    Results were mixed. When a player anthropomorphized the agent, the relationship felt more authentic. They also felt better and were more likely to desire a real-world relationship with the agent. But in a final experiment in which 104 female players met attractive male actors afterward, there was no correlation between how the women viewed their virtual relationship and how they interacted with the male actors.

    Still, that mood-boost is reason enough to study the process, because “it has a strong potential to improve our real-world relationships,” Koike says. This kind of research “might help to reduce loneliness and improve well-being.”

    Her recent paper builds on her 2020 study in the journal PLOS One titled “What factors attract people to play romantic video games?” Among those factors is a human-like voice and even touch, which is simulated (G-rated) in some games using, for example, a Wii controller to stroke someone’s hair, or a balance board for massage.

    As technology develops, and the quality of virtual agents improves, the potential for virtual romance will increase too, Koike notes. Such relationships could help fulfill the human need to love and be loved, or even serve as a “practicing tool for someone who is anxious about dating.” 

    “We should keep examining how these relationships with virtual agents can affect relationships in the modern world,” she says. 


  • AI in Health Care: No, the Robots Are Not Taking Over


    Dec. 1, 2022 – It’s common for many people to fear the unknown, and exactly how artificial intelligence might transform the health care and medical experience is no exception. 

    People might be afraid, for example, that AI will remove all human interaction from health care in the future. Not true, say the experts. Doctors and other health care workers might fear the technology will replace their clinical judgment and experience. Also not true, experts say. 

    The AI robots are not taking over. 

    AI and machine learning remain technologies that add to human know-how. For example, AI can help track a patient over time better than a health care professional relying on memory alone, can speed up image analysis, and is very good at prediction.

    But AI will never replace human intuition in medicine, experts say.

    “AI is unemotional. It’s fast and very, very smart, but it does not have intuition,” says Naheed Kurji, board chair of the Alliance for Artificial Intelligence in Healthcare and CEO of Cyclica Inc. 

    Machine learning, a form of artificial intelligence where a computer learns over time as it gets more and more data, could sound threatening to a person who might not fully understand the technology. That’s why education and greater awareness are essential to ease any concerns about this growing technology. 

    “You need to have an understanding of human behavior and how to help people overcome their inherent fears of something new,” Kurji says. 

    “All this new science needs to be explained to the public, and machine learning is certainly one that deserves explanation,” says Angeli Moeller, PhD, head of data and integrations generating insights at Roche in Berlin, and board vice chair for the Alliance for Artificial Intelligence in Healthcare.

    “It’s useful to ground it in examples that the general population is familiar with and with technology that has grown,” she says. “On our smartphones, we benefit from a significant amount of machine learning – even if you just look at your Google search or your satellite navigation system.”

    Moeller says it’s helpful to think of AI as an assistant to a doctor, nurse, a caregiver, or even a patient trying to understand more about a medical diagnosis, treatment plan, or prognosis. 

    Also, with big data comes big responsibility. “Health care industry accountability is important,” she says. 

    With that in mind, the Alliance for Artificial Intelligence in Healthcare was created in 2019 as a forum for industry players – drug companies, biotechnology firms, and database entities – to convene and address important AI questions. The group seeks to answer some fundamental questions, including: How do we ensure the ethical and appropriate use of artificial intelligence in health care? How do we make sure that innovation gets to the patient as quickly as possible?

    “If you think about your personal life, a decade ago, your car didn’t have autopilot modes where it drove itself,” says Sastry Chilukuri, co-CEO of Medidata and founder and president of Acorn AI. “You didn’t really have an iPhone – which is like a computer in your hand – much less like have an Apple Watch – which is like another minicomputer on your wrist pumping out all kinds of data.”

    “Our world has dramatically changed over just like the last 15 years,” he says. “It’s very interesting, I think. It’s a good time to be alive.”


  • San Francisco allows police to deploy robots that can kill

    Police in San Francisco have been given the authority to use remote-controlled robots that are capable of using deadly force in emergency situations. Some are concerned about the militarization of police. Jeff Pegues takes a look.


  • AI ‘Simulants’ Could Save Time and Money on New Medications


    Nov. 30, 2022 – Artificial intelligence is poised to make clinical trials and drug development faster, cheaper, and more efficient. Part of this strategy is creating “synthetic control arms” that use data to create “simulants,” or computer-generated “patients” in a trial. 

    This way, researchers can enroll fewer real people and recruit enough participants in half the time. 

    Both patients and drug companies stand to gain, experts say. An advantage for patients, for example, is that the simulants fill the standard-of-care or placebo arm, meaning all the real participants in the study receive the experimental treatment. For drug companies unsure of which of their drug candidates hold the most promise, AI and machine learning can narrow down the prospects.

    “So far, machine learning has primarily been effective at optimizing efficiency – not getting a better drug but rather optimizing the efficiency of screening. AI uses the learnings from the past to make drug discovery more effective and more efficient,” says Angeli Moeller, PhD, head of data and integrations generating insights at drugmaker Roche in Berlin, and vice chair of the Alliance for Artificial Intelligence in Healthcare board. 

    “I’ll give you an example. You might have a thousand small molecules and you want to see which one of them is going to bind to a receptor that’s involved in a disease. With AI, you don’t have to screen thousands of candidates. Maybe you can screen just one hundred,” she says.

    ‘Synthetic’ Trial Participants

    The first clinical trials to use data-created matches for patients – instead of control patients matched for age, sex or other traits – have already started. For example, Imunon Inc., a biotechnology company that develops next-generation chemotherapy and immunotherapy, used a synthetic control arm in its phase 1B trial of an agent added to pre-surgical chemotherapy for ovarian cancer.

    This early study showed researchers it would be worthwhile to continue evaluating the new agent in a phase 2 trial. 

    Using a synthetic control arm is “extremely cool,” says Sastry Chilukuri, co-CEO of Medidata, the company that supplied the data for the Phase 1B trial, and founder and president of Acorn AI.

    “What we have is the first FDA and EMA approval of a synthetic control arm where you’re replacing the entire control arm by using synthetic control patients, and these are patients that you pull out of historic clinical trial data,” he says.

    A Wave of AI-Boosted Research?

    The role of AI in research is expected to grow. To date, most AI-driven drug discovery research has focused on neurology and oncology. The start in these specialties is “probably due to the high unmet medical need and many well-characterized targets,” notes a March 2022 news and analysis piece in the journal Nature. 

    It speculated that this use of AI is just the start of “a coming wave.”

     “There is an increasing interest in the utilization of synthetic control methods [that is, using external data to create controls],” according to a review article in Nature Medicine in September.  

    It said the FDA already approved a medication in 2017 for a form of a rare pediatric neurologic disorder, Batten disease, based on a study with historical control “participants.”

    One example in oncology where a synthetic control arm could make a difference is glioblastoma research, Chilukuri says. This brain cancer is extremely difficult to treat, and patients typically drop out of trials because they want the experimental treatment and don’t want to remain in the standard-of-care control group, he says. Also, “just given the life expectancy, it’s very difficult to complete a trial.” 

    Using a synthetic control arm could speed up research and improve the chances of completing a glioblastoma study, Chilukuri says. “And the patients actually get the experimental treatment.”

    Still Early Days

    AI also could help limit “non-responders” in research.

    Clinical trials “are really difficult, they’re time-consuming, and they’re extremely expensive,” says Naheed Kurji, chair of the Alliance for Artificial Intelligence in Healthcare board, and president and CEO of Cyclica Inc, a data-driven drug discovery company based in Toronto. 

    “Companies are working very hard at finding more efficient ways to bring AI to clinical trials so they get outcomes faster at a lower cost but also higher quality.”

    “There are a lot of clinical trials that fail, not because the molecule is not effective … but because the patients that were enrolled in a trial include a lot of non-responders. They just cancel out the responder data,” says Kurji.

    “You’ve heard a lot of people talk about how we are going to make more progress in the next decade than we did in the last century,” Chilukuri says. “And that’s simply because of this availability of high-resolution data that allows you to understand what’s happening at an individual level.”

    “That is going to create this explosion in precision medicine,” he predicts.

    In some ways, it’s still early days for AI in clinical research. Kurji says, “There’s a lot of work to be done, but I think you can point to many examples and many companies that have made some really big strides.”


  • Bitter friends: Inside the summit aiming to heal EU-US trade rift


    The transatlantic reset between Brussels and Washington is on life support.

    After four years of discord and disruption under Donald Trump, hopes were high that Joe Biden’s presidency would usher in a new era of cooperation between Europe and the U.S. after he declared: “America is back.”

    But when senior officials from both sides meet in Washington on Monday for a twice-yearly summit on technology and trade, the mood will be gloomier than at any time since Trump left office.

    The European Union is up in arms over Biden’s plans for hefty subsidies for made-in-America electric cars, claiming these payments, which partly kick in from January 1, are nothing more than outright trade protectionism. 

    At the same time, the U.S. is increasingly frustrated the 27-country bloc won’t be more aggressive in pushing back against China, accusing some European governments of caving in to Beijing’s economic might. 

    Those frictions are expected to overshadow the so-called EU-U.S. Trade and Technology Council (TTC) summit this week. At a time when the Western alliance is seeking to maintain a show of unity and strength in the face of Russian aggression and Chinese authoritarianism, the geopolitical stakes are high. 

    Biden may have helped matters last Thursday, during a joint press conference with French President Emmanuel Macron, by saying he believed the two sides can still resolve some of the concerns the EU has raised. 

    “We’re going to continue to create manufacturing jobs in America but not at the expense of Europe,” Biden said. “We can work out some of the differences that exist, I’m confident.”

    But, as ever, the details will be crucial.

    It is unclear what Biden can do to stop his Buy American subsidies from hurting European car-makers, for example, many of which come from powerful member countries like France and Germany. The TTC summit offers a crucial early opportunity for the two sides to begin to rebuild trust and start to deliver on Biden’s warm rhetoric.

    Judging by the TTC’s record so far, those attending, who will include U.S. Secretary of State Antony Blinken, will have their work cut out.

    More than 20 officials, policymakers and industry and society groups involved in the summit told POLITICO that the lofty expectations for the TTC have yet to deliver concrete results. Almost all of the individuals spoke on the condition of anonymity to discuss sensitive internal deliberations.

    U.S. Secretary of State Antony Blinken will be attending the TTC | Sean Gallup/Getty Images

    Some officials privately accused their counterparts of broken promises, particularly on trade. Others are frustrated at a lack of progress in 10 working groups on topics like helping small businesses to digitize and tackling climate change. 

    “With these kinds of allies, who needs enemies?” said one EU trade diplomat when asked about tensions around upcoming U.S. electric car subsidies. A senior U.S. official working on the summit hit back: “We need the Europeans to play ball on China. So far, we haven’t had much luck.”

    Much of the EU-U.S. friction is down to three letters: IRA.

    Biden’s Inflation Reduction Act, which provides subsidies to “Buy American” when it comes to purchasing electric vehicles, has infuriated officials in Brussels who see it as undermining the multilateral trading system and a direct threat to the bloc’s rival car industry. 

    “The expectation the TTC was established to provide a forum for precisely these advanced exchanges with a view to preventing trade frictions before they arise appears to have been severely frustrated,” said David Kleimann, a trade expert at the Bruegel think tank in Brussels. 

    Biden’s room for flexibility is limited. The context for the subsidies and tax breaks is his desire to make good on his promise to create more manufacturing jobs ahead of an expected re-election run in 2024. The U.S. itself is hovering on the edge of a possible recession. 

    In addition, the U.S. trade deficit with the EU hit a record $218 billion in 2021, second only to the U.S. trade deficit with China. The U.S. also ran an auto trade deficit of about $22 billion with European countries, with Germany accounting for the largest share of that. 

    Washington has few, if any, meaningful policy levers at its disposal to calm European anger. During a recent visit to the EU, Katherine Tai, the U.S. trade representative, urged European countries to pass their own subsidies to jumpstart Europe’s electric car production, according to three officials with knowledge of those discussions. 

    “It risks being the elephant in the room,” said Emily Benson, a senior fellow at the Center for Strategic and International Studies, a Washington-based think tank, when asked about the electric car dispute. 

    After a push from Brussels, there were increasing signs on Friday that the TTC could still play a role. In the latest version of the TTC’s draft declaration, obtained by POLITICO, both sides commit to addressing the European concerns over Biden’s subsidies, including via the Trade and Tech Council. Again, though, there was no detail on how Washington could resolve the issue.

    Politicians across Europe are already drawing up plans to fight back against Biden’s subsidies. That may include taking the matter to the World Trade Organization, hitting the U.S. with retaliatory tariffs or passing a “Buy European Act” that would nudge EU consumers and businesses to buy locally made goods and components.

    Officials and business leaders pose for a photo during the TTC in September 2021 | Pool photo by Rebecca Droke/AFP via Getty Images

    Privately, Washington has not been in the mood to give ground. Speaking to POLITICO before Biden met Macron, five U.S. policymakers said the IRA was not aimed at alienating allies, stressing that the green subsidies fit the very climate change goals that Europe has long called on America to adopt. 

    “There’s just a huge amount to be done and more frankly to be done than the market would provide for on its own,” said a senior White House official, who was not authorized to speak on the record. “We think the Inflation Reduction Act is reflective of that type of step, but we also think there is a space here for Europe and others, frankly, to take similar steps.”

    China tensions

    Senior politicians attending the summit are expected to play down tensions this week when they announce a series of joint EU-U.S. projects.

    These include funds for two telecommunications projects in Jamaica and Kenya and the announcement of new rules for how the emerging technology of so-called trustworthy artificial intelligence can develop. There’s also expected to be a plan for more coordination to highlight potential blockages in semiconductor supply chains, according to the draft summit statement obtained by POLITICO. 

    Yet even on an issue like microchips — where both Washington and Brussels have earmarked tens of billions of euros to subsidize local production — geopolitics intervenes.

    For months, U.S. officials have pushed hard for their European counterparts to agree to export controls to stop high-end semiconductor manufacturing equipment being sent to China, according to four officials with knowledge of those discussions. 

    Washington already passed legislation to stop Chinese companies from using such American-made hardware. The White House had been eager for the European Commission to back similar export controls, particularly as the Dutch firm ASML produced equipment crucial for high-end chipmaking worldwide. 

    Yet EU officials preparing for the TTC meeting said such requests had never been made formally to Brussels. The draft summit communiqué makes just a passing reference to China and threats from so-called non-market economies.

    Unlike the U.S., the EU remains divided on how to approach Beijing as some countries like Germany have long-standing economic ties with Chinese businesses that they are reluctant to give up. Without a consensus among EU governments, Brussels has little to offer Washington to help its anti-China push.

    “In theory, the TTC is not about China, but in practice, every discussion with the U.S. is,” said one senior EU official, speaking on the condition of anonymity. “If we talk with Katherine Tai about Burger King, it has an anti-China effect.”

    Gavin Bade, Clea Caulcutt, Samuel Stolton and Camille Gijs contributed reporting.


    Mark Scott, Barbara Moens and Doug Palmer


  • Early research suggests promising use of AI to predict risk of heart attack or stroke using a single chest X-ray | CNN




    CNN — 

    Early research suggests a promising use of artificial intelligence to predict the 10-year risk of death from a heart attack or stroke from a single chest X-ray.

    The preliminary findings were presented Tuesday at the annual meeting of the Radiological Society of North America. The research is in the final draft stages and has not been submitted for publication in a medical journal.

    Researchers used nearly 150,000 chest X-rays to train an artificial intelligence program to identify patterns in the images associated with risk from major cardiovascular disease events. They tested the program on a separate group of about 11,000 people and found “significant association” between the risk level predicted by the AI and the actual occurrence of a major cardiovascular disease event.

    The clinical standard for analyzing risk from cardiovascular disease is the atherosclerotic cardiovascular disease (ASCVD) risk score, a calculator that weights various patient data points that have been found to have a high association with adverse cardiovascular events, including age, blood pressure and history of smoking.

    Statin medication is recommended for people with a 10-year risk of 7.5% or higher. The AI model uses the same risk thresholds as the established risk calculator, and early findings suggest that it works just as well.
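The 7.5% cutoff works as a simple threshold rule, whichever model produces the risk score. A minimal Python sketch of that decision logic, with hypothetical patient risk values:

```python
# Sketch of the ASCVD-style decision rule the article describes:
# a predicted 10-year risk of 7.5% or higher triggers a statin
# recommendation, whether the score comes from the classic
# calculator or from an AI model.
STATIN_THRESHOLD = 0.075  # 7.5% ten-year risk

def recommend_statin(ten_year_risk: float) -> bool:
    """Return True when the predicted 10-year risk meets the threshold."""
    return ten_year_risk >= STATIN_THRESHOLD

# Hypothetical predicted risks for three patients
for risk in (0.03, 0.075, 0.12):
    print(f"risk={risk:.1%} -> statin recommended: {recommend_statin(risk)}")
```

The same threshold applies to either scoring method, which is what lets the researchers compare the AI’s predictions directly against the established calculator.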

    “We’ve long recognized that X-rays capture information beyond traditional diagnostic findings, but we haven’t used this data because we haven’t had robust, reliable methods,” said Dr. Jakob Weiss, the lead researcher and a radiologist affiliated with Massachusetts General Hospital and the AI in Medicine program at Harvard Medical School’s Brigham and Women’s Hospital.

    Sometimes, the AI findings align with a traditional radiology reading, but other times, it picks up on things that may have been missed, he said.

    “Part of it is anatomical alterations that we would also pick up with our naked eye and that make physiological sense. Let’s say there’s increased blood pressure or cardiac failure – these are findings that we can pick up in a normal chest radiograph as well. But I think a lot of the information captured or extracted is somewhere embedded in the scan, but we can’t make sense of it as traditionally trained radiologists as of now,” Weiss said.

    “It has this black box character to it,” he said, which can sometimes make it hard to communicate risk to patients without an explanation to pinpoint.

    Dr. Donald Lloyd-Jones, chair of preventive medicine at Northwestern University’s Feinberg School of Medicine and former president of the American Heart Association, was co-chair of the risk assessment panel when the ASCVD risk calculator was created in 2013 and a key player in 2018 when the guidelines were updated to emphasize the relationship between the risk score and personal medical history.

    He was not involved in the new AI research but says it’s important to keep the field moving forward.

    “This is exactly the kind of application that artificial intelligence is best for,” he said. “So we need to continue to do things like this to really understand if we can find, particularly, patients who would otherwise slip through the cracks. I think that’s where it may be most useful.”

    But collecting all of the patient data points that go into the established risk calculator is still critical – because they’re actionable. And whether risk is calculated using a statistical formula or an AI model, the most successful outcomes will still require personalized patient assessments.

    “We don’t cure smoking by a chest X-ray. We actually need to work with the patient to find ways to get them to stop smoking,” Lloyd-Jones said. “The risk calculator is one part of risk assessment, but it’s not the only part. It’s a process that involves both the patient and the doctor in a discussion about what is the patient’s risk and how much we think a statin would help them.”

    For their research, Weiss and co-authors trained the AI using chest X-rays from participants in the National Cancer Institute’s Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial. It was tested on people who had a routine outpatient chest X-ray at Mass General Brigham and were potentially eligible for statin therapy, with an average age of 60.

    Additional research, including a controlled randomized trial, is necessary to validate the deep learning model.


  • Can AI Drive More Diversity in Drug Development?


    Nov. 29, 2022 – Artificial intelligence could help improve diversity, equity, and inclusion in clinical trials and drug development by overcoming some traditional human bias in these areas, but we’re not there yet, experts say. The technology could also assist doctors with data insights to make diagnosis and treatment more precise. 

    It starts with quality. Artificial intelligence (AI) relies on large amounts of data to create algorithms – or computer instructions – to develop best practices and predictions. But the instructions are only as good as the data used to create them. And people are the ones creating the data.

    “Underpinning the development of AI technologies are people, and those people have their own biases,” says Naheed Kurji, the chair of the board for the Alliance for Artificial Intelligence in Healthcare. “As a result, the algorithms will have their own biases.”

    Technology that uses speech to diagnose disease is an example. 

    “There are many cases, examples where companies have failed to recognize the differences in speech across different cultures,” says Kurji. When technology is based on speech patterns of a limited demographic, “then when that model is applied in the real world to a different demographic with a different accent, that model fails.”

    “As a result, it’s not representative.”

    Another example is genetic and genomic data. 

    “Give or take, 90-plus percent of genetic and genomic data has originated from people of European descent. It’s not from people from the continent of Africa, Southeast Asia, Asia, or South America,” says Kurji, who is also president and CEO of Cyclica Inc., a data-driven drug discovery company based in Toronto. 

    Therefore, “a lot of research that has been done on that level of data is inherently biased,” he says. 

    To Be Fair 

    Creating data that takes diversity, equity, and inclusion of people and cultures around the world into account is not a hopeless challenge. But it will take time, experts say. Once that is achieved, AI should be closer to being free of human and systemic biases.

    Greater awareness is essential. 

    “The solution to the problem comes from people inherently understanding that the bias exists,” Kurji says, and then only including fair and balanced data that passes a diversity test.
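As an illustration of the kind of “diversity test” Kurji alludes to, here is a crude sketch that flags when a single group dominates a dataset. The sample records and the 50% cutoff are invented for this example, not part of any real pipeline:

```python
# Toy "diversity test": warn when one demographic group makes up
# a disproportionate share of the training data.
from collections import Counter

def dominant_group_share(records, key="ancestry"):
    """Return the largest group under `key` and its share of the dataset."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    group, n = counts.most_common(1)[0]
    return group, n / total

# Hypothetical records echoing the imbalance described in the article
records = [{"ancestry": "European"}] * 9 + [{"ancestry": "African"}]
group, share = dominant_group_share(records)
if share > 0.5:
    print(f"warning: {group} makes up {share:.0%} of the dataset")
```

A real check would of course weigh many attributes at once, but even a screen this simple makes the bias visible before a model is trained on it.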

    Choosing More Wisely?

    Another promising avenue for AI is streamlining the drug development process, narrowing down potential drug candidates, and making clinical trials more cost-effective. 

    “If the source data has challenges and limitations, then the AI is going to just keep propagating those limitations,” agrees Sastry Chilukuri, co-CEO of the data-driven clinical trial company Medidata and founder and president of Acorn AI. “The source data has to get more representative and has to get more equitable for the AI to reflect what’s happening.”

    When it comes to human or systemic bias in drug development, “it would be too much of a simplification to say AI or machine learning can fix it,” says Angeli Moeller, PhD, head of data and integrations generating insights at Roche in Berlin. “But responsible use of AI and machine learning can help us identify biases and find ways to mitigate any negative effects it might cause.”

    Silent Partners?

    At the same time AI aims to streamline drug development, the technology also can help make all doctors better at their jobs, experts say. AI would, for instance, help by spreading knowledge and expertise far and wide, sharing best practices from doctors with a lot of experience in more complex patients. This would help guide those who treat only a few such patients each year. 

    The surgical volumes in New York City or in Delhi could be as high as hundreds of patients a year, Chilukuri says. “But if you go to interiors of the U.S. like Nebraska, the surgeon just doesn’t see that much volume.” 

    AI could help doctors “by providing the kind of tools that allow them to be able to deliver the same top-notch care to all of their populations a lot faster,” he says.

    Boosting Efficiency 

    AI could help target therapy by using data to identify patients at highest risk. The technology also could improve some bottleneck areas in medicine, such as the time it takes to interpret radiology images, Kurji says. 

    There is an AI company “whose entire business model is not to replace your radiologist but to make radiologists better,” he notes. One of the company’s aims is “to prevent death or severe ailment from radiology scans that get missed or that get stacked on the pile and just don’t get acted on fast enough for that patient.” 

    Radiologists are so busy, they may have only 30 seconds or less to interpret each scan, says Chilukuri. AI can flag a lesion of potential concern, but it can also compare an image to past scans on the same patient. This view afforded by AI does not just apply to radiology but across data-driven areas of medicine. 

    Advancing Personalized Medicine

    AI could also guide a personal approach to surgery, “because it’s not like humans come in small, medium and large,” Chilukuri says. The technology could help surgeons determine exactly where to operate on an individual patient.

    Moeller agrees that AI holds potential for boosting personalized medicine. 

    “AI can help with diagnosis and risk prediction, which can mean earlier interventions,” says Moeller, who’s also vice chair of the Alliance for Artificial Intelligence in Healthcare board.  “If you look, for instance, at a diabetic patient, what is the likelihood that he or she might develop eye problems from diabetic macular edema?”

    The technology could also help with getting a look at the big picture. 

    “Machine learning can look for patterns in a population that might not be in your medical textbook,” Moeller says. 

    Beyond diagnosis and treatment, AI also could help with recovery by customizing rehabilitation for each patient, Chilukuri predicts. 

    “It’s not like every person is going to rehab the exact same way. So, you have highly individualized AI plans that allow you to actually stay on track and predict where you’re going.”


  • This AI-Powered App Makes You the Subject of a Fred again.. Song – EDM.com


    You can now generate an endless feed of song titles using Fred again..‘s distinct discographic nomenclature—all with the help of artificial intelligence.

    If you’ve ever wondered what being the subject of a Fred again.. song would look like, visit the AI-powered “You as a Fred again.. Song” site. Enter your first name and a selfie, and voilà—you too can (spiritually) become a part of Fred’s Actual Life 3 universe.

    Built by Claire Wang, the programmatic platform applies Fred’s signature transparent blue tint to your image and generates a song title with your name, followed by a tongue-in-cheek parenthetical ad-lib.
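The tint effect can be pictured as an alpha blend of each pixel toward a blue overlay. This sketch is only a guess at the technique; the blue colour and 40% opacity are assumptions, not the site’s actual parameters:

```python
# Hypothetical sketch of a "transparent blue tint": alpha-blend
# each RGB pixel toward a fixed blue overlay colour.
def tint_pixel(rgb, overlay=(30, 60, 220), alpha=0.4):
    """Blend one pixel toward `overlay` with opacity `alpha` (0..1)."""
    return tuple(round((1 - alpha) * c + alpha * o) for c, o in zip(rgb, overlay))

# Blend a mid-grey pixel toward blue
print(tint_pixel((128, 128, 128)))
```

Applied over every pixel of an uploaded selfie, the same blend produces the uniform blue wash seen on the generated covers.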


    Cameron Sunkel


  • How Bitcoin And Artificial Intelligence Will Free Your Time


    This is an opinion editorial by Sydney Bright, a professional science writer on the topic of health benefits from mindfulness-based practices.

    Where is technology taking us? Will robots surpass our intelligence and replace us altogether? Will we combine with machines in some symbiotic merge that creates a new super being? Or are machines merely tools that will allow our more fundamental nature to thrive? In this article, I will argue that technology is how human beings will be able to return to a more natural life that is devoid of the harsh realities that existed 10,000 years ago.


    Sydney Bright


  • A Track Sung By an AI Voice Has Eclipsed 100 Million Streams – EDM.com


    While artists work hard to produce new music for fans and increase their streaming numbers, an A.I. has effortlessly recorded a song that eclipsed 100 million streams.

    According to a report by Music Business Worldwide, Chinese streaming giant Tencent Music Entertainment (TME) has recorded and released over 1,000 new songs that feature AI-generated vocals mimicking the human voice. One track in particular, titled “Today,” has become the first song with an AI voice to surpass the nine-digit milestone. That led to just under $350,000 in streaming revenue, per MBW.


    Mikala Lugen


  • Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI


    A lobby group backed by Elon Musk and associated with a controversial ideology popular among tech billionaires is fighting to prevent killer robots from terminating humanity, and it’s taken hold of Europe’s Artificial Intelligence Act to do so.

    The Future of Life Institute (FLI) has over the past year made itself a force of influence on some of the AI Act’s most contentious elements. Despite the group’s links to Silicon Valley, Big Tech giants like Google and Microsoft have found themselves on the losing side of FLI’s arguments.

    In the EU bubble, the arrival of a group whose actions are colored by fear of AI-triggered catastrophe rather than run-of-the-mill consumer protection concerns was received like a spaceship alighting in the Schuman roundabout. Some worry that the institute embodies a techbro-ish anxiety about low-probability threats that could divert attention from more immediate problems. But most agree that during its time in Brussels, the FLI has been effective. 

    “They’re rather pragmatic and they have legal and technical expertise,” said Kai Zenner, a digital policy adviser to center-right MEP Axel Voss, who works on the AI Act. “They’re sometimes a bit too worried about technology, but they raise a lot of good points.” 

    Launched in 2014 by MIT academic Max Tegmark and backed by tech grandees including Musk, Skype’s Jaan Tallinn, and crypto wunderkind Vitalik Buterin, FLI is a nonprofit devoted to grappling with “existential risks” — events able to wipe out or doom humankind. It counts other hot shots like actors Morgan Freeman and Alan Alda and renowned scientists Martin (Lord) Rees and Nick Bostrom among its external advisers.

    Chief among those menaces — and FLI’s priorities — is artificial intelligence running amok.

    “We’ve seen plane crashes because an autopilot couldn’t be overruled. We’ve seen a storming of the U.S. Capitol because an algorithm was trained to maximize engagement. These are AI safety failures today — as these systems become more powerful, harms might become worse,” Mark Brakel, FLI director of European policy, said in an interview.

    But the lobby group faces two PR problems. First, Musk, its most famous backer, is at the center of a storm since he started mass firings at Twitter as its new owner, catching the eye of regulators, too. Musk’s controversies could cause lawmakers to get skittish about talking to FLI. Second, the group’s connections to a set of beliefs known as effective altruism are raising eyebrows: The ideology faces a reckoning and is most recently being blamed as a driving force behind the scandal around cryptocurrency exchange FTX, which has unleashed financial carnage. 

    How FLI pierced the bubble

    The arrival of a lobby group fighting off extinction, misaligned artificial intelligence and killer robots was bound to be refreshing to otherwise snoozy Brussels policymaking.

    FLI’s Brussels office opened in mid-2021, as discussions about the European Commission’s AI Act proposal were kicking off.

    “We would prefer AI to be developed in Europe, where there will be regulations in place,” Brakel said. “The hope is that people take inspiration from the EU.”

    A former diplomat, the Dutch-born Brakel joined the institute in May 2021. He chose to work in AI policy as a field that was both impactful and underserved. Policy researcher Risto Uuk joined him two months later. A skilled digital operator — he publishes his analyses and newsletter from the domain artificialintelligenceact.eu — Uuk had previously done AI research for the Commission and the World Economic Forum. He joined FLI out of philosophical affinity: like Tegmark, Uuk subscribes to the tenets of effective altruism, a value system prescribing the use of hard evidence to decide how to benefit the largest number of people.

    Since starting in Brussels, the institute’s three-person team (with help from Tegmark and others, including law firm Dentons) has deftly spearheaded lobbying efforts on little-known AI issues.

    Elon Musk is one of the Future of Life Institute’s most prominent backers | Carina Johansen/NTB/AFP via Getty Images

    Exhibit A: general-purpose AI — software like speech-recognition or image-generating tools used in a vast array of contexts and sometimes affected by biases and dangerous inaccuracies (for instance, in medical settings). General-purpose AI was not mentioned in the Commission’s proposal, but wended its way into the EU Council’s final text and is guaranteed to feature in Parliament’s position.

    “We came out and said, ‘There’s this new class of AI — general-purpose AI systems — and the AI Act doesn’t consider them whatsoever. You should worry about this,'” Brakel said. “This was not on anyone’s radar. Now it is.”

    The group is also playing on European fears of technological domination by the U.S. and China. “General-purpose AI systems are built mainly in the U.S. and China, and that could harm innovation in Europe, if you don’t ensure they abide by some requirements,” Brakel said, adding this argument resonated with center-right lawmakers with whom he recently met. 

    Another of FLI’s hobbyhorses is outlawing AI able to manipulate people’s behavior. The original proposal bans manipulative AI, but that is limited to “subliminal” techniques — which Brakel thinks would create loopholes. 

    But the AI Act’s co-rapporteur, Romanian Renew lawmaker Dragoș Tudorache, is now pushing to make the ban more comprehensive. “If that amendment goes through, we would be a lot happier than we are with the current text,” Brakel said.

    So smart it made crypto crash

    While the group’s input on key provisions in the AI bill was welcomed, many in Brussels’ establishment look askance at its worldview.

    Tegmark and other FLI backers adhere to what’s referred to as effective altruism (or EA). A strand of utilitarianism codified by philosopher William MacAskill — whose work Musk called “a close match for my philosophy” — EA dictates that one should better the lives of as many people as possible, using a rationalist fact-based approach. At a basic level, that means donating big chunks of one’s income to competent charities. A more radical, long-termist strand of effective altruism demands that one strive to minimize risks able to kill off a lot of people — and especially future people, who will greatly outnumber existing ones. That means that preventing the potential rise of an AI whose values clash with humankind’s well-being should be at the top of one’s list of concerns.

    A critical take on FLI is that it is furthering this interpretation of the so-called effective altruism agenda, one supposedly uninterested in the world’s current ills — such as racism, sexism and hunger — and focused on sci-fi threats to yet-to-be-born folks. Timnit Gebru, an AI researcher whose acrimonious exit from Google made headlines in 2020, has lambasted FLI on Twitter, voicing “huge concerns” about it.

    “They are backed by billionaires including Elon Musk — that already should make people suspicious,” Gebru said in an interview. “The entire field around AI safety is made up of so many ‘institutes’ and companies billionaires pump money into. But their concept of AI safety has nothing to do with current harms towards marginalized groups — they want to reorient the entire conversation into preventing this AI apocalypse.”

    Effective altruism’s reputation has taken a hit in recent weeks after the fall of FTX, a bankrupt exchange that lost at least $1 billion in customers’ cryptocurrency assets. Its disgraced CEO Sam Bankman-Fried used to be one of EA’s darlings, talking in interviews about his plan to make bazillions and give them to charity. As FTX crumbled, commentators argued that Effective Altruism ideology led Bankman-Fried to cut corners and rationalize his recklessness. 

    Both MacAskill and FLI donor Buterin defended EA on Twitter, saying that Bankman-Fried’s actions contrasted with the philosophy’s tenets. “Automatically downgrading every single thing SBF believed in is an error,” wrote Buterin, who invented the Ethereum blockchain, and bankrolls FLI’s scholarship for AI existential risk research.

    Brakel said that the FLI and EA were two distinct things, and FLI’s advocacy was focused on present problems, from biased software to autonomous weapons, e.g. at the United Nations level. “Do we spend a lot of time thinking about what the world would look like in 400 years? No,” he said. (Neither Brakel nor the FLI’s EU representative, Claudia Prettner, call themselves effective altruists.)

    Californian ideology

    Another critique of FLI’s efforts to stave off evil AI argues that they obscure a techno-utopian drive to develop benevolent human-level AI. At a 2017 conference, FLI advisers — including Musk, Tegmark and Skype’s Tallinn — debated the likelihood and the desirability of smarter-than-human AI. Most panelists deemed “superintelligence” bound to happen; half of them deemed it desirable. The conference’s output was a series of (fairly moderate) guidelines on developing beneficial AI, which Brakel cited as one of FLI’s foundational documents.

    That techno-optimism led Emile P. Torres, a Ph.D. candidate in philosophy who used to collaborate with FLI, to ultimately turn against the organization. “None of them seem to consider that maybe we should explore some kind of moratorium,” Torres said. Raising such points with an FLI staffer, Torres said, led to a sort of excommunication. (Torres’s articles have been taken down from FLI’s website.)

    Within Brussels, the worry is that going ahead, FLI might change course from its current down-to-earth incarnation and steer the AI debate toward far-flung scenarios. “When discussing AI at the EU level, we wanted to draw a clear distinction between boring and concrete AI systems and sci-fi questions,” said Daniel Leufer, a lobbyist with digital rights NGO Access Now. “When earlier EU discussions on AI regulation happened, there were no organizations in Brussels placing focus on topics like superintelligence — it’s good that the debate didn’t go in that direction.”

    Those who regard the FLI as the spawn of Californian futurism point to its board and its wallet. Besides Musk, Tallinn and Tegmark, donors and advisers include researchers from Google and OpenAI, Meta co-founder Dustin Moskovitz’s Open Philanthropy, the Berkeley Existential Risk Initiative (which in turn has received funding from FTX) and actor Morgan Freeman. 

    In 2020 most of FLI’s global funding ($276,000 out of $482,479) came from the Silicon Valley Community Foundation, a charity favored by tech bigwigs like Mark Zuckerberg; 2021 accounts haven’t been released yet. 

    Brakel denied that the FLI is cozy with Silicon Valley, saying that the organization’s work on general-purpose AI made life harder for tech companies. Brakel said he had never spoken to Musk. Tegmark, meanwhile, is in regular touch with the members of the scientific advisory board, which includes Musk. 

    In Brakel’s opinion, what the FLI is doing is akin to early-day climate activism. “We currently see the warmest October ever. We worry about it today, but we also worry about the impact in 80 years’ time,” he said last month. “[There] are AI safety failures today — and as these systems become more powerful, the harms might become worse.”


    Gian Volpicelli


  • Facebook Says It Has Created A ‘Human-Level’ Board Game AI


    Screenshot: YouTube

    Facebook, or as we’re supposed to call them now, Meta, announced earlier today that their CICERO artificial intelligence has achieved “human-level performance” in the board game Diplomacy, which is notable for the fact that it’s a game built on human interaction, not moves and manoeuvres (like, say, chess).

    Here’s a quite frankly distressing trailer:

    CICERO: The first AI to play Diplomacy at a human level | Meta AI

    If you’ve never played Diplomacy, and so are maybe wondering what the big deal is, it’s a board game first released in the 1950s that is played mostly by people just sitting around a table (or breaking off into rooms) and negotiating stuff. There are no dice or cards affecting play; everything is determined by humans communicating with other humans.

    So for an AI’s creators to say that it is playing at a “human level” in a game like this is a pretty bold claim! One that Meta backs up by saying that CICERO is actually operating on two different levels: one crunching the progress and status of the game, the other trying to communicate with human players in a way we would understand and interact with.

    Meta have roped in “Diplomacy World Champion” Andrew Goff to support their claims; he says, “A lot of human players will soften their approach or they’ll start getting motivated by revenge and CICERO never does that. It just plays the situation as it sees it. So it’s ruthless in executing to its strategy, but it’s not ruthless in a way that annoys or frustrates other players.”

    That sounds optimal, but as Goff says, maybe too optimal. Which reflects that while CICERO is playing well enough to keep up with humans, it’s far from perfect. As Meta themselves say in a blog post, CICERO “sometimes generates inconsistent dialogue that can undermine its objectives”, and my own criticism would be that every example they provide of its communication (like the one below) makes it look like a psychopathic office worker terrified that if they don’t end every sentence with !!! you’ll think they’re a terrible person.

    Image: Meta

    Of course the ultimate goal with this program isn’t to win board games. It’s simply using Diplomacy as a “sandbox” for “advancing human-AI interaction”:

    While CICERO is only capable of playing Diplomacy, the technology behind this achievement is relevant to many real world applications. Controlling natural language generation via planning and RL could, for example, ease communication barriers between humans and AI-powered agents. For instance, today’s AI assistants excel at simple question-answering tasks, like telling you the weather, but what if they could maintain a long-term conversation with the goal of teaching you a new skill? Alternatively, imagine a video game in which the non-player characters (NPCs) could plan and converse like people do — understanding your motivations and adapting the conversation accordingly — to help you on your quest of storming the castle.
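    To make the “two levels” idea concrete, here’s a minimal sketch (in Python) of the kind of loop Meta describes — a strategic planner picks intended orders, a dialogue generator produces a message conditioned on those intents, and a filter drops messages that don’t match the plan. Every name here (`plan_intents`, `generate_message`, `filter_message`, the `GameState` class) is hypothetical; this is an illustration of the general architecture, not Meta’s actual code or API.

    ```python
    # Hypothetical sketch of an intent-conditioned dialogue loop.
    # None of these functions come from Meta's CICERO codebase; real
    # versions would use RL-trained planners and large language models.

    from dataclasses import dataclass, field


    @dataclass
    class GameState:
        turn: int
        # power name -> list of unit positions (e.g. province codes)
        units: dict = field(default_factory=dict)


    def plan_intents(state: GameState, power: str) -> list:
        """Stand-in for the strategic planner: choose orders for one power.

        A real planner searches the joint action space using learned
        value estimates; here we just issue HOLD orders.
        """
        return ["A {} HOLD".format(pos) for pos in state.units.get(power, [])]


    def generate_message(intents: list, recipient: str) -> str:
        """Stand-in for the dialogue model, conditioned on planned orders."""
        return "To {}: this turn I intend to {}.".format(
            recipient, ", ".join(intents)
        )


    def filter_message(msg: str, intents: list) -> str:
        """Crude consistency filter: only send messages that actually
        mention every planned order, to avoid dialogue that undermines
        the plan (the 'inconsistent dialogue' failure mode above)."""
        return msg if all(intent in msg for intent in intents) else ""


    state = GameState(turn=1, units={"FRANCE": ["PAR", "MAR"]})
    intents = plan_intents(state, "FRANCE")
    msg = filter_message(generate_message(intents, "GERMANY"), intents)
    print(msg)
    ```

    The point of the design is that language generation is downstream of planning: the model decides what it wants to do first, then talks about it, which is roughly why (per Goff) it never negotiates out of spite.
    
    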

    I may not be a billionaire Facebook executive, but instead of spending all this time and money making AI assistants better, something nobody outside of AI research and company expenditure seems to care about, could we not just…hire humans I can speak to instead?


    Luke Plunkett
