ReportWire

Tag: generative ai

  • From basement to battlefield: Ukrainian startups create low-cost robots to fight Russia

    NORTHERN UKRAINE (AP) — Struggling with manpower shortages, overwhelming odds and uneven international assistance, Ukraine hopes to find a strategic edge against Russia in an abandoned warehouse or a factory basement.

    An ecosystem of laboratories in hundreds of secret workshops is leveraging innovation to create a robot army that Ukraine hopes will kill Russian troops and save its own wounded soldiers and civilians.

    Defense startups across Ukraine — about 250 according to industry estimates — are creating the killing machines at secret locations that typically look like rural car repair shops.

    Employees at a startup run by entrepreneur Andrii Denysenko can put together an unmanned ground vehicle called the Odyssey in four days at a shed used by the company. Its most important feature is the price tag: $35,000, or roughly 10% of the cost of an imported model.

    Denysenko asked that The Associated Press not publish details of the location to protect the infrastructure and the people working there.

    The site is partitioned into small rooms for welding and body work. That includes making fiberglass cargo beds, spray-painting the vehicles gun-green and fitting basic electronics, battery-powered engines, off-the-shelf cameras and thermal sensors.

    The military is assessing dozens of new unmanned air, ground and marine vehicles produced by the no-frills startup sector, whose production methods are far removed from giant Western defense companies’.

    A fourth branch of Ukraine’s military — the Unmanned Systems Forces — joined the army, navy and air force in May.

    Engineers take inspiration from articles in defense magazines or online videos to produce cut-price platforms. Weapons or smart components can be added later.

    “We are fighting a huge country, and they don’t have any resource limits. We understand that we cannot spend a lot of human lives,” said Denysenko, who heads the defense startup UkrPrototyp. “War is mathematics.”

    One of its drones, the car-sized Odyssey, spun on its axis and kicked up dust as it rumbled forward in a cornfield in the north of the country last month.

The 800-kilogram (1,750-pound) prototype, which looks like a small, turretless tank with its wheels on tracks, can travel up to 30 kilometers (18.5 miles) on one charge of a battery the size of a small beer cooler.

    The prototype acts as a rescue-and-supply platform but can be modified to carry a remotely operated heavy machine gun or sling mine-clearing charges.

    “Squads of robots … will become logistics devices, tow trucks, minelayers and deminers, as well as self-destructive robots,” a government fundraising page said after the launch of Ukraine’s Unmanned Systems Forces. “The first robots are already proving their effectiveness on the battlefield.”

Mykhailo Fedorov, the deputy prime minister for digital transformation, is encouraging citizens to take free online courses and assemble aerial drones at home. He wants Ukrainians to make a million flying machines a year.

    “There will be more of them soon,” the fundraising page said. “Many more.”

Denysenko’s company is working on projects including a motorized exoskeleton that would boost a soldier’s strength and carrier vehicles to transport a soldier’s equipment and even help them up an incline.

“We will do everything to make unmanned technologies develop even faster. (Russia’s) murderers use their soldiers as cannon fodder, while we lose our best people,” Fedorov wrote in an online post.

Ukraine has semi-autonomous attack drones and counter-drone weapons endowed with AI, and the combination of low-cost weapons and artificial intelligence tools worries many experts, who say cheap drones will enable their proliferation.

Voices ranging from technology leaders to the United Nations and the Vatican worry that the use of drones and AI in weapons could reduce the barrier to killing and dramatically escalate conflicts.

    Human Rights Watch and other international rights groups are calling for a ban on weapons that exclude human decision making, a concern echoed by the U.N. General Assembly, Elon Musk and the founders of the Google-owned, London-based startup DeepMind.

    “Cheaper drones will enable their proliferation,” said Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, Australia. “Their autonomy is also only likely to increase.”

    ___

    Follow AP’s coverage of the war at https://apnews.com/hub/russia-ukraine

  • Senator calls out Big Tech’s new approach to poaching talent

    In the race to stay ahead in artificial intelligence, the biggest technology companies are swallowing up the talent and products of innovative AI startups without formally acquiring them.

    Now three members of the U.S. Senate are calling for an investigation.

    San Francisco-based Adept announced a deal late last month that will send its CEO and key employees to Amazon and give the e-commerce giant a license to Adept’s AI systems and datasets.

    Some call it a “reverse acqui-hire.” Others call it poaching. Whatever it’s called, it’s alarming to some in Washington who see it as an attempt to bypass U.S. laws that protect against monopolies.

    “I’m very concerned about the massive consolidation that’s going on in AI,” U.S. Sen. Ron Wyden, an Oregon Democrat, told The Associated Press. “The technical lingo is ‘up and down the stack’. But, in plain English, a few companies control a major portion of the market, and just concentrate — rather than on innovation — trying to buy out everybody else’s talent.”

    So-called “acqui-hires,” in which one company acquires another to absorb talent, have been common in the tech industry for decades, said Michael A. Cusumano, a business professor at the Massachusetts Institute of Technology. But what’s happening in the AI industry is a little different.

    “To acquire only some employees or the majority, but not all, license technology, leave the company functioning but not really competing, that’s a new twist,” Cusumano said.

    A similar maneuver happened at the AI company Inflection in March when Microsoft hired its co-founder and CEO Mustafa Suleyman to head up Microsoft’s consumer AI business, along with Inflection’s chief scientist and several of its top engineers and researchers. That arrangement has already attracted some scrutiny from regulators, particularly in Europe.

Wyden also wants U.S. regulators to investigate the Amazon-Adept deal. He and fellow Democratic Sens. Elizabeth Warren of Massachusetts and Peter Welch of Vermont sent a letter Friday telling antitrust enforcers at the Justice Department and the Federal Trade Commission that “sustained, pointed action is necessary to fight undue consolidation across the industry.”

    Amazon didn’t respond to a request for comment Friday.

“What is going on here is instead of buying startups outright, big tech companies are trying a new play,” Wyden said in an interview before sending the letter. “They don’t want to formally acquire the companies, avoiding the antitrust scrutiny. I think that’s going to be the playbook until the FTC really starts digging into these deals.”

    The DOJ and FTC said they received the senators’ letter but declined further comment.

    President Joe Biden’s administration and lawmakers from both parties have championed stronger oversight of the tech industry in recent years, likely scaring off big acquisitions that might have sailed through in earlier eras. U.S. antitrust enforcers, for example, plan on investigating the roles Microsoft, Nvidia and OpenAI have played in the artificial intelligence boom, with the Department of Justice looking into chipmaker Nvidia and the Federal Trade Commission scrutinizing close business partners Microsoft and OpenAI.

    Tech giants, including Microsoft, Amazon and Google, are trying to be conservative and not make too many acquisitions in the AI space, Cusumano said.

    “It seems clever. I would think, though, that they’re not fooling anybody,” he said.

    For smaller AI startups, the problem is also that building AI systems is expensive, requiring costly computer chips, power-hungry data centers, huge troves of data to train upon and highly skilled computer scientists.

    Adept, which aims to make AI software agents that help people with workplace tasks, said it was trying to do two things at once — build the foundational AI technology as well as the products for end users. But continuing on that path “would’ve required spending significant attention on fundraising for our foundation models, rather than bringing to life our agent vision,” it said in a statement explaining the Amazon deal.

    “They may have made a decision that they have no real future and just don’t have deep enough pockets to compete in this space, so they probably prefer to be acquired outright,” Cusumano said. “But if Amazon is not willing or not able to do that, then this is kind of a second-best approach for them.”

    Wyden has long taken an interest in technology, helping to write the 1996 law that helped set the ground rules for free speech on the internet. He said he generally favors a straightforward approach that encourages innovation, with guardrails as needed.

    But in the AI industry, he said, “companies like Microsoft, Amazon and Google, either own major parts of the AI ecosystem or they have a leg up thanks to their massive resources.” The letter asks enforcers to examine how tech giants are entrenching their AI dominance “through partnerships, equity deals, acquisitions, cloud computing credits, and other arrangements.”

    John F. Coyle, a law professor at the University of North Carolina, said he believes that Amazon hiring Adept employees without buying the company is clearly a move to avoid antitrust problems. But that type of hiring isn’t a “reverse acqui-hire,” he said.

Acqui-hires are typically face-saving moves that can be spun into success stories, Coyle said, and provide an alternative to liquidating a business. A smaller company can say it was sold to Amazon or Facebook parent Meta Platforms and spin it as a positive, for example, even if it wasn’t the founders’ original plan.

    “This isn’t an acqui-hire. This is a straight up poach,” Coyle said of Amazon and Adept.

    This doesn’t just happen in the tech world, he said, calling the move “a version of a very old story.” In his class, Coyle said, he teaches students about a case from the 1950s involving an advertising agency in New York City. Some employees left to start a new business and poached roughly 100 others to come to work for them.

    “There are innumerable instances where one company went and raided another to take all their employees,” Coyle said. “That existed before the acqui-hire, that is going to happen after the acqui-hire.”

  • After leaving Google, Jakob Uszkoreit started Inceptive to apply AI to drug development

    Before co-founding biotech startup Inceptive, Jakob Uszkoreit had an idea that would eventually make generative artificial intelligence possible. As a researcher at Google in 2017, Uszkoreit was trying to speed up the training of neural networks.

He suggested using a new way to interpret data called self-attention. That idea gave rise to the transformer, the neural network architecture that underpins generative AI.

    “There are actually applications, for example at Google and other places, where transformers have been deployed in production long before, but to much, much less fanfare,” Uszkoreit told CNBC in an interview in June. He said OpenAI’s ChatGPT, which was launched in late 2022, shined “the spotlight on these applications.”

    The transformer idea was published by Uszkoreit and seven other Google researchers in the 2017 “Attention Is All You Need” paper. All eight authors have since left Google.

    “Maybe Google here hasn’t been able to be as daring as, you know, a much, much smaller company such as OpenAI when it comes to applying this technology to quite different types of products,” Uszkoreit said. “This is something that we fundamentally have to accept and actually, in a certain sense, be maybe even grateful for because Google is providing something to the world that we all rely on day to day.”

Inceptive Co-Founder and CEO Jakob Uszkoreit is working on transforming the way drugs work using generative AI

Uszkoreit left Google in 2021 to co-found Inceptive, which he describes as a biological software company. In September, Inceptive raised $100 million in a funding round led by Andreessen Horowitz and Nvidia in an attempt to apply AI to drug development.

    “We’re starting with a focus on RNA, whose exact composition has been designed with generative artificial intelligence, such that these molecules inside certain biological systems exhibit behaviors that ultimately are native to those systems,” Uszkoreit said. “There’s actually this promise of a flavor of medicine that is in much greater harmony with living systems than most existing medicines.”

    Watch the video to hear the full conversation between CNBC’s Katie Tarasov and Inceptive CEO Jakob Uszkoreit.

  • Two 80-something journalists tried ChatGPT. Then, they sued to protect the ‘written word’

    GRAFTON, Mass. — When two octogenarian buddies named Nick discovered that ChatGPT might be stealing and repurposing a lifetime of their work, they tapped a son-in-law to sue the companies behind the artificial intelligence chatbot.

    Veteran journalists Nicholas Gage, 84, and Nicholas Basbanes, 81, who live near each other in the same Massachusetts town, each devoted decades to reporting, writing and book authorship.

Gage poured his tragic family story and search for the truth about his mother’s death into a bestselling memoir that led John Malkovich to play him in the 1985 film “Eleni.” Basbanes transitioned his skills as a daily newspaper reporter into writing widely read books about literary culture.

    Basbanes was the first of the duo to try fiddling with AI chatbots, finding them impressive but prone to falsehoods and lack of attribution. The friends commiserated and filed their lawsuit earlier this year, seeking to represent a class of writers whose copyrighted work they allege “has been systematically pilfered by” OpenAI and its business partner Microsoft.

    “It’s highway robbery,” Gage said in an interview in his office next to the 18th-century farmhouse where he lives in central Massachusetts.

    “It is,” added Basbanes, as the two men perused Gage’s book-filled shelves. “We worked too hard on these tomes.”

Now their lawsuit is subsumed into a broader case seeking class-action status led by household names like John Grisham, Jodi Picoult and “Game of Thrones” novelist George R. R. Martin, and it is proceeding under the same New York federal judge who’s hearing similar copyright claims from media outlets such as The New York Times, Chicago Tribune and Mother Jones.

    What links all the cases is the claim that OpenAI — with help from Microsoft’s money and computing power — ingested huge troves of human writings to “train” AI chatbots to produce human-like passages of text, without getting permission or compensating the people who wrote the original works.

    “If they can get it for nothing, why pay for it?” Gage said. “But it’s grossly unfair and very harmful to the written word.”

    OpenAI and Microsoft didn’t return requests for comment this week but have been fighting the allegations in court and in public. So have other AI companies confronting legal challenges not just from writers but visual artists, music labels and other creators who allege that generative AI profits have been built on misappropriation.

    The chief executive of Microsoft’s AI division, Mustafa Suleyman, defended AI industry practices at last month’s Aspen Ideas Festival, voicing the theory that training AI systems on content that’s already on the open internet is protected by the “fair use” doctrine of U.S. copyright laws.

    “The social contract of that content since the ’90s has been that it is fair use,” Suleyman said. “Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like.”

    Suleyman said it was more of a “gray area” in situations where some news organizations and others explicitly said they didn’t want tech companies “scraping” content off their websites. “I think that’s going to work its way through the courts,” he said.

    The cases are still in the discovery stage and scheduled to drag into 2025. In the meantime, some who believe their professions are threatened by AI business practices have tried to secure private deals to get technology companies to pay a fee to license their archives. Others are fighting back.

    “Somebody had to go out and interview real people in the real world and conduct real research by poring over documents and then synthesizing those documents and coming up with a way to render them in clear and simple prose,” said Frank Pine, executive editor of MediaNews Group, publisher of dozens of newspapers including the Denver Post, Orange County Register and St. Paul Pioneer Press. Several of the chain’s newspapers sued OpenAI in April.

    “All of that is real work, and it’s work that AI cannot do,” Pine said. “An AI app is never going to leave the office and go downtown where there’s a fire and cover that fire.”

    Deemed too similar to lawsuits filed late last year, the Massachusetts duo’s January complaint has been folded into a consolidated case brought by other nonfiction writers as well as fiction writers represented by the Authors Guild. That means Gage and Basbanes won’t likely be witnesses in any upcoming trial in Manhattan’s federal court. But in the twilight of their careers, they thought it important to take a stand for the future of their craft.

    Gage fled Greece as a 9-year-old, haunted by his mother’s 1948 killing by firing squad during the country’s civil war. He joined his father in Worcester, Massachusetts, not far from where he lives today. And with a teacher’s nudge, he pursued writing and built a reputation as a determined investigative reporter digging into organized crime and political corruption for The New York Times and other newspapers.

    Basbanes, as a Greek American journalist, had heard of and admired the elder “hotshot reporter” when he got a surprise telephone call at his desk at Worcester’s Evening Gazette in the early 1970s. The voice asked for Mr. Basbanes, using the Greek way of pronouncing the name.

    “You were like a talent scout,” Basbanes said. “We established a friendship. I mean, I’ve known him longer than I know my wife, and we’ve been married 49 years.”

    Basbanes hasn’t mined his own story like Gage has, but he says it can sometimes take days to craft a great paragraph and confirm all of the facts in it. It took him years of research and travel to archives and auction houses to write his 1995 book “A Gentle Madness” about the art of book collection from ancient Egypt through modern times.

    “I love that ‘A Gentle Madness’ is in 1,400 libraries or so,” Basbanes said. “This is what a writer strives for — to be read. But you also write to earn, to put food on the table, to support your family, to make a living. And as long as that’s your intellectual property, you deserve to be compensated fairly for your efforts.”

    Gage took a great professional risk when he quit his job at the Times and went into $160,000 debt to find out who was responsible for his mother’s death.

    “I tracked down everyone who was in the village when my mother was killed,” he said. “And they had been scattered all over Eastern Europe. So it cost a lot of money and a lot of time. I had no assurance that I would get that money back. But when you commit yourself to something as important as my mother’s story was, the risks are tremendous, the effort is tremendous.”

    In other words, ChatGPT couldn’t do that. But what worries Gage is that ChatGPT could make it harder for others to do that.

    “Publications are going to die. Newspapers are going to die. Young people with talent are not going to go into writing,” Gage said. “I’m 84 years old. I don’t know if this is going to be settled while I’m still around. But it’s important that a solution be found.”

    ————-

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

  • China is leading on GenAI experimentation, but lags U.S. in implementation, survey shows

    Chinese companies are leading the way in the experimentation of generative AI, but they’re still behind the U.S. when it comes to full implementation, according to a new survey.

    The survey — by AI analytics and software developer SAS Institute and market researcher Coleman Parkes — found that 64% of Chinese companies surveyed were running initial experiments on generative AI but had not yet fully integrated the tech into their business system.

    In comparison, 58% of companies in the UK and 41% in the U.S. were still experimenting with it.

    The survey respondents were decision-makers in GenAI strategy or data analytics in 1,600 organizations worldwide in key sectors, including banking, insurance, retail, and health care.

The U.S. tops the list in integrating GenAI into business processes, with 24% of American companies having fully implemented the tech, compared with 19% in China and 11% in the UK.

In the survey, adoption referred to both experimentation and full implementation.

    Chinese organizations are leading in the adoption of generative AI, with 83% of them either running initial tests or having fully implemented the technology. That’s much higher than the United Kingdom at 70%, followed by the United States at 65% and Australia at 63%.

    “While China may lead in GenAI adoption rates, higher adoption doesn’t necessarily equate to effective implementation or better returns,” said Stephen Saw, managing director at Coleman Parkes. 

    To tap the full benefits of generative AI, it must be fully integrated into production systems and processes at a company-wide level, according to Udo Sglavo, SAS’s vice president of Applied AI & Modeling Research and Development.

    U.S. vs. China ecosystem 

    Chinese regulators have also worked to crack down on the potential for generative AI to create content that may violate Beijing’s ideology and censorship policies.

    While that has made Chinese tech companies more cautious about launching their own ChatGPT-like services, it has also pushed them toward focusing on enterprise and narrow generative AI uses.

    This has contributed to China dominating the global race in generative artificial intelligence patents, filing more than 38,000 patents from 2014 to 2023, a United Nations report showed last week. 

    Meanwhile, China’s large population and rapidly growing digital economy means there’s a high demand for these AI technologies, according to Sglavo. 

    “This high demand has pushed companies to quickly adopt and integrate GenAI solutions — including applications in e-commerce, health care, education and manufacturing — where AI is used to enhance efficiency and innovation,” he said. 

    Beijing has also pushed out several initiatives aimed at boosting domestic AI use and infrastructure. In May, the country launched a three-year plan to strengthen standards in AI chips and generative AI and to build up national AI computing power.

    “Because the Chinese government has put a focus on AI, Chinese companies are following that guidance by rapidly adopting the many facets of AI inside their organizations,” Sglavo added. 

    Generative AI outlook

    Overall, the survey highlighted how important the use of generative AI is becoming across all regions and industries. 

    It found that organizations that have embraced generative AI are seeing significant improvements, with about 90% reporting improved satisfaction and about 80% saying they are saving on operational costs. 

In order to tap these benefits, about nine in 10 global businesses will dedicate a budget to generative AI in the next financial year, led by Asia-Pacific at 94%, the report said.

Wei Sun, senior consultant of artificial intelligence research at Counterpoint Research, told CNBC’s “Street Signs Asia” last week that the U.S. has overtaken China in the first round of AI in terms of AI chips and foundational large language model advancement.

    The second round, however, will be about innovating the technology for more specific data sets and applications for consumers, businesses, and industries, she added.

According to a 2023 report from McKinsey, generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually in value across the 63 business use cases it analyzed.

    Source link

  • China is the runaway leader in generative AI patent applications followed by the US, the UN says

    China is the runaway leader in generative AI patent applications followed by the US, the UN says

GENEVA — China has filed far more patent applications than any other country when it comes to generative AI, the U.N. intellectual property agency said Wednesday, with the United States a distant second.

    The technology, which offers the potential to boost efficiency and speed up scientific discoveries but also raises concerns about jobs and workers, was linked to about 54,000 inventions in the decade through 2023, the World Intellectual Property Organization reported.

    More than a quarter of those inventions emerged last year — a testament to the explosive growth and interest in the technology since generative AI vaulted into broad public consciousness in late 2022, WIPO said.

    The new report on patents, the first of its kind, aims to track patent applications as a possible indication of trends in artificial intelligence. It focuses only on generative AI and excludes artificial intelligence more broadly, which includes technologies like facial recognition or autonomous driving.

    “WIPO hopes to give everyone a better understanding of where this fast-evolving technology is being developed, and where it is headed,” WIPO Director-General Daren Tang told reporters.

    Over the decade starting in 2014, over 38,200 generative AI inventions came from China. That’s six times more than from the United States, which had nearly 6,300. They were trailed by South Korea with 4,155, Japan with more than 3,400 and India with 1,350.

    GenAI helps users create text, images, music, computer code and other content through the use of tools including ChatGPT from OpenAI, Google Gemini and Ernie from China’s Baidu. The technology has been employed by many industries including the life sciences, manufacturing, transportation, security and telecommunications.

    Some critics fear that GenAI could replace workers in some types of jobs or improperly take human-generated content without fair or adequate compensation to the people behind it.

    As with other types of patent applications, WIPO officials acknowledge that the quantity of GenAI patents doesn’t indicate quality. It’s hard to tell so early in the technology which patents will have market value or be transformative for society.

    “Let’s see how the data and how the developments unfold over time,” Tang said.

    The U.S. and China are often seen as rivals in the development of artificial intelligence, but by some measures U.S. tech companies are taking the lead in making the world’s most cutting-edge AI systems.

    “Looking at patents just paints one part of a narrative,” said Nestor Maslej, a research manager at Stanford University’s Institute for Human-Centered Artificial Intelligence, who added that patent approval rates can vary depending on a country’s laws.

    “When you look at AI vibrancy, a very important question is who’s releasing the best models, where are those models coming from and, at least by that metric, it seems like the United States is really far ahead,” said Maslej, who edits Stanford’s annual AI Index measuring the state of the technology.

    Sixty-one notable machine-learning models emerged from U.S.-based institutions in 2023, outpacing the European Union’s 21 and China’s 15, according to this year’s AI Index. Of EU countries, France had the most, with eight.

    By another measure, the U.S. also has the most so-called AI foundation models — such as OpenAI’s GPT-4, Anthropic’s Claude 3, Gemini and Meta’s Llama, which are huge, versatile and trained on massive datasets.

    The U.S. also has led China in private AI investments and the number of newly formed AI startups, while China has led in industrial robotics.

    ___

    Matt O’Brien in Providence, Rhode Island, contributed to this report.

    ___

    Follow AP coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence


  • Midjourney is creating Donald Trump pictures when asked for images of ‘the president of the United States’


    Midjourney, a popular AI-powered image generator, is creating images of Donald Trump and Joe Biden despite saying that it would block users from doing so ahead of the upcoming US presidential election.

    When Engadget prompted the service to create an image of “the president of the United States,” Midjourney generated four images in various styles of former president Donald Trump.

Midjourney created an image of Trump despite saying it wouldn't.

    When asked to create an image of “the next president of the United States,” the tool generated four images of Trump as well.

Midjourney generated Donald Trump images despite saying it wouldn't.

    When Engadget prompted Midjourney to create an image of “the current president of the United States,” the service generated three images of Trump and one image of former president Barack Obama.

Midjourney also created an image of former President Obama.

The only time Midjourney refused to create an image of Trump or Biden was when it was asked to do so explicitly. “The Midjourney community voted to prevent using ‘Donald Trump’ and ‘Joe Biden’ during election season,” the service said in that instance. Other users on X were able to get Midjourney to generate images of Trump, too.

    The tests show that Midjourney’s guardrails to prevent users from generating images of Trump and Biden ahead of the upcoming US presidential election aren’t enough — in fact, it’s really easy for people to get around them. Other chatbots like OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Meta AI did not create images of Trump or Biden despite multiple prompts.

    Midjourney did not respond to a request for comment from Engadget.

Midjourney was one of the first AI-powered image generators to explicitly ban users from generating images of Trump and Biden. “I know it’s fun to make Trump pictures — I make Trump pictures,” the company’s CEO, David Holz, told users in a chat session on Discord earlier this year. “However, probably better to just not — better to pull out a little bit during this election. We’ll see.” A month later, Holz reportedly told users that it was time to “put some foots down on election-related stuff for a bit” and admitted that “this moderation stuff is kind of hard.” The company’s existing content rules prohibit the creation of “misleading public figures” and “events portrayals” with the “potential to mislead.”

Last year, Midjourney was used to create a fake image of Pope Francis wearing a puffy white Balenciaga jacket that went viral. It was also used to create fake images of Trump being arrested ahead of his arraignment at the Manhattan Criminal Court last year for his involvement in a hush money payment made to adult film star Stormy Daniels. Shortly afterwards, the company halted free trials of the service and, instead, required people to pay at least $10 a month to use it.

    Last month, the Center for Countering Digital Hate, a non-profit organization that aims to stop the spread of misinformation and hate speech online, found that Midjourney’s guardrails against generating misleading images of popular politicians including Trump and Biden failed 40% of its tests. The CCDH was able to use Midjourney to create an image of president Biden being arrested and Trump appearing next to a body double. The CCDH was also able to bypass Midjourney’s guardrails by using descriptions of each candidate’s physical appearance rather than their names to generate misleading images.

“Midjourney is far too easy to manipulate in practice – in some cases it’s completely evaded just by adding punctuation to slip through the net,” wrote CCDH CEO Imran Ahmed in a statement at the time. “Bad actors who want to subvert elections and sow division, confusion and chaos will have a field day, to the detriment of everyone who relies on healthy, functioning democracies.”

    Earlier this year, a coalition of 20 tech companies including OpenAI, Google, Meta, Amazon, Adobe and X signed an agreement to help prevent deepfakes in elections taking place in 2024 around the world by preventing their services from generating images and other media that would influence voters. Midjourney was absent from that list.

    Pranav Dixit


  • Exclusive: Gemini’s data-analyzing abilities aren’t as good as Google claims


    One of the selling points of Google’s flagship generative AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can supposedly process and analyze. In press briefings and demos, Google has repeatedly claimed that the models can accomplish previously impossible tasks thanks to their “long context,” like summarizing multiple hundred-page documents or searching across scenes in film footage.

    But new research suggests that the models aren’t, in fact, very good at those things.

Two separate studies investigated how well Google’s Gemini models and others make sense out of an enormous amount of data — think “War and Peace”-length works. Both find that Gemini 1.5 Pro and 1.5 Flash struggle to answer questions about large datasets correctly; in one series of document-based tests, the models gave the right answer only 40% to 50% of the time.

    “While models like Gemini 1.5 Pro can technically process long contexts, we have seen many cases indicating that the models don’t actually ‘understand’ the content,” Marzena Karpinska, a postdoc at UMass Amherst and a co-author on one of the studies, told TechCrunch.

    Gemini’s context window is lacking

    A model’s context, or context window, refers to input data (e.g., text) that the model considers before generating output (e.g., additional text). A simple question — “Who won the 2020 U.S. presidential election?” — can serve as context, as can a movie script, show or audio clip. And as context windows grow, so does the size of the documents being fit into them.

    The newest versions of Gemini can take in upward of 2 million tokens as context. (“Tokens” are subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”) That’s equivalent to roughly 1.4 million words, two hours of video or 22 hours of audio — the largest context of any commercially available model.
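The token-to-word arithmetic above can be sketched directly. A minimal example using the article's figures; the 0.7-words-per-token ratio is an approximation for English text, and real tokenizers vary by model and language:

```python
# Rough long-context capacity arithmetic, using the article's figures:
# 2,000,000 tokens ~ 1,400,000 English words (about 0.7 words per token).
# Integer arithmetic keeps the estimates exact; the ratio itself is the
# assumption here, not a property of any particular tokenizer.

def words_that_fit(context_tokens: int) -> int:
    """Estimate how many English words fit in a context window."""
    return context_tokens * 7 // 10

def tokens_needed(word_count: int) -> int:
    """Estimate how many tokens a document of `word_count` words consumes."""
    return word_count * 10 // 7

print(words_that_fit(2_000_000))  # 1400000 -- matches the ~1.4M words cited
print(tokens_needed(260_000))     # ~371k tokens for the ~520-page test book
```

By this estimate, even the ~520-page book used in the UMass tests fills only a fraction of a 1-million-token window, which is why the studies probe comprehension rather than raw capacity.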

    In a briefing earlier this year, Google showed several pre-recorded demos meant to illustrate the potential of Gemini’s long-context capabilities. One had Gemini 1.5 Pro search the transcript of the Apollo 11 moon landing telecast — around 402 pages — for quotes containing jokes, and then find a scene in the telecast that looked similar to a pencil sketch.

Oriol Vinyals, VP of research at Google DeepMind, who led the briefing, described the model as “magical.”

    “[1.5 Pro] performs these sorts of reasoning tasks across every single page, every single word,” he said.

    That might have been an exaggeration.

    In one of the aforementioned studies benchmarking these capabilities, Karpinska, along with researchers from the Allen Institute for AI and Princeton, asked the models to evaluate true/false statements about fiction books written in English. The researchers chose recent works so that the models couldn’t “cheat” by relying on foreknowledge, and they peppered the statements with references to specific details and plot points that’d be impossible to comprehend without reading the books in their entirety.

    Given a statement like “By using her skills as an Apoth, Nusis is able to reverse engineer the type of portal opened by the reagents key found in Rona’s wooden chest,” Gemini 1.5 Pro and 1.5 Flash — having ingested the relevant book — had to say whether the statement was true or false and explain their reasoning.


Tested on one book around 260,000 words (~520 pages) in length, the researchers found that 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. In other words, a coin flip would have answered questions about the book more accurately than Google’s latest machine learning models. Averaging all the benchmark results, neither model achieved better-than-random accuracy at answering questions.

    “We’ve noticed that the models have more difficulty verifying claims that require considering larger portions of the book, or even the entire book, compared to claims that can be solved by retrieving sentence-level evidence,” Karpinska said. “Qualitatively, we also observed that the models struggle with verifying claims about implicit information that is clear to a human reader but not explicitly stated in the text.”

    The second of the two studies, co-authored by researchers at UC Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “reason over” videos — that is, search through and answer questions about the content in them.

    The co-authors created a dataset of images (e.g., a photo of a birthday cake) paired with questions for the model to answer about the objects depicted in the images (e.g., “What cartoon character is on this cake?”). To evaluate the models, they picked one of the images at random and inserted “distractor” images before and after it to create slideshow-like footage.
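That construction can be sketched in a few lines. All names here are hypothetical; the study's actual pipeline isn't detailed in this article, so this only mirrors the described structure of hiding one target image among distractors:

```python
import random

def build_slideshow(target_image, distractor_pool, num_distractors=24, seed=0):
    """Hide one target image among distractor frames, slideshow-style.

    Returns the ordered frames plus the target's index, so a grader can
    check whether a model's answer refers to the right frame.
    """
    rng = random.Random(seed)  # seeded for a reproducible test set
    frames = rng.sample(distractor_pool, num_distractors)
    position = rng.randrange(len(frames) + 1)
    frames.insert(position, target_image)
    return frames, position

# 25 frames total: 1 target plus 24 distractors, mirroring the
# 25-image test described above.
frames, idx = build_slideshow(
    "birthday_cake.jpg", [f"distractor_{i}.jpg" for i in range(100)]
)
assert len(frames) == 25 and frames[idx] == "birthday_cake.jpg"
```

The point of the distractors is that the model can't succeed by describing the footage in aggregate; it has to locate one specific frame and reason about it.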

    Flash didn’t perform all that well. In a test that had the model transcribe six handwritten digits from a “slideshow” of 25 images, Flash got around 50% of the transcriptions right. The accuracy dropped to around 30% with eight digits.

    “On real question-answering tasks over images, it appears to be particularly hard for all the models we tested,” Michael Saxon, a PhD student at UC Santa Barbara and one of the study’s co-authors, told TechCrunch. “That small amount of reasoning — recognizing that a number is in a frame and reading it — might be what is breaking the model.”

    Google is overpromising with Gemini

Neither of the studies has been peer-reviewed, nor do they probe the releases of Gemini 1.5 Pro and 1.5 Flash with 2-million-token contexts. (Both tested the 1-million-token context releases.) And Flash isn’t meant to be as capable as Pro in terms of performance; Google advertises it as a low-cost alternative.

Nevertheless, both studies add fuel to concerns that Google’s been overpromising — and under-delivering — with Gemini from the beginning. None of the models the researchers tested, including OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, performed well. But Google’s the only model provider that’s given the context window top billing in its advertisements.

    “There’s nothing wrong with the simple claim, ‘Our model can take X number of tokens’ based on the objective technical details,” Saxon said. “But the question is, what useful thing can you do with it?”

    Generative AI broadly speaking is coming under increased scrutiny as businesses (and investors) grow frustrated with the technology’s limitations.

    In a pair of recent surveys from Boston Consulting Group, about half of the respondents — all C-suite executives — said that they don’t expect generative AI to bring about substantial productivity gains and that they’re worried about the potential for mistakes and data compromises arising from generative AI-powered tools. PitchBook recently reported that, for two consecutive quarters, generative AI dealmaking at the earliest stages has declined, plummeting 76% from its Q3 2023 peak.

    Faced with meeting-summarizing chatbots that conjure up fictional details about people and AI search platforms that basically amount to plagiarism generators, customers are on the hunt for promising differentiators. Google — which has raced, at times clumsily, to catch up to its generative AI rivals — was desperate to make Gemini’s context one of those differentiators.

    But the bet was premature, it seems.

    “We haven’t settled on a way to really show that ‘reasoning’ or ‘understanding’ over long documents is taking place, and basically every group releasing these models is cobbling together their own ad hoc evals to make these claims,” Karpinska said. “Without the knowledge of how long context processing is implemented — and companies do not share these details — it is hard to say how realistic these claims are.”

    Google didn’t respond to a request for comment.

Both Saxon and Karpinska believe the antidotes to hyped-up claims around generative AI are better benchmarks and, in the same vein, greater emphasis on third-party critique. Saxon notes that one of the more common tests for long context (liberally cited by Google in its marketing materials), “needle in the haystack,” only measures a model’s ability to retrieve particular info, like names and numbers, from datasets — not answer complex questions about that info.
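The "needle in the haystack" idea reduces to hiding a known fact in long filler text and checking whether the model's answer surfaces it. A toy sketch; the substring grader and all strings here are illustrative, not any benchmark's actual harness:

```python
def build_haystack(needle: str, filler: str, total_sentences: int, needle_pos: int) -> str:
    """Bury one 'needle' sentence inside repeated filler text."""
    sentences = [filler] * total_sentences
    sentences.insert(needle_pos, needle)
    return " ".join(sentences)

def retrieved(model_answer: str, expected: str) -> bool:
    """Crude grader: did the answer contain the hidden fact?"""
    return expected.lower() in model_answer.lower()

haystack = build_haystack(
    needle="The magic number is 7421.",
    filler="The weather that day was unremarkable.",
    total_sentences=10_000,
    needle_pos=6_500,
)
# The prompt to the model would be: haystack + "\nWhat is the magic number?"
assert retrieved("Based on the text, the magic number is 7421.", "7421")
```

Saxon's criticism is visible in the sketch itself: passing this test only shows the model can locate one planted sentence, not that it can reason over the other 10,000.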

    “All scientists and most engineers using these models are essentially in agreement that our existing benchmark culture is broken,” Saxon said, “so it’s important that the public understands to take these giant reports containing numbers like ‘general intelligence across benchmarks’ with a massive grain of salt.”

    Kyle Wiggers


  • As AI gains a workplace foothold, states are trying to make sure workers don’t get left behind


HARTFORD, Conn. — With many jobs expected to eventually rely on generative artificial intelligence, states are trying to help workers beef up their tech skills before those skills become outdated and workers get outfoxed by ever-smarter machines.

    Connecticut is working to create what proponents believe will be the country’s first Citizens AI Academy, a free online repository of curated classes that users can take to learn basic skills or obtain a certificate needed for employment.

    “This is a rapidly evolving area,” said state Democratic Sen. James Maroney. “So we need to all learn what are the best sources for staying current. How can we update our skills? Who can be trusted sources?”

    Determining what skills are necessary in an AI world can be a challenge for state legislators given the fast-moving nature of the technology and differing opinions about what approach is best.

Gregory LaBlanc, professor of Finance, Strategy and Law at the Haas School of Business at the University of California, Berkeley, says workers should be taught how to use and manage generative AI rather than how the technology works, partly because computers will soon be better able to perform certain tasks previously performed by humans.

    “What we need is to lean into things that complement AI as opposed to learning to be really bad imitators of AI,” he said. “We need to figure out what is AI not good at and then teach those things. And those things are generally things like creativity, empathy, high level problem solving.”

He said that, historically, people have not needed to understand technological advancements in order to succeed.

“When electricity came along, we didn’t tell everybody that they needed to become electrical engineers,” LaBlanc said.

This year, at least four states — Connecticut, California, Mississippi and Maryland — proposed legislation addressing AI in the classroom. The measures ranged from Connecticut’s planned AI Academy (originally part of a wide-ranging AI regulation bill that failed, though state education officials are still developing the concept) to proposed working groups that would examine how AI can be incorporated safely into public schools. Mississippi’s bill died in the legislature, while the others remain in flux.

    One bill in California would require a state working group to consider incorporating AI literacy skills into math, science, history and social science curriculums.

    “AI has the potential to positively impact the way we live, but only if we know how to use it, and use it responsibly,” said the bill’s author, Assemblymember Marc Berman, in a statement. “No matter their future profession, we must ensure that all students understand basic AI principles and applications, that they have the skills to recognize when AI is employed, and are aware of AI’s implications, limitations, and ethical considerations.”

    The bill is backed by the California Chamber of Commerce. CalChamber Policy Advocate Ronak Daylami said in a statement that incorporating information into existing school curricula will “dispel the stigma and mystique of the technology, not only helping students become more discerning and intentional users and consumers of AI, but also better positioning future generations of workers to succeed in an AI-driven workforce and hopefully inspiring the next generation of computer scientists.”

    While Connecticut’s planned AI Academy is expected to offer certificates to people who complete certain skills programs that might be needed for careers, Maroney said the academy will also include the basics, from digital literacy to how to pose questions to a chatbot.

    He said it’s important for people to have the skills to understand, evaluate and effectively interact with AI technologies, whether it’s a chatbot or machines that learn to identify problems and make decisions that mimic human decision-making.

    “Most jobs are going to require some form of literacy,” Maroney said. “I think that if you aren’t learning how to use it, you’ll be at a disadvantage.”

A September 2023 study released by the job-search company Indeed found that every U.S. job listed on the platform included skills that could be performed or augmented by generative AI. Nearly 20% of the jobs were considered “highly exposed,” which means the technology is considered good or excellent at 80% or more of the skills mentioned in the job listings.

Nearly 46% of the jobs on the platform were “moderately exposed,” which means GenAI can perform 50% to 80% of the skills.
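The Indeed categories described above amount to simple thresholds on the share of a job's listed skills that GenAI handles well. A restatement in code; the function and the label for the lowest band are my own, not Indeed's:

```python
def exposure_category(genai_capable_share: float) -> str:
    """Classify a job by the fraction of its listed skills that GenAI
    is rated good or excellent at, per the study's thresholds:
    >= 80% -> highly exposed, 50%-80% -> moderately exposed.
    """
    if not 0.0 <= genai_capable_share <= 1.0:
        raise ValueError("share must be a fraction between 0 and 1")
    if genai_capable_share >= 0.8:
        return "highly exposed"
    if genai_capable_share >= 0.5:
        return "moderately exposed"
    return "less exposed"  # label assumed; the article doesn't name this band

assert exposure_category(0.85) == "highly exposed"
assert exposure_category(0.60) == "moderately exposed"
```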

Maroney said he is concerned about how that skills gap — coupled with a lack of access to high-speed internet, computers and smartphones in some underserved communities — will exacerbate existing inequities.

    A report released in February from McKinsey and Company, a global management consulting firm, projected that generative AI could increase household wealth in the U.S. by nearly $500 billion by 2045, but it would also increase the wealth gap between Black and white households by $43 billion annually.

    Advocates have been working for years to narrow the nation’s digital skills gap, often focusing on the basics of computer literacy and improving access to reliable internet and devices, especially for people living in urban and rural areas. The advent of AI brings additional challenges to that task, said Marvin Venay, chief external affairs and advocacy officer for the Massachusetts-based organization Bring Tech Home.

    “Education must be included in order for this to really take off publicly … in a manner which is going to give people the ability to eliminate their barriers,” he said of AI. “And it has to be able to explain to the most common individual why it is not only a useful tool, but why this tool will be something that can be trusted.”

    Tesha Tramontano-Kelly, executive director of the Connecticut-based group CfAL for Digital Inclusion, said she worries lawmakers are “putting the cart before the horse” when it comes to talking about AI training. Ninety percent of the youths and adults who use her organization’s free digital literacy classes don’t have a computer in the home.

    While Connecticut is considered technologically advanced compared to many other states and nearly every household can get internet service, a recent state digital equity study found only about three-quarters subscribe to broadband. A survey conducted as part of the study found 47% of respondents find it somewhat or very difficult to afford internet service.

Of residents who reported household income at or below 150% of the federal poverty level, 32% don’t own a computer and 13% don’t own any internet-enabled device.

    Tramontano-Kelly said ensuring the internet is accessible and technology equipment is affordable are important first steps.

    “So teaching people about AI is super important. I 100% agree with this,” she said. “But the conversation also needs to be about everything else that goes along with AI.”


  • Amazon hires founders away from AI startup Adept | TechCrunch


Adept, a startup developing AI-powered “agents” to complete various software-based tasks, has agreed to license its tech to Amazon, and the startup’s co-founders and portions of its team have joined the e-commerce giant.

    Geekwire’s Taylor Soper first reported the news. According to Soper, Adept co-founder and CEO David Luan will join Amazon, along with Adept co-founders Augustus Odena, Maxwell Nye, Erich Elsen and Kelsey Szot and other Adept employees.

    Adept isn’t closing up shop, however. Zach Brock, head of engineering, is taking over as CEO as Adept refocuses its efforts on “solutions that enable agentic AI.”

    “[Our products] will continue to be powered by a combination of our existing state-of-the-art in-house [AI] models, agentic data, web interaction software and custom infrastructure,” Adept wrote in a post on its official blog. “Continuing with Adept’s initial plan of building both useful general intelligence and an enterprise agent product would’ve required spending significant attention on fundraising for our foundation models, rather than bringing to life our agent vision.”

    The deal provides a lifeline for Adept, which has reportedly been in talks with Meta and Microsoft over the past few months about a potential acquisition. Microsoft previously invested in the startup.

    As for Amazon, it gets valuable talent — and tech to bolster its generative AI ambitions. Geekwire reports that Luan will work under Rohit Prasad, the former Alexa head who’s leading a new AGI team focused on building large language models.

    “David and his team’s expertise in training state-of-the-art multimodal foundational models and building real-world digital agents aligns with our vision to delight consumer and enterprise customers with practical AI solutions,” Prasad wrote in a memo to employees obtained by Geekwire. “[The license] will accelerate our roadmap for building digital agents that can automate software workflows.”

    Adept was founded two years ago with the goal of creating an AI model that can perform actions on any software tool using natural language. At a high level, the vision — a vision now shared by OpenAI, Rabbit and others — was to create an “AI teammate” of sorts trained to use a wide variety of different software tools and APIs.

Adept managed to win over backers including Nvidia, Atlassian, Workday and Greylock with its technology, raising over $415 million in capital and reaching a valuation of around $1 billion. But the startup’s been plagued with dysfunction. Adept lost two of its co-founders, Ashish Vaswani and Niki Parmar, early on, and it’s struggled to bring any product to market despite months and months of testing.

    The market for AI agents is a tad more crowded than it was at Adept’s launch. Well-funded startups like Orby, Emergence and others are vying for a slice of what promises to be a lucrative pie; market research firm Grand View Research estimates that the AI agents segment was worth $4.2 billion in 2022.

But perhaps the Amazon tie-in will get Adept over the finish line. Or — with much of its executive ranks departing — it’ll consign Adept to the same fate as Inflection, the AI startup that was effectively gutted, talent-wise, by Microsoft earlier this year. Or regulators increasingly skeptical of these types of AI acqui-hires will step in (if they aren’t rendered toothless by Friday’s Supreme Court decision).

    Grab your popcorn and settle in.

    Kyle Wiggers


  • Five key success factors for commercial banks to scale generative AI | Accenture Banking Blog



Ella Fitzgerald was one of the first, in the ‘30s, to have a hit with the song ‘Ain’t What You Do, It’s the Way That You Do It’. The fact that legions of musicians have recorded it, and continue to, tells me there’s a universal truth there that most people recognize. This is certainly the case with generative AI. Using the innovation is cheap and easy. Making the most of it, however, demands a considered approach that’s aligned with your business goals, ensures a strong foundation is in place, and mitigates the much-publicized risks of the technology. In short, the way that you do it matters a great deal.

    We’ve learnt a lot about ‘traditional’ AI over the past couple of decades, and about generative AI since it burst onto the scene about 18 months ago. We’ve helped hundreds of clients across all industries and geographies—including many commercial and multi-line banks—identify and pilot use cases and start to scale the technology across the organization. In our recent examination of the most important trends shaping commercial banking in 2024, we make the point that each of these is affected in some way by generative AI.

    We’ve also progressed well beyond merely talking about generative AI. We’re currently working with many of our tech ecosystem partners to design, build and pilot prototypes for a range of use cases. In the course of our work with commercial banks, helping them develop and execute their AI strategies, we’ve learnt five important lessons which can make all the difference to your own generative AI journey:

    1. Focus on the needs of the business and lead with value

It makes sense to start by harvesting the low-hanging fruit, which includes taking advantage of consumable models and applications to realize quick returns. Knowledge management use cases are a good example. At the same time, you should start to explore how generative AI can help you reinvent your products and services, your customer experiences and your business as a whole. For this you will need models that are customized with your organization’s data. Prioritization will be critical, so invest time and effort in defining your business cases, setting stage gates, and assessing the desirability, feasibility and viability of each opportunity.

    2. Build the right data foundation with a secure AI-enabled digital core

    To make the most of generative AI you need a technical infrastructure, architecture, operating model and governance structure that meet its high compute demands. You will also need data that is more accessible, fluid and unstructured than most commercial banks currently have. Keep a close eye on cost and sustainable energy consumption—more traditional AI and other analytical approaches might be better suited to particular use cases and are certainly a lot less expensive. The ability to accurately assess the cost and benefit of each could save you a great deal of money and effort.

    3. Reinvent ways of working with a people-first approach

    One lesson banks are learning every day is that people are at least as critical to the success of a generative AI program as the technology. Generative AI will, to a greater or lesser degree, transform every role in commercial banking. Everyone will soon either work with one or more generative AI tools or will have the routine parts of their job automated by the technology—or both. The impact will be so extensive that nothing less than the retooling of all work and roles will be required if the full potential of this innovation is to be realized.

    4. Build ‘responsible AI’ with the right risk and compliance framework

    Given the speed at which generative AI is being adopted, and the very real concerns regarding its fairness, transparency, accuracy, explainability, privacy and safety, it is vital that commercial banks ensure these attributes are built in at the design stage and monitored continuously. A robust ‘responsible AI’ compliance regime should include controls for assessing the potential risk of each use case and a means to embed responsible AI approaches throughout the business. Most companies have a long way to go in this regard: our 2022 global survey of 850 senior executives found that while most recognized the importance of responsible AI and AI regulation, only 6% had a fully robust responsible AI foundation in place and were putting its principles into practice.

    5. Balance rapid progress with the right operating model and governance

As the first point above implies, your approach will need to be dynamic. While facilitating rapid experimentation and agility across your different divisions, you should simultaneously adopt a centralized, coordinated strategy that establishes the building blocks, processes and governance structures that are essential to the success of your overall program. Several leading banks we are working with have established a generative AI center of excellence that serves the entire organization. This COE comprises the leaders and specialist personnel tasked with creating the roadmap, governance, core architecture and use-case development pathways that enable the relevant lines of business to experiment and scale at pace.

    We believe commercial banking is at a crossroads. Generative AI has the potential to transform so many different aspects of our industry—from the legacy core to the customer experience, and everything in between—that we cannot afford to treat it as just another technological novelty. Only a holistic, strategic approach will avoid the pitfalls and realize the full promise of this remarkable innovation.

We hope this series has given you food for thought. If you would like to learn more, we recommend downloading our two recent reports: Commercial Banking Top Trends for 2024 and The Age of AI: Banking’s New Reality. Or you could simply get in touch—we would welcome the opportunity to discuss the potential role of generative AI with you or your team.

    Disclaimer: This content is provided for general information purposes and is not intended to be used in place of consultation with our professional advisors. Copyright© 2024 Accenture. All rights reserved. Accenture and its logo are registered trademarks of Accenture.

    Jared Rorrer

    Source link

  • The impact of generative AI in banking | Accenture Banking Blog

    Following the release of our new report, “The age of AI: Banking’s new reality”, I sat down with my team to discuss how generative AI is reshaping the banking industry.

    It sparked an interesting conversation about current adoption journeys, strategic priorities, and the exciting possibilities ahead for banks. I thought I’d share some of the discussion with you.

    What stage are most banks at in their adoption journey of generative AI?

    Over the past eighteen months, there has been significant evolution in the banking industry’s approach to generative AI. Initially, banks were cautious and sometimes skeptical about this emerging technology. However, the majority have now recognized its real potential and impactful possibilities.

    Most banks have moved beyond identifying what use cases to focus on and have conducted preliminary trials and proof-of-concepts, including moves to production. Industry leaders are fully appreciating the transformative impact generative AI can have throughout their organization. They are adopting a comprehensive view, focusing not just on isolated applications, but on the broader value that generative AI can offer. Key considerations being addressed include scaling efficiencies, enhancing technological infrastructure and data capabilities, strategizing around talent priorities, and the ethical deployment of AI.

    What should banks focus on when adopting generative AI?

Culture is key. The rapid pace of innovation in generative AI, marked by new market entrants, models and applications, poses challenges for organizations in keeping pace and in thinking about how to continually differentiate. Cultivating a culture of continuous learning and experimentation is essential. Banks must remain agile and adaptable, ready to test new ideas and learn from them. A crucial element here is fostering a cultural mindset of curiosity and a willingness to wisely pivot as needed to drive ongoing value generation.

Banks also need to be mindful of the broader picture and not focus only on isolated use cases. Organizations should expand their thinking to encompass entire value chains. It’s important to have a clear understanding of the current operational baseline and performance, envision future goals, and strategize on how generative AI can help to bridge that gap.

    Finally, it’s important not to focus solely on generative AI, but to consider it as part of a larger ecosystem that includes classical AI, automation, analytics and data. Banks need a comprehensive understanding of the tools and strategies required to mobilize generative AI effectively and achieve the desired impact.

    How can banks prioritize their generative AI initiatives?

    It’s important for banks to start by being very clear on their business strategy and to ask the right questions. These might include: How are we thinking about reinvention? What is it that we’re trying to achieve as business outcomes? How do we want to differentiate in the market? What results do we want to realize? And what are our near-term and longer-term priorities?

Once leaders set strategic goals, they can explore how generative AI can enhance these outcomes and help to fulfil their vision. This strategic alignment helps prioritize initiatives, allowing banks to experiment, learn and make meaningful investments in areas that align with their overall business strategy. For example, some top banks may focus on driving greater operational excellence and optimizing costs. Generative AI can help accelerate assessment and innovation regarding current processes, looking at more efficient, quicker, and cost-effective solutions. Additionally, banks wanting to boost their revenue could leverage generative AI to gain a deeper understanding of consumer and client profiles, help to refine their pricing strategies, or launch innovative products.

These examples highlight the need to integrate generative AI into a bank’s overall strategic framework. It’s crucial that its implementation goes beyond mere technology adoption, aiming instead to boost a bank’s overall value proposition and strengthen its competitive position in the market.

    What infrastructure needs must be addressed?

    In the highly regulated banking industry, the existing rigor and discipline provide a solid foundation for the integration of responsible AI and secure guardrails. However, it is crucial for banks to enhance their model risk management procedures to accommodate the nuances of generative AI and other emerging technologies. The rapid pace of technological advancement requires that risk and compliance teams, along with the associated governance structures, can adapt quickly. It is important that governance frameworks are adaptable and that the required additional steps are clearly communicated to both business users and the wider organization. This clarity will help prevent friction and drive smoother implementation.

    Additionally, preparing to handle the unknown is vital. Banks can cultivate a discipline that allows them to manage ambiguity and rapid changes effectively. This adaptive mindset enables organizations to pivot and innovate proactively, distinguishing themselves in a competitive market.

    This adaptability extends to the digital core of banks, including their cloud strategies and data management systems. The ease with which teams can collaborate and devise solutions swiftly is important. Such flexibility not only enhances the ability to respond to emerging challenges but also positions banks as leaders in leveraging new technologies for strategic advantage.

    What is exciting you the most about the future of AI in banking?

    I love to see our client teams starting to experiment more, getting things into production, and starting to really tap into the true power of this emerging technology.

It can also help to bring a breath of fresh air into organizations, which is exciting. People see things they’ve always wanted to do, or a task they wish they didn’t have to do, and are able to tap into new opportunities to leverage their AI partner to drive those outcomes.

    I’m also very interested to see what happens as we get used to the human and digital workforce. Generative AI is going to free up intellectual capacity, allowing banks to reallocate those hours to higher-value activities and greater levels of realized creativity. I’m excited to see what will be delivered for customers as a result, as well as internally within organizations as employees start to realize some of their own aspirations.

    Opportunities and challenges ahead

    The journey of integrating generative AI into banking is full of opportunities and challenges. As we continue to explore and implement this technology, our focus remains on enhancing our services and delivering greater value to our customers and teams.

    Stay tuned for more updates as we navigate this exciting landscape; and if you’d like to hear more on my latest thinking, read the report, tune in to episode 61 of our AI Leaders Podcast or get in touch to ask me your own questions.

    Keri Smith

    Source link

  • Waabi’s genAI promises to do so much more than power self-driving trucks | TechCrunch

    For the last two decades, Raquel Urtasun, founder and CEO of autonomous trucking startup Waabi, has been developing AI systems that can reason as a human would. 

    The AI pioneer had previously served as the chief scientist at Uber ATG before launching Waabi in 2021. Waabi launched with an “AI-first approach” to speed up the commercial deployment of autonomous vehicles, starting with long-haul trucks. 

    “If you can build systems that can actually do that, then suddenly you need much less data,” Urtasun told TechCrunch. “You need much less computation. If you’re able to do the reasoning in an efficient manner, you don’t need to have fleets of vehicles deployed everywhere in the world.” 

    Building an AV stack with AI that perceives the world as a human might and reacts in real time is something Tesla has been attempting to do with its vision-first approach to self-driving. The difference, aside from Waabi’s comfort with using lidar sensors, is that Tesla’s Full Self-Driving system uses “imitation learning” to learn how to drive. This requires Tesla to collect and analyze millions of videos of real-world driving situations that it uses to train its AI model. 

    The Waabi Driver, on the other hand, has done most of its training, testing and validation using a closed-loop simulator called Waabi World that automatically builds digital twins of the world from data; performs real-time sensor simulation; manufactures scenarios to stress test the Waabi Driver; and teaches the Driver to learn from its mistakes without human intervention. 

    In just four years, that simulator has helped Waabi launch commercial pilots (with a human driver in the front seat) in Texas, many of which are happening through a partnership with Uber Freight. Waabi World is also enabling the startup to reach its planned commercial fully driverless launch in 2025. 

    But Waabi’s long-term mission is much grander than just trucks.

    “This technology is extremely, extremely powerful,” said Urtasun, who spoke to TechCrunch via video interview, a white board full of hieroglyphic-looking formulas behind her. “It has this amazing ability to generalize, it’s very flexible, and it’s very fast to develop. And it’s something that we can expand to do much more than trucking in the future … This could be robotaxis. This could be humanoids or warehouse robotics. This technology can solve any of those use cases.”

    The promise of Waabi’s technology — which will first be used to scale autonomous trucking — has allowed the startup to close on a $200 million Series B round, led by existing investors Uber and Khosla Ventures. Strong strategic investors include Nvidia, Volvo Group Venture Capital, Porsche Automobil Holding SE, Scania Invest and Ingka Investments. The round brings Waabi’s total funding to $283.5 million. 

The size of the round, and the strength of its participants, is particularly noteworthy given the hits the AV industry has taken in recent years. In the trucking space alone, Embark Trucks shut down, Waymo decided to pause on its autonomous freight business, and TuSimple closed its U.S. operations. Meanwhile in the robotaxi space, Argo AI faced its own shutdown, Cruise lost its permits to operate in California following a major safety incident, Motional slashed nearly half its workforce, and regulators are actively investigating Waymo and Zoox.

    “You build the strongest companies when you fundraise in moments that are actually difficult, and the AV industry in particular has seen a lot of setbacks,” Urtasun said. 

That said, AI-focused players in this second wave of autonomous vehicle startups have secured impressive capital raises this year. U.K.-based Wayve is also developing a self-learning rather than rule-based system for autonomous driving, and in May it closed a $1.05 billion Series C led by SoftBank Group. And Applied Intuition in March raised a $250 million round at a $6 billion valuation to bring AI to automotive, defense, construction and agriculture. 

    “In the context of AV 1.0, it’s very clear today that it’s very capital intensive and really slow to make progress,” Urtasun said, noting that the robotics and self-driving industry has been held back by complex and brittle AI systems. “And investors are, I would say, not very excited about that approach.”

What investors are excited about today, though, is the promise of generative AI, a term that wasn’t exactly in vogue when Waabi launched, but nonetheless describes the system that Urtasun and her team created. Urtasun says Waabi’s is a next-generation genAI, one that can be deployed in the physical world. And unlike the popular language-based genAI models of today, like OpenAI’s ChatGPT, Waabi has figured out how to create such systems without relying on huge datasets, large language models and all the compute power that comes with them.

    The Waabi Driver, Urtasun says, has the remarkable ability to generalize. So rather than trying to train a system on every single possible data point that has ever or could ever exist, the system can learn from a few examples and handle the unknown in a safe manner.

    “That was in the design. We built these systems that can perceive the world, create abstractions of the world, and then take those abstractions and reason about, ‘What might happen if I do this?’” Urtasun said.

    This more human-like, reasoning-based approach is far more scalable and more capital efficient, Urtasun says. It’s also vital for validating safety critical systems that run on the edge; you don’t want a system that takes a couple of seconds to react, otherwise you’ll crash the vehicle, she said. Waabi announced a partnership to bring Nvidia’s Drive Thor to its self-driving trucks, which will give the startup access to automotive-grade compute power at scale. 

    On the road, this looks like the Waabi Driver understanding that there is something solid in front of it and that it should drive cautiously. It might not know what that something is, but it’ll know to avoid it. Urtasun also said the Driver has been able to predict how other road users will behave without needing to be trained in various specific instances. 

    “It understands things without us telling the system about the concept of objects, how they move in the world, that different things move differently, that there is occlusion, there is uncertainty, how to behave when it’s raining a lot,” Urtasun said. “All these things, it learns automatically. And because it’s exposed right now to driving scenarios, it learns all those capabilities.”

    She noted that Waabi’s streamlined, single architecture can be applied to other autonomy use cases. 

    “If you expose it to interactions in a warehouse, picking up and dropping things, it can learn that, no problem,” she said. “You can expose it to multiple use cases, and it can learn to do all those skills together. There is no limit in terms of what it can do.”

    Rebecca Bellan

    Source link

  • Apple expected to enter AI race with ambitions to overtake the early leaders

    Apple’s annual World Wide Developers Conference on Monday is expected to herald the company’s move into generative artificial intelligence, marking its late arrival to a technological frontier that’s expected to be as revolutionary as the invention of the iPhone.

    The widely anticipated display of AI to be embedded in the iPhone and other Apple products will be the marquee moment at an event that traditionally previews the next version of software that powers the company’s hardware lineup.

    And Apple’s next generation of software is expected to be packed with an array of AI features likely to make its often-bumbling virtual assistant Siri smarter, and make photos, music, texting — and possibly even creating emojis on the fly — a more productive and entertaining experience.

    True to its secretive nature, Apple hasn’t provided any advance details about Monday’s event being held at the company’s Cupertino, California, headquarters.

But CEO Tim Cook has dropped strong hints during the first few months of this year that Apple is poised to reveal its grand plans to enter a space that has been fueling an industry boom during the past 18 months.

    AI mania is the main reason that Nvidia, the dominant maker of the chips underlying the technology, has seen its market value rocket from about $300 billion at the end of 2022 to about $3 trillion. The meteoric ride allowed Nvidia to briefly surpass Apple last week as the second most valuable company in the U.S. Microsoft earlier this year also eclipsed the iPhone maker on the strength of its so-far successful push into AI.

But analysts have been getting increasingly worried that Apple may be falling too far behind in the rapidly changing AI space, a concern that has been compounded by an uncharacteristically extended slump in the company’s sales. Both Google and Samsung already have released smartphone models touting AI features as their main attractions.

That’s why analysts such as Dan Ives of Wedbush Securities view Monday’s conference as a potential springboard that catapults Apple into another robust phase of growth. Ives believes infusing more AI into the iPhone, iPad and Mac computer will translate into an additional $450 billion to $600 billion in market value for Apple.

    Monday’s conference “represents the most important event for Apple in over a decade as the pressure to bring a generative AI stack of technology for developers and consumers is front and center,” Ives wrote in a research note.

Apple definitely could use the boost that AI may be able to provide, particularly for its 13-year-old assistant Siri, which Forrester Research analyst Dipanjan Chatterjee now calls an “oddly unhelpful helper.”

    Meanwhile, OpenAI’s ChatGPT is getting increasingly conversational — so much so that it recently sparked accusations of intentionally copying a piece of AI software voiced by Scarlett Johansson — and Google last month previewed an AI “agent” dubbed Astra that can seemingly see and remember things.

    Besides using AI to spruce up Siri, Apple may also team up with OpenAI to bring some elements of ChatGPT to the iPhone, according to a wide range of unconfirmed reports leading up to Monday’s conference.

This will be the second straight year that Apple has created a stir at its developers conference by using it to usher in its entrance into a trendy form of technology into which other companies already had been making inroads.

    Last year, Apple provided an early look at its mixed-reality headset, the Vision Pro, which wasn’t released until early this year carrying a $3,500 price tag that has been a major impediment to gaining much traction. Nevertheless, Apple’s push into mixed reality, tweaked with a twist that it bills as “spatial computing,” has raised hopes that what is currently a niche technology will turn into a huge market.

Part of the optimism stems from Apple’s history of releasing technology later than others and then using sleek designs and services combined with slick marketing campaigns to overcome its tardy start and unleash new trends.

    “Apple’s early reticence toward AI was entirely on brand,” Forrester’s Chatterjee wrote in a preview of the developers conference. “The company has always been famously obsessed with what its offerings did for its customers rather than how it did it.”

    Bringing more AI into the iPhone, in particular, will likely raise privacy issues — a topic where Apple has gone to great lengths to assure its loyal customer base that it can be trusted not to peer too deeply into their personal lives.

    One way Apple could reassure consumers that the iPhone won’t be used to spy on them is to leverage its own chip technology so most AI-powered features are handled on the device itself instead of remote data centers, often called “the cloud.” Going that route also would help protect Apple’s profit margins because AI technology through the cloud is far more expensive than when it is run solely on a device.

    Source link

  • ‘Optimistic’ about embracing AI’s possibilities in banking, Starling CEO says


    Raman Bhatia, CEO of Starling Bank, discusses the use of artificial intelligence in the banking sector. Footage courtesy of Money20/20.


    Source link

  • AI is the first time new technology is leading to real productivity benefits, ING COO says

    Marnix van Stiphout, COO at ING, discusses artificial intelligence and digitalization in the banking sector.

    Source link

  • Four ways generative AI will transform commercial banking | Accenture Banking Blog

    We’re all still trying to get our heads around the big question confronting all commercial bankers right now: how and where will generative AI have the greatest impact? In our recent analysis of the top trends shaping the industry in 2024, we argue that each one is influenced to some degree by generative AI. In this second post we explore where within the bank early adopters are applying this transformative technology.

    The aspiration—to steal from the title of last year’s Best Film Oscar winner—is “everything, everywhere, all at once”. But if we must admit that universal deployment is unrealistic, the challenge becomes one of prioritization. We analyzed banking tasks, roles and functions, based on our experience of working with a large number of leading banks worldwide, and identified four focus areas where commercial banks are likely to achieve the greatest immediate impact:

    1. Empowering relationship managers

    Every relationship manager (RM) we’ve met laments the time they spend identifying which clients they should speak to, which policies and procedures they need to refer to, and which client information they need to collate from a disparate array of internal and external sources. Generative AI can relieve them of much of this, allowing them to prepare better and spend more time in more impactful meetings with more clients.

    As part of their CRM platform, generative AI can provide RMs with prioritized leads. It can specify each client’s most urgent needs and their preferred method of engagement. It can also generate proactive outreach, whether that is an email, a conversation script or a formal proposal. Most importantly, it can help RMs increase sales by using new insights to create intimate relationships where the right products are provided at the right time—even if the client hasn’t thought through the need. Interactive real-time dashboards can monitor the effectiveness of each campaign, enabling continual improvement. Knowledge management and performance coaching tools can also improve RMs’ capabilities faster and deliver more consistent client services irrespective of the banker’s level of experience.

One phenomenon that we’re seeing among those of our clients that are pursuing more intelligent front-office processes is a levelling of capabilities across the RM population. Top talent continues to improve slightly, but we are seeing massive growth in performance among lower-performing tiers. Together, these gains are significantly boosting the organization’s win and growth rates.

    2. Streamlining commercial underwriting 

Few commercial banks are able to get funds to clients as quickly as they would like. Those that can outpace their competitors without incurring greater risk stand to increase market share, revenue and client satisfaction. As I mentioned in the first post in this series, in most commercial banks this and other operations continue to be highly manual and human-intensive. Endless variation in products, segments, regions and policies overcomplicates the process and prolongs the time-to-decision. These delays are a major driver of cost inflation within the bank, and those who can develop a solution will be positioned to win in the marketplace.

    By modernizing origination platforms and introducing generative AI, leaders are succeeding in this quest. Most are prioritizing the automation of what was formerly manual content production—for example spreading, credit memo generation and other document generation. They are also using it for four-eye checks across the application lifecycle to ensure the right information is captured. Solutions in each of these areas involve varying levels of functional complexity, integration and risk, which must be well understood to accelerate modernization.

    3. Enhancing risk management and compliance

    Commercial banks are currently investing more effort and capital to meet their expanding risk and compliance obligations. Generative AI has the potential to streamline this on multiple levels.

    The technology can be used to automate tasks and augment staff in complex regulation-driven processes such as KYC and AML in the client onboarding stage. It can be used to enhance natural language processing (NLP) tasks, such as extracting the relevant KYC data from a variety of documents containing text, graphs and other imagery. It can update client details, making note of the change and the source of the new information. While generative AI is also able to automate many regulatory reporting and monitoring tasks, it is more likely to be used initially to augment staff, whose human checks on accuracy remain critical to the process.
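As an illustration of the structured-extraction task described above, the sketch below shows how a KYC field-extraction step might be framed as a prompt to a generative model, with a human-review flag when fields are missing. This is a minimal sketch, not any bank's actual system: `build_kyc_prompt`, `call_llm` and `extract_kyc_fields` are hypothetical names, and `call_llm` returns a canned response here so the example is self-contained rather than calling a real model API.

```python
import json

def build_kyc_prompt(document_text: str) -> str:
    # Ask the model for JSON only, so the output can be validated
    # before any client record is updated.
    return (
        "Extract the following KYC fields from the document below and "
        "reply with JSON only: legal_name, registration_number, "
        "registered_address.\n\nDocument:\n" + document_text
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a production system would send `prompt`
    # to its chosen LLM endpoint and return the model's reply.
    return json.dumps({
        "legal_name": "Acme Trading Ltd",
        "registration_number": "09876543",
        "registered_address": "1 Example Street, London",
    })

def extract_kyc_fields(document_text: str) -> dict:
    raw = call_llm(build_kyc_prompt(document_text))
    fields = json.loads(raw)
    # Keep the human in the loop: flag missing fields for manual
    # review rather than silently updating client details.
    required = ("legal_name", "registration_number", "registered_address")
    missing = [k for k in required if not fields.get(k)]
    fields["needs_review"] = bool(missing)
    return fields

if __name__ == "__main__":
    result = extract_kyc_fields("Acme Trading Ltd, company no. 09876543 ...")
    print(result["legal_name"])
```

The pattern mirrors the point in the text: the model automates the extraction, while the validation and review flag preserve the human accuracy checks that remain critical to regulated processes.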

    4. Increasing change velocity

    Compressed change is a vital goal in a fast-evolving industry where program directors are expected to deliver more with less. Generative AI can help, across the transformation lifecycle.

By augmenting team members, the technology can facilitate the development of epic and user story documentation. The automation of repetitive tasks and code generation processes helps developers create and execute functional code. This cuts development time and allows the developers to concentrate on more complex tasks. Generative AI is also being used to thoroughly analyze large datasets to identify and rectify code faults. This analysis automatically processes vast amounts of data to identify patterns and potential threats or issues, thereby enhancing the accuracy of project specifications and requirements.

    Generative AI streamlines the testing phase, raising the overall quality of software products. It quickly pinpoints anomalies or threats and uses automated test cases and scripts to speed up the process. This ensures more thorough testing coverage and more efficient and effective defect identification. The result is higher-quality products delivered in a shorter timeframe.

    In the next and final post in this series, we will share the five things commercial banks can do to ensure they derive the greatest possible benefit from generative AI. In the meantime, if you would like to find out how this innovation is influencing the forces shaping the future of commercial banking, you can download Commercial Banking Top Trends for 2024. If you would like to chat about any aspect of this topic, please get in touch—we’d welcome the opportunity to discuss your bank’s journey to generative AI.

    I’d like to thank my colleague, Auswell Chia, for his contribution to this post – Auswell has been working closely with a number of our financial services clients as they develop and implement their generative AI strategies. We would like to also thank Julie Zhu and Gustavo Pintado for their contributions.

    Disclaimer: This content is provided for general information purposes and is not intended to be used in place of consultation with our professional advisors. Copyright© 2024 Accenture. All rights reserved. Accenture and its logo are registered trademarks of Accenture.

    Jared Rorrer

    Source link

  • Iyo thinks its gen AI earbuds can succeed where Humane and Rabbit stumbled | TechCrunch

    A month after launching its first product, Humane’s co-founders have reportedly put their well-funded startup on the market. While even the firm’s biggest cheerleaders didn’t expect the Ai Pin to change the world in such a short timeframe, few of its many detractors expected things to go so sideways, so quickly.

Humane’s biggest competitor, the Rabbit R1, didn’t fare much better. Shortly after launch, the generative AI-fueled handheld was savaged by critics. The most salient critique of the “half-baked” device was that it could have been an app, rather than a $200 piece of hardware.

The excitement ahead of both devices’ launch is proof-positive that there is interest in a new form factor that leverages LLMs (large language models) in a way that is genuinely useful in our daily lives. At the moment, however, it’s safe to say that no one has yet stuck the landing.

    Iyo represents a third form factor in the push to deliver standalone generative AI devices. Unlike Humane, which attempted to introduce a wholly new form factor by way of a lapel pin, Iyo is building its technology into an already wildly successful category: the Bluetooth earbud.

    When the Iyo One launches this winter, the company will be able to build on several years of consumer education around the integration of assistants like Alexa and Siri into headphones. The leap from that to more sophisticated LLM-based models is far shorter than one like the Ai Pin, which requires a fundamental rethink of how we interact with our devices.

    Much like Humane and Rabbit, Iyo’s founding predates the current AI hype cycle. The company traces its history all the way back to the before times of 2019.

    “I saw all these people I knew in AI, three different research orgs inside Google, all the external people, OpenAI and others all making this incredible progress with these language models, all independently,” founder and CEO Jason Rugolo told TechCrunch. “I realize it’s algebra and data, and no one has a corner on either of those things. I saw that the foundational models were going to proliferate and become a commodity — very controversial in 2019.”

Whereas Humane was able to drum up a good bit of interest reliant on its founders’ time at Apple, Iyo was actually formed inside Google. The firm was incubated inside the Alphabet X “moonshot factory” that gave rise to projects like Glass and Project Loon. Iyo was spun off in 2021. Unlike X graduates Waymo, Wing and Intrinsic, however, the company does not operate as a subsidiary. Instead, Alphabet served as Iyo’s first investor. As Rugolo is quick to point out, the search giant does not occupy a seat on the company’s board.

    Yes, there was an Iyo TED Talk. Image Credits: TED

    Another important advantage is that contrary to its name, the One won’t be Iyo’s first product. You can currently go to the firm’s site and purchase a different — but related — audio device. The $1,650 Vad Pro is effectively a sophisticated in-ear studio reference monitor. The device sports a similar rounded form factor to the One, along with head-tracking, but Iyo’s first commercially available device is wired.

    “If you’re building in a digital audio workstation like Logic Pro,” says Rugolo, “it’s paired with a piece of software we wrote that applies our virtualization technology.” This is designed to help engineers create spatial audio mixes.

    The Vad Pro speaks to another important element of the Iyo One pitch: the One is designed to be, above all, a premium set of headphones. Unlike the Ai Pin and R1, which offer no value outside their AI capabilities, the Iyo One can also simply function as a good pair of headphones.

    The headphones are noticeably larger than standard Bluetooth earbuds. That’s due, in part, to the inclusion of a significantly larger battery, which Rugolo says can get up to 16 hours on a charge when paired with a phone in Bluetooth mode. If you’re using the One in cellular mode without a tethered handset, on the other hand, that number shrinks considerably to around an hour and a half.

    Cost is a concern, as well. While the Iyo One will cost a fraction of the Vad Pro, it’s still not cheap at $599 for the Wi-Fi model and $699 for the cellular version. The latter puts it at the same price point as the Ai Pin and hundreds of dollars more than the R1. That’s well out of the average consumer’s range for buying a piece of hardware just to mess around with. Unlike the Ai Pin, however, the Iyo One will not require a monthly subscription fee.

    The Vad Pro. Image Credits: Iyo

    “That kind of model is really something that comes from venture,” Rugolo said. “They try to drive the companies hard to get people locked in. I don’t like that model. It’s not the best for customers.” The cellular version will, however, require users to sign up for a plan with their carriers. That’s just standard practice.

    As Nura’s eventual acquisition by Denon demonstrated, the Bluetooth earbud category is hard for a startup, regardless of how novel the underlying technology might be. Companies are competing with the industry’s biggest names on one end, including Apple, Samsung and Google. On the other, you’ve got pairs often designed by Chinese manufacturers that can be had for as little as $10 new.

    Rugolo thinks, however, that the earbuds will provide value from day one. The Ai Pin and R1 have struggled to say the same.

    “I think the key is delivering value immediately, right out of the box, focusing on the features you’re going to ship with,” the Iyo founder said. “We believe this is a platform, and we think there are going to be millions of what we call ‘Audio-First Apps,’ these AU apps. But people don’t buy platforms. They buy products that do super useful stuff for them. So, just on the sound isolation, the comfort, the music quality alone, we think there’s a very large market for these devices.”

    Brian Heater

    Source link

  • Cats on the moon? Google’s AI tool is producing misleading responses that have experts worried


    Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.

    Now it comes up with an instant answer generated by artificial intelligence — which may or may not be correct.

    “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s newly retooled search engine in response to a query by an Associated Press reporter.

    It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”

    None of this is true. Similar errors — some funny, others harmful falsehoods — have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results.

    The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people looking for help in an emergency.

    When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

    Mitchell said the summary backed up the claim by citing a chapter in an academic book, written by historians. But the chapter didn’t make the bogus claim — it was only referring to the false theory.

    “Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell said in an email to the AP. “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.”

    Google said in a statement Friday that it’s taking “swift action” to fix errors — such as the Obama falsehood — that violate its content policies, and using them to “develop broader improvements” that are already rolling out. But in most cases, Google claims the system is working the way it should thanks to extensive testing before its public release.

    “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

    It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination.

    The AP tested Google’s AI feature with several questions and shared some of its responses with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at the California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

    But when people turn to Google with an emergency question, the chance that the tech company’s answer includes a hard-to-notice error is a problem.

    “The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out,” said Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”

    That’s not Bender’s only concern — and she has warned Google about them for several years. When Google researchers in 2021 published a paper called “Rethinking search” that proposed using AI language models as “domain experts” that could answer questions authoritatively — much like they are doing now — Bender and colleague Chirag Shah responded with a paper laying out why that was a bad idea.

    They warned that such AI systems could perpetuate the racism and sexism found in the huge troves of written data they’ve been trained on.

    “The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “And so people are likely to get their biases confirmed. And it’s harder to spot misinformation when it’s confirming your biases.”

    Another concern was a deeper one — that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.

    Those forums and other websites count on Google sending people to them, but Google’s new AI overviews threaten to disrupt the flow of money-making internet traffic.

    Google’s rivals have also been closely following the reaction. The search giant has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT-maker OpenAI and upstarts such as Perplexity AI, which aspires to take on Google with its own AI question-and-answer app.

    “This seems like this was rushed out by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There’s just a lot of unforced errors in the quality.”

    —————-

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

    Source link