ReportWire

Tag: generative ai

  • Can AI make video games more immersive? Some studios turn to AI-fueled NPCs for more interaction

    LOS ANGELES (AP) — For decades, video games have relied on scripted, stilted interactions with non-player characters to help shepherd gamers in their journeys. But as artificial intelligence technology improves, game studios are experimenting with generative AI to help build environments, assist game writers in crafting NPC dialogue and lend video games the improvisational spontaneity once reserved for table-top role-playing games.

    In the multiplayer game “Retail Mage,” players help run a magical furniture store and assist customers in hopes of earning a five-star review. As a salesperson — and wizard — they can pick up and examine items or tell the system what they’d like to do with a product, such as deconstruct chairs for parts or tear a page from a book to write a note to a shopper.

    A player’s interactions with the shop and NPCs around them — from gameplay mechanics to content and dialogue creation — are fueled by AI rather than a predetermined script to create more options for chatting and using objects in the shop.

    “We believe generative AI can unlock a new kind of gameplay where the world is more responsive and more able to meet players at their creativity and the things that they come up with and the stories they want to tell inside a fantasy setting that we create for them,” said Michael Yichao, cofounder of Jam & Tea Studios, which created “Retail Mage.”

    The typical NPC experience often leaves something to be desired. Pre-scripted interactions with someone meant to pass along a quest typically come with a handful of chatting options that lead to the same conclusion: players get the information they need and continue on. Game developers and AI companies say that by using generative AI tech, they aim to create a richer experience that allows for more nuanced relationships with the people and worlds that designers build.

    Generative AI could also provide more opportunities for players to go off-script and create their own stories if designers can craft environments that feel more alive and can react to players’ choices in real-time.

    Tech companies continue to develop AI for games, even as developers debate how, and whether, they’ll use AI in their products. Nvidia created its ACE technologies to bring so-called “digital humans” to life with generative AI. Inworld AI provides developers with a platform for generative NPC behavior and dialogue. Gaming company Ubisoft said last year that it uses Ghostwriter, an in-house AI tool, to help write some NPC dialogue without replacing the video game writer.

    A report released by the Game Developers Conference in January found that nearly half of developers surveyed said generative AI tools are currently being used in their workplace, with 31% saying they personally use those tools. Developers at indie studios were most likely to use generative AI, with 37% reporting that they use the tech.

    Still, roughly four out of five developers said they worry about the ethical use of AI. Carl Kwoh, Jam & Tea’s CEO, said AI should be used responsibly alongside creators to elevate stories — not to replace them.

    “That’s always been the goal: How can we use this tool to create an experience that makes players more connected to each other?” said Kwoh, who is also one of the company’s founders. “They can tell stories that they couldn’t tell before.”

    Using AI to provide NPCs with endless things to say is “definitely a perk,” Yichao said, but “content without meaning is just endless noise.” That’s why Jam & Tea uses AI — through Google’s Gemma 2 and its own servers hosted on Amazon — to give NPCs the ability to do more than respond, he said. They can look for objects as they’re shopping or respond to other NPCs to add “more life and reactivity than a typically scripted encounter.”
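
    Jam & Tea hasn’t published its implementation, but the pattern Yichao describes, an LLM choosing an in-world action as well as a line of dialogue, can be sketched in a few lines of Python. Everything here (the action list, the prompt, the `call_llm` stub) is a hypothetical stand-in for illustration, not Jam & Tea’s actual API:

    ```python
    # Minimal sketch of an LLM-driven NPC turn: the model picks one action
    # plus a short line of dialogue instead of replaying a fixed script.
    # `call_llm` is a placeholder for whatever hosted model a studio uses.
    import json

    ACTIONS = ["browse_shelf", "inspect_item", "chat_with_npc", "approach_player"]

    def call_llm(prompt: str) -> str:
        """Stub: wire this to a real model endpoint; assumed to return JSON text."""
        raise NotImplementedError

    def npc_turn(npc_name: str, world_state: dict) -> dict:
        prompt = (
            f"You are {npc_name}, a customer in a magical furniture shop.\n"
            f"World state: {json.dumps(world_state)}\n"
            f"Pick ONE action from {ACTIONS} and say one short line.\n"
            'Reply only with JSON: {"action": "...", "dialogue": "..."}'
        )
        reply = json.loads(call_llm(prompt))
        if reply.get("action") not in ACTIONS:  # guard against invalid model output
            reply["action"] = "browse_shelf"
        return reply
    ```

    The guard at the end reflects a common design choice in such systems: the model proposes, but the game validates, so a hallucinated action never reaches the engine.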

    “I’ve watched players turn our shopping experience into a bit of a dating sim as they flirt with customers and then NPCs come up with very realistic responses,” he said. “It’s been really fun to see the game react dynamically to what players bring to the table.”

    Demonstrating a conversation with an NPC in the game “Mecha BREAK,” in which players battle war machines, Ike Nnole said that Nvidia has made its AI “humans” respond faster than they previously could by using small language models. Using Nvidia’s AI, players can interact with the mechanic, Martel, by asking her to do things like customize the color of a mech machine.

    “Typically, a gamer would go through menus to do all this,” said Nnole, a senior product marketing manager at Nvidia. “Now it could be a much more interactive, much quicker experience.”

    Artificial Agency, a Canadian AI company, built an engine that allows developers to bring AI into any part of their game — not only NPCs, but also companions and “overseer agents” that can steer a player towards content they’re missing. The AI can also create tutorials to teach players a skill that they are missing so they can have more fun in-game, the company said.

    “One way we like to put it is putting a game designer on the shoulder of everyone as they’re playing the game,” said Alex Kearney, cofounder of Artificial Agency. The company’s AI engine can be integrated at any stage of the game development cycle, she said.

    Brian Tanner, Artificial Agency’s CEO, said scripting every possible outcome of a game can be tedious and difficult to test. Their system allows designers to act more like directors, he said, by telling characters more about their motivation and background.

    “These characters can improvise on the spot depending on what’s actually happening in the game,” Tanner said.

    It’s easy to run into a game’s guardrails, Tanner said, where NPCs keep repeating the same phrase regardless of how players interact with them. But as AI continues to evolve, that will change, he added.

    “It is truly going to feel like the world’s alive and like everything really reacts to exactly what’s happening,” he said. “That’s going to add tremendous realism.”

  • Russian disinformation slams Paris and amplifies Khelif debate to undermine the Olympics

    WASHINGTON (AP) — The actor in the viral music video denouncing the 2024 Olympics looks a lot like French President Emmanuel Macron. The images of rats, trash and sewage, however, were dreamed up by artificial intelligence.

    Portraying Paris as a crime-ridden cesspool, the video mocking the Games spread quickly on social media platforms like YouTube and X, helped on its way by 30,000 social media bots linked to a notorious Russian disinformation group that has set its sights on France before. Within days, the video was available in 13 languages, thanks to quick translation by AI.

    “Paris, Paris, 1-2-3, go to Seine and make a pee,” taunts an AI-enhanced singer as the faux Macron actor dances in the background, seemingly a reference to water quality concerns in the Seine River where some competitions are taking place.

    Moscow is making its presence felt during the Paris Games, with groups linked to Russia’s government using online disinformation and state propaganda to spread incendiary claims and attack the host country — showing how global events like the Olympics are now high-profile targets for online disinformation and propaganda.

    Over the weekend, disinformation networks linked to the Kremlin seized on a divide over Algerian boxer Imane Khelif, who has faced unsubstantiated questions about her gender. Baseless claims that she is a man or transgender surfaced after a controversial boxing association with Russian ties said she failed an opaque eligibility test before last year’s world boxing championships.

    Russian networks amplified the debate, which quickly became a trending topic online. British news outlets, author J.K. Rowling and right-wing politicians like Donald Trump added to the deluge. At its height late last week, X users were posting about the boxer tens of thousands of times per hour, according to an analysis by PeakMetrics, a cyber firm that tracks online narratives.

    The boxing group at the root of the claims — the International Boxing Association — has been permanently barred from the Olympics; it has a Russian president who is an ally of Russian President Vladimir Putin, and its biggest sponsor is the state energy company Gazprom. Questions also have surfaced about its decision to disqualify Khelif last year after she had beaten a Russian boxer.

    The decision to approve only a small number of Russian athletes to compete as neutrals, and to ban them from team sports following the invasion of Ukraine, all but guaranteed the Kremlin’s response, said Gordon Crovitz, co-founder of NewsGuard, a firm that analyzes online misinformation. NewsGuard has tracked dozens of examples of disinformation targeting the Paris Games, including the fake music video.

    Russia’s disinformation campaign targeting the Olympics stands out for its technical skill, Crovitz said.

    “What’s different now is that they are perhaps the most advanced users of generative AI models for malign purposes: fake videos, fake music, fake websites,” he said.

    AI can be used to create lifelike images, audio and video, rapidly translate text and generate culturally specific content that sounds and reads like it was created by a human. The once labor-intensive work of creating fake social media accounts or websites and writing conversational posts can now be done quickly and cheaply.

    Another video amplified by accounts based in Russia in recent weeks claimed the CIA and U.S. State Department warned Americans not to use the Paris metro. No such warning was issued.

    Russian state media has trumpeted some of the same false and misleading content. Instead of covering the athletic competitions, much of the coverage of the Olympics has focused on crime, immigration, litter and pollution.

    One article in the state-run Sputnik news service summed it up: “These Paris ‘games’ sure are going swimmingly. Here’s an idea. Stop awarding the Olympics to the decadent, rotting west.”

    Russia has used propaganda to disparage past Olympics, as it did when the then-Soviet Union boycotted the 1984 Games in Los Angeles. At the time, it distributed printed material to Olympic officials in Africa and Asia suggesting that non-white athletes would be hunted by racists in the U.S., according to an analysis from Microsoft Threat Intelligence, a unit within the technology company that studies malicious online actors.

    Russia also has targeted past Olympic Games with cyberattacks.

    “If they cannot participate in or win the Games, then they seek to undercut, defame, and degrade the international competition in the minds of participants, spectators, and global audiences,” analysts at Microsoft concluded.

    A message left with the Russian government was not immediately returned on Monday.

    Authorities in France have been on high alert for sabotage, cyberattacks or disinformation targeting the Games. A 40-year-old Russian man was arrested in France last month and charged with working for a foreign power to destabilize the European country ahead of the Games.

    Other nations, criminal groups, extremist organizations and scam artists also are exploiting the Olympics to spread their own disinformation. Any global event like the Olympics — or a climate disaster or big election — that draws a lot of people online is likely to generate similar amounts of false and misleading claims, said Mark Calandra, executive vice president at CSC Digital Brand Services, a firm that tracks fraudulent activity online.

    CSC’s researchers noticed a sharp increase in fake website domain names being registered ahead of the Olympics. In many cases, groups set up sites that appear to provide Olympic content, or sell Olympic merchandise.

    Instead, they’re designed to collect information on the user. In some cases, a scam artist is looking to steal personal financial data. In others, the sites are used by foreign governments to collect information on Americans — or as a way to spread more disinformation.

    “Bad actors look for these global events,” Calandra said. “Whether they’re positive events like the Olympics or more concerning ones, these people use everyone’s heightened awareness and interest to try to exploit them.”

  • Top AI business leaders meet with Biden administration to discuss the emerging industry’s needs

    WASHINGTON (AP) — Top Biden administration officials on Thursday discussed the future of artificial intelligence at a meeting with a group of executives from OpenAI, Nvidia, Microsoft and other companies. The focus was on building data centers in the United States and the infrastructure needed to develop the technology.

    White House press secretary Karine Jean-Pierre told reporters at the daily press briefing that the meeting focused on increasing public-private collaboration and the workforce and permitting needs of the industry. The computer power for the sector will likely depend on reliable access to electricity, so the utility companies Exelon and AES were also part of the meeting to discuss power grid needs.

    The emergence of AI holds a mix of promise and peril: The automatically generated text, images, audio and video could help increase economic productivity, but the technology also has the potential to displace some workers. It could serve as both a national security tool and a threat to guard against.

    President Joe Biden last October signed an executive order to address the development of the technology, seeking to establish protections through steps such as the watermarking of AI content and addressing consumer rights issues.

    Attending the meeting for the administration were White House chief of staff Jeff Zients, National Economic Council Director Lael Brainard, national security adviser Jake Sullivan, deputy chief of staff Bruce Reed, Commerce Secretary Gina Raimondo and Energy Secretary Jennifer Granholm, among others.

    Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Alphabet President and Chief Investment Officer Ruth Porat, Meta Chief Operating Officer Javier Olivan, and Microsoft President and Vice Chairman Brad Smith were among the corporate attendees.

    Matt Garman, the CEO of AWS, a subsidiary of Amazon, also attended. The company said in a statement that attendees discussed modernizing the nation’s utility grid, expediting permits for new projects and ensuring that carbon-free energy projects are integrated into the grid.

  • Most Americans don’t trust AI-powered election information: AP-NORC/USAFacts survey

    WASHINGTON — Jim Duggan uses ChatGPT almost daily to draft marketing emails for his carbon removal credit business in Huntsville, Alabama. But he’d never trust an artificial intelligence chatbot with any questions about the upcoming presidential election.

    “I just don’t think AI produces truth,” the 68-year-old political conservative said in an interview. “Grammar and words, that’s something that’s concrete. Political thought, judgment, opinions aren’t.”

    Duggan is part of the majority of Americans who don’t trust artificial intelligence, chatbots or search results to give them accurate answers, according to a new survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts. About two-thirds of U.S. adults say they’re not very or not at all confident that these tools provide reliable and factual information, the poll shows.

    The findings reveal that even as Americans have started using generative AI-fueled chatbots and search engines in their personal and work lives, most have remained skeptical of these rapidly advancing technologies. That’s particularly true when it comes to information about high-stakes events such as elections.

    Earlier this year, a gathering of election officials and AI researchers found that AI tools did poorly when asked relatively basic questions, such as where to find the nearest polling place. Last month, several secretaries of state warned that the AI chatbot developed for the social media platform X was spreading bogus election information, prompting X to tweak the tool so it would first direct users to a federal government website for reliable information.

    Large AI models that can generate text, images, videos or audio clips at the click of a button are poorly understood and minimally regulated. Their ability to predict the most plausible next word in a sentence based on vast pools of data allows them to provide sophisticated responses on almost any topic — but it also makes them vulnerable to errors.
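
    That mechanism is easy to illustrate. The toy generator below picks each next word by sampling from a probability table; a real model does the same thing at inference time, only with a distribution learned over tens of thousands of tokens (the table here is invented purely for illustration):

    ```python
    # Toy next-word sampler: it picks plausible continuations with no notion
    # of whether the resulting sentence is true. Real LLMs compute these
    # probabilities with a neural network; this table is hand-written.
    import random

    NEXT_WORD = {
        "the": [("polls", 0.4), ("election", 0.4), ("results", 0.2)],
        "polls": [("open", 0.5), ("close", 0.5)],
        "election": [("is", 0.6), ("results", 0.4)],
    }

    def generate(start: str, max_words: int = 5) -> str:
        words = [start]
        for _ in range(max_words):
            options = NEXT_WORD.get(words[-1])
            if not options:
                break
            choices, weights = zip(*options)
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the polls close"
    ```

    The sampler optimizes for plausibility rather than truth, which is exactly the vulnerability to errors described above.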

    Americans are split on whether they think the use of AI will make it more difficult to find accurate information about the 2024 election. About 4 in 10 Americans say the use of AI will make it “much more difficult” or “somewhat more difficult” to find factual information, while another 4 in 10 say it will make finding information neither easier nor harder, according to the poll. A distinct minority, 16%, say AI will make it easier to find accurate information about the election.

    Griffin Ryan, a 21-year-old college student at Tulane University in New Orleans, said he doesn’t know anyone on his campus who uses AI chatbots to find information about candidates or voting. He doesn’t use them either, since he’s noticed that it’s possible to “basically just bully AI tools into giving you the answers that you want.”

    The Democrat from Texas said he gets most of his news from mainstream outlets such as CNN, the BBC, NPR, The New York Times and The Wall Street Journal. When it comes to misinformation in the upcoming election, he’s more worried that AI-generated deepfakes and AI-fueled bot accounts on social media will sway voter opinions.

    “I’ve seen videos of people doing AI deepfakes of politicians and stuff, and these have all been obvious jokes,” Ryan said. “But it does worry me when I see those that maybe someone’s going to make something serious and actually disseminate it.”

    A relatively small portion of Americans — 8% — think results produced by AI chatbots such as OpenAI’s ChatGPT or Anthropic’s Claude are always or often based on factual information, according to the poll. They have a similar level of trust in AI-assisted search engines such as Bing or Google, with 12% believing their results are always or often based on facts.

    There already have been attempts to influence U.S. voter opinions through AI deepfakes, including AI-generated robocalls that imitated President Joe Biden’s voice to convince voters in New Hampshire’s January primary to stay home from the polls.

    More commonly, AI tools have been used to create fake images of prominent candidates that aim to reinforce particular negative narratives — from Vice President Kamala Harris in a communist uniform to former President Donald Trump in handcuffs.

    Ryan, the Tulane student, said his family is fairly media literate, but he has some older relatives who heeded false information about COVID-19 vaccines on Facebook during the pandemic. He said that makes him concerned that they might be susceptible to false or misleading information during the election cycle.

    Bevellie Harris, a 71-year-old Democrat from Bakersfield, California, said she prefers getting election information from official government sources, such as the voter pamphlet she receives in the mail ahead of every election.

    “I believe it to be more informative,” she said, adding that she also likes to look up candidate ads to hear their positions in their own words.

    ___

    The poll of 1,019 adults was conducted July 29-Aug. 8, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.0 percentage points.
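
    As a sanity check, the textbook margin-of-error formula at 95% confidence lands close to the figure quoted; probability panels like AmeriSpeak then inflate it with a design effect, which is presumably how plus or minus 4.0 points is reached (the snippet shows the standard formula, not NORC’s exact method):

    ```python
    # 95% margin of error for a proportion at the worst case p = 0.5.
    import math

    n = 1019  # respondents in the AP-NORC/USAFacts poll
    moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
    print(f"{moe:.1%}")  # ~3.1%; a design effect near 1.7 brings it to ~4.0
    ```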

    ___

    Swenson reported from New York.

    ___

    The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

  • Apple embraces the AI craze with its newly unleashed iPhone 16 lineup

    CUPERTINO, Calif. — Apple on Monday charged into the artificial intelligence craze with a new iPhone lineup that marks the company’s latest attempt to latch onto a technology trend and transform it into a cultural phenomenon.

    The four different iPhone 16 models will all come equipped with special chips needed to power a suite of AI tools that Apple hopes will make its marquee product even more indispensable and reverse a recent sales slump.

    Apple’s AI features are designed to turn its often-blundering virtual assistant Siri into a smarter and more versatile sidekick, automate a wide range of tedious tasks and pull off other crowd-pleasing tricks, such as creating customized emojis within seconds.

    After receiving a standing ovation for Monday’s event, Apple CEO Tim Cook promised the AI package will unleash “innovations that will make a true difference in people’s lives.”

    But the breakthroughs won’t begin as soon as the new iPhones — ranging in price from $800 to $1,200 — hit the stores on Sept. 20.

    Most of Apple’s AI functions will roll out as part of a free software update to iOS 18, the operating system that will power the iPhone 16, with the features arriving from October through December. U.S. English will be the featured language at launch, but an update enabling other languages will come out next year, according to Apple.

    It’s all part of a new approach that Apple previewed at a developers conference three months ago to create more anticipation for a next generation of iPhones amid a rare sales slump for the well-known devices.

    Since Apple’s June conference, competitors such as Samsung and Google have made greater strides in AI – a technology widely expected to trigger the most dramatic changes in computing since the first iPhone came out 17 years ago.

    Just as Apple elevated fledgling smartphones into a must-have technology in 21st-century society, the Cupertino, California, company is betting it can do something similar with its tardy arrival to artificial intelligence.

    In an attempt to set itself apart from the early leaders in AI, the technology being baked into the iPhone 16 is being promoted as “Apple Intelligence.” Despite the unique branding, Apple’s new approach mimics many of the features already available in the Samsung Galaxy S24 released in January and the Google Pixel 9 that came out last month.

    “Apple could have waited another year for further development, but initial take up of AI-powered devices from the likes of Samsung has been encouraging, and Apple is keen to capitalize on this market,” said PP Foresight analyst Paolo Pescatore.

    As it treads into new territory, Apple is trying to preserve its longtime commitment to privacy by tailoring its AI so that most of its technological tricks can be processed on the device itself instead of relying on giant banks of computers located in remote data centers. When a task needs to connect to a data center, Apple promises it will be done in a tightly controlled way that ensures no personal data is stored remotely.

    While corralling the personal information shared through Apple’s AI tools inherently reduces the chances the data will be exploited or misused against a user’s wishes, it doesn’t guarantee iron-clad security. A device could still be stolen, for instance, or hacked through digital chicanery.

    For those seeking access to even more AI tools than the iPhone offers, Apple is teaming up with OpenAI to give users the option of farming out more complicated tasks to the popular ChatGPT chatbot.

    Although Apple is releasing a free version of its operating system to propel its on-device AI features, the chip needed to run the technology is only available on the iPhone 16 lineup and the high-end iPhone 15 models that came out a year ago.

    That means most consumers interested in taking advantage of Apple’s approach to AI will have to buy one of the iPhone 16 models — a twist that investors are counting on to fuel a surge in demand heading into the holiday season.

    The anticipated sales boom is the main reason Apple’s stock price has climbed by more than 10% since the June developers conference, including a slight uptick Monday after the shares initially slipped following the showcase for the latest iPhones. Wedbush Securities analyst Dan Ives was so impressed with what he saw Monday that he predicted the new AI iPhones will propel Apple’s market value through the $4 trillion threshold next year for the first time. That forecast translates into an increase of roughly 20% from Monday’s closing price of $220.91 for Apple’s stock.

    Besides its latest iPhones, Apple also introduced a new version of its smartwatch that will include a feature to help detect sleep apnea, as well as the next generation of its wireless headphones, the AirPods Pro, which will be able to function as a hearing aid with an upcoming software update.

  • UK competition watchdog clears Microsoft’s hiring of AI startup’s core staff

    LONDON (AP) — British regulators on Wednesday cleared Microsoft’s hiring of key staff from startup Inflection AI, saying the deal wouldn’t stifle competition in the country’s artificial intelligence market.

    The Competition and Markets Authority had opened a preliminary investigation in July into Microsoft’s recruitment of Inflection’s core team, including co-founder and CEO Mustafa Suleyman, chief scientist Karen Simonyan and several top engineers and researchers.

    The watchdog said its investigation found that the hirings amounted to a “merger situation” but that the “transaction does not give rise to a realistic prospect of a substantial lessening of competition.”

    Big technology companies have been facing scrutiny on both sides of the Atlantic lately for gobbling up talent and products at innovative AI startups without formally acquiring them.

    Three U.S. Senators called for the practice to be investigated after Amazon pulled a similar maneuver this year in a deal with San Francisco-based Adept that sent its CEO and key employees to the e-commerce giant. Amazon also got a license to Adept’s AI systems and datasets.

    The U.K. watchdog said Microsoft hired “almost all of Inflection’s team” and licensed its intellectual property, which gave it access to the startup’s AI model and chatbot development capabilities.

    Inflection’s main product is a chatbot named Pi that specializes in “emotional intelligence” by being “kind and supportive.”

    However, the CMA said the deal won’t result in a big loss of competition because Inflection has a “very small” share of the U.K. consumer market for chatbots, and its chatbot lacks features that would make it more attractive than rivals.

  • California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI

    SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

    The California Legislature, which is controlled by Democrats, is voting during the final week of its session on hundreds of bills to send to Gov. Gavin Newsom’s desk. The deadline is Saturday.

    The Democratic governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature. Newsom signaled in July he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation.

    He warned earlier this summer that overregulation could hurt the homegrown industry. In recent years, he often has cited the state’s budget troubles when rejecting legislation that he would otherwise support.

    Here is a look at some of the AI bills lawmakers approved this year.

    Citing concerns over how AI tools are increasingly being used to trick voters and generate deepfake pornography of minors, California lawmakers approved several bills this week to crack down on the practice.

    Lawmakers approved legislation to ban deepfakes related to elections and require large social media platforms to remove the deceptive material starting 120 days before Election Day and continuing for 60 days after. Campaigns also would be required to publicly disclose if they’re running ads with materials altered by AI.

    A pair of proposals would make it illegal to use AI tools to create images and videos of child sexual abuse. Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if they cannot prove the materials are depicting a real person.

    Tech companies and social media platforms would be required to provide AI detection tools to users under another proposal.

    California could become the first state in the nation to set sweeping safety measures on large AI models.

    The legislation sent by lawmakers to the governor’s desk requires developers to start disclosing what data they use to train their models. The efforts aim to shed more light on how AI models work and to prevent future catastrophic disasters.

    Another measure would require the state to set safety protocols preventing risks and algorithmic discrimination before agencies could enter any contract involving AI models used to make decisions.

    Inspired by the months-long Hollywood actors strike last year, lawmakers approved a proposal to protect workers, including voice actors and audiobook performers, from being replaced by their AI-generated clones. The measure mirrors language in the contract SAG-AFTRA reached with studios last December.

    State and local agencies would be banned from using AI to replace workers at call centers under one of the proposals.

    California also may create penalties for digitally cloning dead people without consent of their estates.

    As corporations increasingly weave AI into Americans’ daily lives, state lawmakers also passed several bills to increase AI literacy.

    One proposal would require a state working group to consider incorporating AI skills into math, science, history and social science curriculums. Another would develop guidelines on how schools could use AI in classrooms.

  • All eyes are on Nvidia as it prepares to report its earnings. Here’s what to expect

    LOS ANGELES — Nvidia has led the artificial intelligence boom to become one of the stock market’s biggest companies, as tech giants continue to spend heavily on the company’s chips and data centers needed to train and operate their AI systems.

    The company is now worth over $3 trillion, with its dominance as a chipmaker cementing Nvidia’s place as the poster child of the AI industry ahead of the release of its latest financial results after the close of trading Wednesday.

    Wall Street expects the company to report second-quarter adjusted earnings of 65 cents per share, up from 27 cents a year ago. Revenue is expected to have surged to $28.74 billion, more than double what it generated in the comparable quarter a year ago. By comparison, S&P 500 companies overall are expected to deliver just 5% growth in revenue for the quarter, according to FactSet.

    The problem, critics say, is such stellar growth has set off too much euphoria among investors. Through the year’s first six months, Nvidia’s stock soared nearly 150%. At that point, the stock was trading at a little more than 100 times the company’s earnings over the prior 12 months. That’s much more expensive than it’s been historically and than the S&P 500 in general. That’s why analysts warn of a selloff if Wall Street sees any indication that AI demand is waning.
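
    The growth rates quoted above are easy to verify from the story’s own numbers (the year-ago revenue is only implied by “more than double,” so it is left as an inequality):

    ```python
    # Quick check of the figures quoted in the story.
    eps_now, eps_prior = 0.65, 0.27  # expected vs. year-ago adjusted EPS
    print(f"EPS growth: {eps_now / eps_prior - 1:.0%}")  # ~141% year over year

    rev_now = 28.74  # expected revenue, $ billions
    print(f"Implied year-ago revenue: below ${rev_now / 2:.2f}B")
    ```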

    Demand for generative AI products that can compose documents, make images and serve as personal assistants has fueled sales of Nvidia’s specialized chips over the last year. In the past three quarters, Nvidia’s revenue has more than tripled on an annual basis, with the vast majority of growth coming from the data center business.

    The Santa Clara, California-based company carved out an early lead in the AI applications race, in part because of founder and CEO Jensen Huang’s successful bet on the chip technology used to fuel the industry. The company is no stranger to big bets. Nvidia’s invention of the graphics processing unit, or GPU, in 1999 helped spark the growth of the PC gaming market and redefined computer graphics.

  • People with ADHD are turning to AI apps to help with tasks. Experts say try it cautiously

    Becky Litvintchouk didn’t think she’d be able to manage the mountain of tasks needed to become an entrepreneur. Every other part of her life has been overwhelming because of ADHD, which can impact her ability to concentrate.

    So, she turned to AI. The app Claude helps her decide which contracts make the most sense for her hygienic-wipes business, GetDirty, without her having to read them word for word. She also creates business plans by telling the generative AI bot what her goals are and having it lay out steps for her to get there.

    “It’s been just massively instrumental. I probably would not be where I am today,” she said of using AI for about two years.

    Experts say generative AI tools can help people with attention deficit hyperactivity disorder — who experience difficulties with focusing, organizing and controlling impulses — to get through tasks quicker. But they also caution that it shouldn’t replace traditional treatment for ADHD, and also expressed concerns about potential overreliance and invasion of privacy.

    Will apps replace ADHD treatment?

    Emily Kircher-Morris, a counselor who focuses on neurodivergent patients, said she’s seen the tools be useful to her clients with ADHD. She even uses them herself since she has ADHD.

    Her clients, she said, seem to have varying levels of comfort with the idea of using AI. But for those who take to the technology, “it really can help to hook people in, like, ‘Oh, this is kind of a fancy new thing that catches my interest. And so I really want to dig in and explore it.’”

    She also said it’s good to use caution. John Mitchell, an associate professor at Duke University School of Medicine, added that AI apps should be used more as “one tool in a toolbox” instead of replacing traditional treatments such as developing organizational skills or taking prescription medications.

    “If you’re kind of treading water in your job and AI’s a life preserver, well, that’s great you’re staying above water, but, you know, you still don’t know how to swim,” he said.

    What else can the apps do?

    Litvintchouk, a married mother of four living in New York City, dropped out of high school and left the workforce — all things that research shows are more likely to happen to people with ADHD, putting them at higher risk of economic instability.

    Aside from helping with her business, she uses ChatGPT to help with grocery shopping — another thing that can be fraught for people with ADHD because of the organization and planning skills needed — by having it brainstorm easy-to-prepare recipes with a corresponding grocery list.

    When she shared her technique with another mom who also has ADHD, she felt more people needed to know about it, so she started creating videos on TikTok about various AI tools she uses to help manage her ADHD struggles.

    “That’s when I was like, you know what? I need to, like, educate people,” she said.

    Generative AI tools can help people with ADHD break down big tasks into smaller, more manageable steps. Chatbots can offer specific advice and can sound like you’re talking with a human. Some AI apps can also help with reminders and productivity.

    Software engineer Bram de Buyser said he created Goblin.tools with his neurodivergent friends in mind. Its most popular feature is the “magic to-do,” where a user can enter a task and the bot will spit out a to-do list. Users can even break items on the list down into smaller tasks.

    “I’m not trying to build a cure,” he said, “but something that helps them out (for) two minutes out of the day that they would otherwise struggle with.”
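
    Goblin.tools’ internals aren’t public, but the “magic to-do” behavior de Buyser describes maps onto a simple pattern: send the task to a model with a decomposition prompt, then recurse on any step the user finds too big. The prompt and `call_llm` stub below are assumptions for illustration, not the app’s actual code:

    ```python
    # Hypothetical sketch of a "magic to-do" style breakdown helper.
    def call_llm(prompt: str) -> str:
        """Stub: wire to a real model; assumed to return one step per line."""
        raise NotImplementedError

    def break_down(task: str) -> list[str]:
        prompt = (
            "Break this task into 3-6 small, concrete steps, one per line, "
            f"for someone who finds planning and starting hard:\n{task}"
        )
        return [line.lstrip("- ").strip()
                for line in call_llm(prompt).splitlines() if line.strip()]

    # Usage, once call_llm is wired up:
    #   steps = break_down("plan this week's groceries")
    #   break_down(steps[0])  # recurse on a step that still feels too big
    ```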

    What kinds of problems could apps create?

    Husson University professor Russell Fulmer describes the research around AI and ADHD as “inconclusive.” While experts see how artificial intelligence could have a positive impact on the lives of people with anxiety and ADHD, Fulmer said, it may not work equally well for everyone, such as people of color with ADHD.

    He pointed to chatbot responses that have been racist and biased at times.

    Valese Jones, a publicist and founder of Sincerely Nicole Media, was diagnosed with ADHD as a child and uses AI bots to help with reading and responding to emails and proofreading public relations plans. But its responses don’t always capture who she really is.

    “I’m southern, so I talk like a southerner. There are cadences in my writing where you can kind of pick up on the fact that I’m southern, and that’s on purpose,” said Jones, who is Black. “It doesn’t pick up on Black women’s tone, and if you do put in like, ‘say it like African American,’ it automatically goes to talking like ‘Malibu’s Most Wanted.’”

    And de Buyser said while he sees a future where AI chatbots function more like a personal assistant that is “never tired, never sleeps,” it could also have privacy implications.

    “If you say, ‘Oh, I want an AI that gives me personal information and checks my calendar’ and all of that, you are giving that big company access to your emails, your calendar, personal correspondence, essentially your deepest, darkest secrets just so it can give you something useful back,” he warned.

    ___

    The Associated Press Health and Science Department receives support from the Robert Wood Johnson Foundation. The AP is solely responsible for all content.

  • Wyoming reporter caught using artificial intelligence to create fake quotes and stories

    HELENA, Mont. — Quotes from Wyoming’s governor and a local prosecutor were the first things that seemed slightly off to Powell Tribune reporter CJ Baker. Then, it was some of the phrases in the stories that struck him as nearly robotic.

    The dead giveaway, though, that a reporter from a competing news outlet was using generative artificial intelligence to help write his stories came in a June 26 article about the comedian Larry the Cable Guy being chosen as the grand marshal of the Cody Stampede Parade.

    “The 2024 Cody Stampede Parade promises to be an unforgettable celebration of American independence, led by one of comedy’s most beloved figures,” the Cody Enterprise reported. “This structure ensures that the most critical information is presented first, making it easier for readers to grasp the main points quickly.”

    After doing some digging, Baker, who has been a reporter for more than 15 years, met with Aaron Pelczar, a 40-year-old who was new to journalism and who Baker says admitted that he had used AI in his stories before he resigned from the Enterprise.

    The publisher and editor at the Enterprise, which was co-founded in 1899 by Buffalo Bill Cody, have since apologized and vowed to take steps to ensure it never happens again. In an editorial published Monday, Enterprise Editor Chris Bacon said he “failed to catch” the AI copy and false quotes.

    “It matters not that the false quotes were the apparent error of a hurried rookie reporter that trusted AI. It was my job,” Bacon wrote. He apologized that “AI was allowed to put words that were never spoken into stories.”

    Journalists have derailed their careers by making up quotes or facts in stories long before AI came about. But this latest scandal illustrates the potential pitfalls and dangers that AI poses to many industries, including journalism, as chatbots can spit out spurious if somewhat plausible articles with only a few prompts.

    AI has found a role in journalism, including in the automation of certain tasks. Some newsrooms, including The Associated Press, use AI to free up reporters for more impactful work, but most AP staff are not allowed to use generative AI to create publishable content.

    The AP has been using technology to assist in articles about financial earnings reports since 2014, and more recently for some sports stories. It is also experimenting with an AI tool to translate some stories from English to Spanish. At the end of each such story is a note that explains technology’s role in its production.

    Being upfront about how and when AI is used has proven important. Sports Illustrated was criticized last year for publishing AI-generated online product reviews that were presented as having been written by reporters who didn’t actually exist. After the story broke, SI said it was firing the company that produced the articles for its website, but the incident damaged the once-powerful publication’s reputation.

    In his Powell Tribune story breaking the news about Pelczar’s use of AI in articles, Baker wrote that he had an uncomfortable but cordial meeting with Pelczar and Bacon. During the meeting, Pelczar said, “Obviously I’ve never intentionally tried to misquote anybody” and promised to “correct them and issue apologies and say they are misstatements,” Baker wrote, noting that Pelczar insisted his mistakes shouldn’t reflect on his Cody Enterprise editors.

    After the meeting, the Enterprise launched a full review of all of the stories Pelczar had written for the paper in the two months he had worked there. They have discovered seven stories that included AI-generated quotes from six people, Bacon said Tuesday. He is still reviewing other stories.

    “They’re very believable quotes,” Bacon said, noting that the people he spoke to during his review of Pelczar’s articles said the quotes sounded like something they’d say, but that they never actually talked to Pelczar.

    Baker reported that seven people told him that they had been quoted in stories written by Pelczar, but had not spoken to him.

    Pelczar did not respond to an AP phone message, left at a number listed as his, asking to discuss what happened. Bacon said Pelczar declined to discuss the matter with another Wyoming newspaper that had reached out.

    Baker, who regularly reads the Enterprise because it’s a competitor, told the AP that a combination of phrases and quotes in Pelczar’s stories aroused his suspicions.

    Pelczar’s story about a shooting in Yellowstone National Park included the sentence: “This incident serves as a stark reminder of the unpredictable nature of human behavior, even in the most serene settings.”

    Baker said the line sounded like the summaries of his stories that a certain chatbot seems to generate, in that it tacks on some kind of a “life lesson” at the end.

    Another story — about a poaching sentencing — included quotes from a wildlife official and a prosecutor that sounded like they came from a news release, Baker said. However, there wasn’t a news release and the agencies involved didn’t know where the quotes had come from, he said.

    Two of the questioned stories included fake quotes from Wyoming Gov. Mark Gordon that his staff only learned about when Baker called them.

    “In one case, (Pelczar) wrote a story about a new OSHA rule that included a quote from the Governor that was entirely fabricated,” Michael Pearlman, a spokesperson for the governor, said in an email. “In a second case, he appeared to fabricate a portion of a quote, and then combined it with a portion of a quote that was included in a news release announcing the new director of our Wyoming Game and Fish Department.”

    The most obvious AI-generated copy appeared in the story about Larry the Cable Guy that ended with the explanation of the inverted pyramid, the basic approach to writing a breaking news story.

    It’s not difficult to create AI stories. Users could put a criminal affidavit into an AI program and ask it to write an article about the case including quotes from local officials, said Alex Mahadevan, director of a digital media literacy project at the Poynter Institute, the preeminent journalism think tank.

    “These generative AI chatbots are programmed to give you an answer, no matter whether that answer is complete garbage or not,” Mahadevan said.

    Megan Barton, the Cody Enterprise’s publisher, wrote an editorial calling AI “the new, advanced form of plagiarism and in the field of media and writing, plagiarism is something every media outlet has had to correct at some point or another. It’s the ugly part of the job. But, a company willing to right (or quite literally write) these wrongs is a reputable one.”

    Barton wrote that the newspaper has learned its lesson, has a system in place to recognize AI-generated stories and will “have longer conversations about how AI-generated stories are not acceptable.”

    The Enterprise didn’t have an AI policy, in part because it seemed obvious that journalists shouldn’t use it to write stories, Bacon said. Poynter has a template from which news outlets can build their own AI policy.

    Bacon plans to have one in place by the end of the week.

    “This will be a pre-employment topic of discussion,” he said.

  • A judge has branded Google a monopolist, but AI may bring about quicker change in internet search

    SAN FRANCISCO — A federal judge has branded Google as a ruthless monopolist bent on suffocating its competitors. But how do you go about creating alternatives to a search engine that’s synonymous with internet exploration?

    It’s a process that may take years to unfold as Google appeals the landmark decision issued Monday by U.S. District Judge Amit Mehta.

    And with that kind of time frame looming, the forces of technological upheaval may make the exercise moot.

    The rise of artificial intelligence may reshape the landscape more quickly and profoundly than any judge ever could. The way consumers navigate the internet is more likely to be affected by advances in AI products — such as OpenAI’s ChatGPT and Google’s own Gemini — before a nearly 4-year-old case brought by the U.S. Justice Department is finally resolved.

    Even so, Mehta’s 277-page ruling Monday creates challenges for Google that company founders Larry Page and Sergey Brin probably didn’t envision when they set out to revolutionize internet search while attending Stanford University as graduate students. They eventually dropped out to start a Silicon Valley company in 1998 that adopted “Don’t Be Evil” as a motto that also was meant to serve as its corporate conscience.

    Page and Brin, who remain the controlling shareholders of Google’s corporate parent Alphabet Inc., also cast their cuddly startup as a crusader for technology that would be far better than the products coming out of Microsoft, the industry’s reigning kingpin at the time. Microsoft’s dominance of personal computer software and anticompetitive tactics during the 1990s spurred another Justice Department case that ended up hobbling Microsoft and helped make it easier for Google to build its lead in search and then expand into maps, cloud computing, email (Gmail), web browsers (Chrome) and video (YouTube).

    Now, the script has been flipped, with Google facing potential legal constraints, while a resurgent Microsoft has been making early headway in AI with a major helping hand from its investment in OpenAI. In one of the most dramatic scenarios that most experts think is unlikely to happen, Google might be forced to break up its business similar to how AT&T — once known as “Ma Bell” — ended up spinning off its telephone subsidiaries into separate “Baby Bells” more than 40 years ago.

    It will be left to Google CEO Sundar Pichai, who took over the company’s leadership from Page in 2015, to minimize the distractions caused by the legal skirmishing still to come and remain focused on an industrywide pivot to AI technology that’s expected to be as revolutionary as the mobile computing shift ushered in by Apple’s introduction of the iPhone in 2007.

    The debate about how Google should be overhauled will begin Sept. 6 with a hearing scheduled in Washington, D.C., before Mehta, who also presided over the 10-week trial last year that led to his antitrust decision.

    Google also will be pursuing an appeal, based on its long-held contention that it has done nothing wrong but build and maintain a search engine that has been far superior to anything else for more than 20 years. The Mountain View, California, company also maintains that competition is just a few clicks away, with consumers still free to go to other options, such as Microsoft’s Bing, DuckDuckGo and, more recently, AI-powered alternatives such as Perplexity and ChatGPT.

    Although Mehta praised the quality of Google’s search engine in his ruling and acknowledged the company initially became the people’s preferred choice in its early days, he concluded it resorted to unfair tactics to maintain its leadership during the past decade. Google did it, Mehta said, mainly by negotiating lucrative deals to cement a position as the default search engine on the iPhone and a wide range of other devices, including PCs.

    Those deals, which totaled $26 billion in 2021 alone, meant Google automatically processed search requests unless consumers took the time to manually go into their settings and choose another option — something that few do. The default option then helped Google collect valuable insights that enabled the company to improve its search engine in ways that rivals couldn’t because they lacked the same data.

    Searches processed through those default settings accounted for 60% of Google’s search traffic in 2017, Mehta pointed out in his ruling, and that volume in turn created more opportunities to sell the ads that generate the majority of its parent company’s $307 billion in annual revenue.

    Mehta’s focus on the default search deals in his ruling makes it likely, antitrust experts say, that he will ban them after the next trial phase is completed. That could have implications for other companies besides Google, especially Apple, which pockets about $20 billion annually from an arrangement currently scheduled to continue through 2026, with options to extend the alliance into 2028.

    Apple didn’t respond to a request for comment about Mehta’s decision, but its executives have depicted the decision to make Google the default search engine on the iPhone and other products as a convenience to its customers — most of whom prefer to use Google.

    But an order preventing Apple from doing default search engine deals with Google could do more than just siphon away revenue. It might also require Apple to spend heavily to develop its own search technology — an endeavor that Google estimated would cost more than $30 billion as part of a 2020 analysis that Mehta cited in his ruling. Then, it would cost Apple an additional $7 billion annually to sustain its own search engine, according to Google’s analysis.

    Source link

  • Meta is reportedly offering millions to use Hollywood voices in AI projects

    Meta is reportedly offering millions to use Hollywood voices in AI projects

    A future artificial intelligence product by Meta could have you chatting with celebrities. According to Bloomberg and The New York Times, the company is in talks with Awkwafina, Judi Dench and Keegan-Michael Key, among other celebrities from various Hollywood agencies for its AI projects. The company apparently intends to incorporate their voices into a conversational generative AI-slash-digital assistant called MetaAI, which is similar to Siri and Google Assistant.

    Meta plans to record their voices and to secure the right to use them in as many situations as possible across Facebook, Messenger, Instagram, WhatsApp and even the Ray-Ban Meta glasses. Bloomberg says negotiations have started and stopped several times because the two sides can’t agree on the terms of use. For now, they seem to have settled on a time limit, meaning any voice the company records can be used only for a set period. The deals could still be renewed or extended when the actors’ contracts are up.

    The actors’ representatives are still looking to negotiate for stricter limits, though SAG-AFTRA has reportedly reached an agreement with Meta on terms. SAG-AFTRA, if you’ll recall, fought for the establishment of provisions to protect actors from the threat of job loss due to AI when it went on strike last year. Under those terms, a company will have to pay actors and obtain their consent before it can use their AI-generated likeness. If Meta reaches a deal with the actors it’s talking to, it could pay them millions of dollars in fees.

    Meta is looking to finalize deals before its Connect conference in September, The Times says, where it’s expected to launch a bunch of AI products. During the same event last year, the company also introduced a chatbot platform with 28 “characters” voiced by celebrities, including Snoop Dogg, Paris Hilton, Dwyane Wade and Kendall Jenner. The Information reports that Meta has just quietly scrapped that project, and the celebrity chatbots’ pages on Facebook and Instagram are no longer available.

    Mariella Moon

    Source link

  • Amazon reports boost in quarterly profits but misses revenue estimates

    Amazon reports boost in quarterly profits but misses revenue estimates

    Amazon reported a boost in its quarterly profits Thursday, but the company missed revenue estimates, sending its stock lower in after-hours trading.

    The Seattle-based tech company said it earned $13.5 billion for the April-June period, higher than the $10.99 billion industry analysts surveyed by FactSet had anticipated. Amazon earned $6.7 billion during the same period last year.

    Earnings per share for the second quarter came out to $1.26, higher than analysts’ expectations of $1.03.

    However, investors reacted negatively to other results, leading Amazon shares to fall more than 6% after the closing bell. The company posted revenue of $148 billion, a 10% increase that fell slightly below analyst expectations of $148.67 billion.

    Amazon also said it expects revenue for the current quarter, which ends Sept. 30, to be between $154 billion and $158.5 billion — lower than the $158.22 billion forecast by analysts.
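
    Taking the figures above at face value, the year-over-year math checks out. Here is a minimal sketch in Python; the numbers are the ones reported in this article, and nothing else is assumed:

        # Figures reported above, in billions of USD
        profit_q2_2024 = 13.5   # April-June 2024 profit
        profit_q2_2023 = 6.7    # same period last year
        revenue_q2_2024 = 148.0 # Q2 2024 revenue, up 10% year over year

        # Profit roughly doubled year over year (~101% growth)
        print(f"Profit growth: {(profit_q2_2024 / profit_q2_2023 - 1) * 100:.0f}%")

        # A 10% increase implies prior-year revenue of roughly $134.5 billion
        print(f"Implied Q2 2023 revenue: ${revenue_q2_2024 / 1.10:.1f}B")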

    Amazon boosted its spending during the COVID-19 pandemic to keep up with higher demand from consumers who became more reliant on online shopping. But as demand cooled and wider economic conditions pressured other parts of its business, the company aggressively cut costs by eliminating unprofitable businesses and laying off more than 27,000 corporate employees.

    The cost-cutting has led to growth in profits. However, Amazon is also feeling the benefits of the buzz around generative artificial intelligence, which has helped reaccelerate its cloud computing unit, Amazon Web Services, after it experienced a slowdown.

    The company said Thursday that Amazon Web Services saw a 19% jump in revenue compared to the same period last year.

    “We’re continuing to make progress on a number of dimensions, but perhaps none more so than the continued reacceleration in AWS growth,” Amazon CEO Andy Jassy said in a statement.

    The cloud computing unit, whose customers are mostly businesses, has been attempting to lure in more customers with new tools, including a service called Amazon Bedrock that provides companies with access to AI models they can use to make their own applications. In April, Jassy said AWS was on pace for $100 billion in annual revenue.
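
    For a sense of what that looks like in practice, Bedrock is reached through AWS’s standard SDKs. Below is a minimal sketch using boto3, AWS’s Python SDK; the region, model ID and request-body shape are illustrative assumptions (the body schema differs per model family), not details from this article:

        import json
        import boto3  # AWS SDK for Python

        # Assumes AWS credentials are configured and the account has Bedrock model access
        client = boto3.client("bedrock-runtime", region_name="us-east-1")

        # Request body in the format used by Anthropic models hosted on Bedrock
        body = json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Write a product blurb for a desk lamp."}],
        })

        response = client.invoke_model(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
            body=body,
        )
        print(json.loads(response["body"].read())["content"][0]["text"])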

    But Amazon is also expected to spend more this year to support the unit. During a call with reporters, Chief Financial Officer Brian Olsavsky said the company spent more than $30 billion during the first half of the year on capital expenditures, the majority of it to boost infrastructure for AWS. It expects that to increase during the second half, he said.

    Like other tech companies, Amazon has been ramping up investments in data centers, chips and the power needed for AI workloads, Olsavsky said. Among other projects, the company plans to put billions toward additional infrastructure in Saudi Arabia, Mexico and Mississippi, where it has secured state incentives to build two data center “complexes.”

    “The key for us is always to make sure that we’re matching that supply and demand, and running it efficiently so we don’t have excess capacity,” Olsavsky said. “That’s not a concern right now. Our concern is more on getting the supply.”

    Meanwhile, revenue for the company’s core e-commerce business grew by 5%, more sluggish than in recent quarters. The numbers did not include sales from Amazon’s annual Prime Day discount event, which took place last month.

    Olsavsky said the company came up short on revenue growth in North America because customers were still being cautious with their spending and trading down to cheaper items.

    Amazon said sales from its advertising business — which mostly comes from ad listings on its online platform — jumped by 20%. Earlier this year, it began placing ads on movies and TV shows found on its Prime Video service to bring in extra dollars.

    Last month, Prime Video also became one of three companies to sign an 11-year media rights deal with the National Basketball Association.

    But the company faces other challenges.

    This week, federal regulators said Amazon was responsible for the recall of more than 400,000 hazardous products that were sold on its platform by third-party sellers and shipped using its fulfillment service.

    Amazon is also facing an antitrust lawsuit, which alleges it has been overcharging sellers and stifling competition.

    Amazon’s results followed other earnings reports this week from tech giants such as Microsoft, Meta and Google’s corporate parent, Alphabet Inc.

    Source link

  • Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol

    Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol

    Freelancer has accused Anthropic, the AI startup behind the Claude large language models, of ignoring its “do not crawl” robots.txt protocol to scrape its website’s data. Meanwhile, iFixit CEO Kyle Wiens said Anthropic has ignored the website’s policy prohibiting the use of its content for AI model training. Matt Barrie, the chief executive of Freelancer, told The Information that Anthropic’s ClaudeBot is “the most aggressive scraper by far.” His website allegedly got 3.5 million visits from the company’s crawler within a span of four hours, which is “probably about five times the volume of the number two” AI crawler. Similarly, Wiens posted on X/Twitter that Anthropic’s bot hit iFixit’s servers a million times in 24 hours. “You’re not only taking our content without paying, you’re tying up our devops resources,” he wrote.

    Back in June, Wired accused another AI company, Perplexity, of crawling its website despite the presence of the Robots Exclusion Protocol, or robots.txt. A robots.txt file typically contains instructions telling web crawlers which pages they can and can’t access. Compliance is voluntary, and bad bots have mostly just ignored it. After Wired’s piece came out, TollBit, a startup that connects AI firms with content publishers, reported that Perplexity isn’t the only one bypassing robots.txt signals. While it didn’t name names, Business Insider said it learned that OpenAI and Anthropic were ignoring the protocol as well.
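
    As background, robots.txt is a plain-text file served at a site’s root, and honoring it is entirely up to the crawler. Python’s standard library can parse these files; a minimal sketch, with an illustrative URL and user-agent string that are not taken from this article:

        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("https://example.com/robots.txt")  # illustrative URL
        rp.read()  # fetches and parses the file

        # A site blocking one crawler in particular would serve rules such as:
        #   User-agent: ClaudeBot
        #   Disallow: /
        # can_fetch() reports whether the parsed rules allow that agent to crawl a page
        print(rp.can_fetch("ClaudeBot", "https://example.com/some-page"))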

    Barrie said Freelancer tried to refuse the bot’s access requests at first, but it ultimately had to block Anthropic’s crawler entirely. “This is egregious scraping [which] makes the site slower for everyone operating on it and ultimately affects our revenue,” he added. As for iFixit, Wiens said the website has alarms set for high traffic, and his people were woken up at 3AM by Anthropic’s activities. The company’s crawler stopped scraping iFixit after the site added a line to its robots.txt file disallowing Anthropic’s bot in particular.

    The AI startup told The Information that it respects robots.txt and that its crawler “respected that signal when iFixit implemented it.” It also said that it aims “for minimal disruption by being thoughtful about how quickly [it crawls] the same domains,” which is why it’s now investigating the case.

    AI firms use crawlers to collect content from websites that they can use to train their generative AI technologies. They’ve become the target of multiple lawsuits as a result, with publishers accusing them of copyright infringement. To head off more lawsuits, companies like OpenAI have been striking deals with publishers and websites. OpenAI’s content partners so far include News Corp, Vox Media, the Financial Times and Reddit. iFixit’s Wiens seems open to the idea of signing a deal for the repair website’s articles as well, telling Anthropic in a tweet that he’s willing to have a conversation about licensing its content for commercial use.

    Mariella Moon

    Source link

  • Video game performers will go on strike over artificial intelligence concerns

    Video game performers will go on strike over artificial intelligence concerns

    LOS ANGELES — Hollywood’s video game performers announced they would go on strike Thursday, throwing part of the entertainment industry into another work stoppage after talks for a new contract with major game studios broke down over artificial intelligence protections.

    The strike — the second for video game voice actors and motion capture performers under the Screen Actors Guild-American Federation of Television and Radio Artists — will begin at 12:01 a.m. Friday. The move comes after nearly two years of negotiations with gaming giants, including divisions of Activision, Warner Bros. and Walt Disney Co., over a new interactive media agreement.

    SAG-AFTRA negotiators say gains have been made over wages and job safety in the video game contract, but that the two sides remained split over the regulation of generative AI. A spokesperson for the video game producers, Audrey Cooling, said the studios offered AI protections, but SAG-AFTRA’s negotiating committee said that the studios’ definition of who constitutes a “performer” is key to understanding the issue of who would be protected.

    “The industry has told us point blank that they do not necessarily consider everyone who is rendering movement performance to be a performer that is covered by the collective bargaining agreement,” SAG-AFTRA Chief Contracts Officer Ray Rodriguez said at a news conference Thursday afternoon. He said some physical performances are being treated as “data.”

    Without guardrails, game companies could train AI to replicate an actor’s voice, or create a digital replica of their likeness without consent or fair compensation, the union said.

    “We strike as a matter of last resort. We have given this process absolutely as much time as we responsibly can,” Rodriguez told reporters. “We have exhausted the other possibilities, and that is why we’re doing it now.”

    Cooling said the companies’ offer “extends meaningful AI protections.”

    “We are disappointed the union has chosen to walk away when we are so close to a deal, and we remain prepared to resume negotiations,” she said.

    Andi Norris, an actor and member of the union’s negotiating committee, said that those who do stunt work or creature performances would still be at risk under the game companies’ offer.

    “The performers who bring their body of work to these games create a whole variety of characters, and all of that work must be covered. Their proposal would carve out anything that doesn’t look and sound identical to me as I sit here, when, in truth, on any given week I am a zombie, I am a soldier, I am a zombie soldier,” Norris said. “We cannot and will not accept that a stunt or movement performer giving a full performance on stage next to a voice actor isn’t a performer.”

    The global video game industry generates well over $100 billion in profit annually, according to game market forecaster Newzoo. The people who design and bring those games to life are the driving force behind that success, SAG-AFTRA said.

    Members voted overwhelmingly last year to give leadership the authority to strike. Concerns about how movie studios will use AI helped fuel last year’s film and television strikes by the union, which lasted four months.

    The last interactive contract, which expired in November 2022, did not provide protections around AI but secured a bonus compensation structure for voice actors and performance capture artists after an 11-month strike that began in October 2016. That work stoppage marked the first major labor action from SAG-AFTRA following the merger of Hollywood’s two largest actors unions in 2012.

    The video game agreement covers more than 2,500 “off-camera (voiceover) performers, on-camera (motion capture, stunt) performers, stunt coordinators, singers, dancers, puppeteers, and background performers,” according to the union.

    Amid the tense interactive negotiations, SAG-AFTRA created a separate contract in February that covered independent and lower-budget video game projects. The tiered-budget independent interactive media agreement contains some of the protections on AI that video game industry titans have rejected. Games signed to an interim interactive media agreement, tiered-budget independent interactive agreement or interim interactive localization agreement are not part of the strike, the union said.

    Source link

  • Google gives free Gemini users access to its faster, lighter 1.5 Flash AI model

    Google gives free Gemini users access to its faster, lighter 1.5 Flash AI model

    Google is making its Gemini AI faster and more efficient across the board. You now have access to 1.5 Flash, a generative AI model designed to generate responses more quickly and efficiently, even if you’re not paying for Gemini Advanced. The company says you’ll notice improvements in latency, as well as in the tool’s reasoning and image understanding, on both the web and mobile.
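
    The consumer app involves no code, but the same 1.5 Flash model is also reachable programmatically. Here is a minimal sketch using Google’s google-generativeai Python package, assuming it is installed and an API key has been created in Google AI Studio; neither detail comes from this article:

        import google.generativeai as genai

        genai.configure(api_key="YOUR_API_KEY")  # assumption: key from Google AI Studio

        # "gemini-1.5-flash" is the lighter, lower-latency model described above
        model = genai.GenerativeModel("gemini-1.5-flash")
        response = model.generate_content("Explain a context window in one sentence.")
        print(response.text)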

    In addition, it’s expanding the AI assistant’s context window, so that you can have longer conversations with it and ask it more complex questions. In the near future, Google will also give you the ability to upload files to Gemini from Google Drive or from your device. If you give it access to your notes, for instance, it will be able to create a study guide or a practice exam for you. Plus, the assistant will be able to analyze data and make it easier to digest with graphics and charts.

    As part of its work to reduce hallucinations, Google is now displaying links to related content if you ask it questions that require factual answers. It will display a “gray chip” at the end of a paragraph in its response that links to websites where you can read more about the topic. Those chips could even lead to your emails, if you’ve linked Gemini to your Gmail account. The feature is currently limited to select locations for English prompts only.

    The company is making Gemini more accessible overall, as well. It has started gradually rolling out Gemini in Google Messages for Android devices in the European Economic Area (EU, Iceland, Liechtenstein and Norway), the UK and Switzerland. You’ll now also be able to chat with Gemini in French, Polish and Spanish within the Messages app. Finally, Google is expanding access to Gemini’s mobile app to more regions and is giving more teenagers the ability to use the AI tool. As long as you meet its minimum age requirement of 13, you’ll be able to chat with the assistant. Google has even introduced a teen-specific onboarding process and an AI literacy guide, so you can get an idea of how to use the tool to accomplish your tasks.

    Mariella Moon

    Source link

  • AI could supercharge disinformation and disrupt EU elections, experts warn

    AI could supercharge disinformation and disrupt EU elections, experts warn

    BRUSSELS (AP) — Voters in the European Union are set to elect lawmakers starting Thursday for the bloc’s parliament, in a major democratic exercise that’s also likely to be overshadowed by online disinformation.

    Experts have warned that artificial intelligence could supercharge the spread of fake news that could disrupt the election in the EU and many other countries this year. But the stakes are especially high in Europe, which has been confronting Russian propaganda efforts as Moscow’s war with Ukraine drags on.

    Here’s a closer look:

    WHAT’S HAPPENING?

    Some 360 million people in 27 nations — from Portugal to Finland, Ireland to Cyprus — will choose 720 European Parliament lawmakers in an election that runs Thursday to Sunday. In the months leading up to the vote, experts have observed a surge in the quantity and quality of fake news and anti-EU disinformation being peddled in member countries.

    A big fear is that deceiving voters will be easier than ever, thanks to new AI tools that make it simple to create misleading or false content. Some of the malicious activity is domestic, some international. Russia is most widely blamed, and sometimes China, though hard evidence directly attributing such attacks is difficult to pin down.

    “Russian state-sponsored campaigns to flood the EU information space with deceptive content is a threat to the way we have been used to conducting our democratic debates, especially in election times,” Josep Borrell, the EU’s foreign policy chief, warned on Monday.

    He said Russia’s “information manipulation” efforts take advantage of increasing social media penetration “and cheap AI-assisted operations.” Bots are being used to push smear campaigns against European political leaders who are critical of Russian President Vladimir Putin, he said.

    HAS ANY DISINFO HAPPENED YET?

    There have been plenty of examples of election-related disinformation.

    Two days before national elections in Spain last July, a fake website was registered that mirrored one run by authorities in the capital Madrid. It posted an article falsely warning of a possible attack on polling stations by the disbanded Basque militant separatist group ETA.

    In Poland, two days before the October parliamentary election, police descended on a polling station in response to a bogus bomb threat. Social media accounts linked to what authorities call the Russian interference “infosphere” claimed a device had exploded.

    Just days before Slovakia’s parliamentary election in November, AI-generated audio recordings impersonated a candidate discussing plans to rig the election, leaving fact-checkers scrambling to debunk them as false as they spread across social media.

    Just last week, Poland’s national news agency carried a fake report saying that Prime Minister Donald Tusk was mobilizing 200,000 men starting on July 1, in an apparent hack that authorities blamed on Russia. The Polish News Agency “killed,” or removed, the report minutes later and issued a statement saying that it wasn’t the source.

    It’s “really worrying, and a bit different than other efforts to create disinformation from alternative sources,” said Alexandre Alaphilippe, executive director of EU DisinfoLab, a nonprofit group that researches disinformation. “It raises notably the question of cybersecurity of the news production, which should be considered as critical infrastructure.”

    WHAT’S THE GOAL OF DISINFORMATION?

    Experts and authorities said Russian disinformation is aimed at disrupting democracy by deterring voters across the EU from heading to the ballot box.

    “Our democracy cannot be taken for granted, and the Kremlin will continue using disinformation, malign interference, corruption and any other dirty tricks from the authoritarian playbook to divide Europe,” European Commission Vice-President Vera Jourova warned the parliament in April.

    Tusk, meanwhile, called out Russia’s “destabilization strategy on the eve of the European elections.”

    On a broader level, the goal of “disinformation campaigns is often not to disrupt elections,” said Sophie Murphy Byrne, senior government affairs manager at Logically, an AI intelligence company. “It tends to be ongoing activity designed to appeal to conspiracy mindsets and erode societal trust,” she told an online briefing last week.

    Narratives are also fabricated to fuel public discontent with Europe’s political elites, attempt to divide communities over issues like family values, gender or sexuality, sow doubts about climate change and chip away at Western support for Ukraine, EU experts and analysts say.

    WHAT HAS CHANGED?

    Five years ago, when the last European Union election was held, most online disinformation was laboriously churned out by “troll farms” employing people working in shifts writing manipulative posts in sometimes clumsy English or repurposing old video footage. Fakes were easier to spot.

    Now, experts have been sounding the alarm about the rise of generative AI, which they say threatens to supercharge the spread of election disinformation worldwide. Malicious actors can use the same technology that underpins easy-to-use platforms, like OpenAI’s ChatGPT, to create authentic-looking deepfake images, videos and audio. Anyone with a smartphone and a devious mind can potentially create false but convincing content aimed at fooling voters.

    “What is changing now is the scale that you can achieve as a propaganda actor,” said Salvatore Romano, head of research at AI Forensics, a nonprofit research group. Generative AI systems can now be used to automatically pump out realistic images and videos and push them out to social media users, he said.

    AI Forensics recently uncovered a network of pro-Russian pages that it said took advantage of Meta’s failure to moderate political advertising in the European Union.

    Fabricated content is now “indistinguishable” from the real thing and takes disinformation experts a lot longer to debunk, Romano said.

    WHAT ARE AUTHORITIES DOING ABOUT IT?

    The EU is using a new law, the Digital Services Act, to fight back. The sweeping law requires platforms to curb the risk of spreading disinformation and can be used to hold them accountable under the threat of hefty fines.

    The bloc is using the law to demand information from Microsoft about risks posed by its Bing Copilot AI chatbot, including concerns about “automated manipulation of services that can mislead voters.”

    The DSA has also been used to investigate Facebook and Instagram owner Meta Platforms for not doing enough to protect users from disinformation campaigns.

    The EU has passed a wide-ranging artificial intelligence law, which includes a requirement for deepfakes to be labelled, but it won’t arrive in time for the vote and will take effect over the next two years.

    HOW ARE SOCIAL MEDIA COMPANIES RESPONDING?

    Most tech companies have touted the measures they’re taking to protect the European Union’s “election integrity.”

    Meta Platforms — owner of Facebook, Instagram and WhatsApp — has said it will set up an election operations center to identify potential online threats. It also has thousands of content reviewers working in the EU’s 24 official languages and is tightening up its policies on AI-generated content, including labeling it and “downranking” material that violates its standards.

    Nick Clegg, Meta’s president of global affairs, has said there’s no sign that generative AI tools are being used on a systemic basis to disrupt elections.

    TikTok said it will set up fact-checking hubs in its app. YouTube owner Google said it’s working with fact-checking groups and will use AI to “fight abuse at scale.”

    Elon Musk went the opposite way with his social media platform X, previously known as Twitter. “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone,” he said in a post in September.

    ___

    A previous version of this story misspelled the given name of EU foreign policy chief Josep Borrell.

    Source link