ReportWire

Tag: Technology

  • California backs down on AI laws so more tech leaders don’t flee the state

    California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.

    The tactic appeared to have worked, activists said: some politicians weakened or scrapped guardrails meant to mitigate AI’s biggest risks.

    California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

    Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

    “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

    The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

    Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the tech industry has become more empowered at the national and state levels.

    Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

    In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

    “We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

    Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

    “They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

    Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

    “If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

    Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

    The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.

    “Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

    From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings to the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, making up the bulk of its spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

    Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

    The threat that California companies could move away has caught the attention of some politicians.

    California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

    Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

    “Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

    OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

    “California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

    Critics — which included some tech leaders such as Elon Musk, Meta and former OpenAI executives as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over public good.

    Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

    The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, withdrew their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.

    Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which would have required employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

    Called the “No Robo Bosses Act,” the legislation failed to win the governor’s signature; he considered it too broad.

    “A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

    The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

    Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

    “The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

    Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

    “That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”

    Queenie Wong

    Source link

  • As Russian Drone Incursions Rattle Europe, Poland and Romania Deploy a New Defensive System

    Poland and Romania are deploying a new weapons system to defend against Russian drones, following a spate of incursions into NATO airspace in recent months that exposed the alliance’s vulnerabilities and put Europe on edge.

    The American Merops system, which is small enough to fit in the back of a midsized pickup truck, can identify drones and close in on them, using artificial intelligence to navigate when satellite and electronic communications are jammed.

    As well as being deployed in Poland and Romania, Merops will also be used by Denmark, NATO military officials told The Associated Press, part of a move to boost defenses on the alliance’s eastern flank.

    The aim is to make the border with Russia so well-armed that Moscow’s forces will be deterred from ever contemplating crossing, from Norway in the north to Turkey in the south, the officials said.

    After Russian drones crossed into Polish airspace in September, Romania faced a drone incursion of its own, while drones temporarily closed airports in Copenhagen, Munich, Berlin and Brussels. There were also sightings near military bases in Belgium and Denmark.

    While the origin of the drones could not always be traced to Russia or linked to its war in Ukraine, the urgent need to bolster defenses is clear. A protracted drone battle — or full-scale war as in Ukraine — would drain Western coffers and limited stocks of missiles.

    “What this system does is give us very accurate detection,” said Col. Mark McLellan, assistant chief of staff operations at NATO Allied Land Command. “It’s able to target the drones and take them down and at a low cost as well … It’s a lot cheaper than flying an F-35 into the air to take them down with a missile.”


    A bird, a plane, or a drone?

    Drones fly low and slow, making them hard to pinpoint on radar systems calibrated for spotting high-speed missiles. They can also be mistaken for birds or planes. The Merops system, NATO officials said, helps plug those gaps.

    Merops “basically flies drones against drones,” said McLellan, either by firing directly at the hostile drone or by passing information from the system to ground or air forces so that they can shoot it down.

    Merops gives commanders “a certain amount of time to be able to assess the threat and decide — to shoot or not shoot,” said Brig. Gen. Thomas Lowin, deputy chief of staff operations at NATO Allied Land Command.

    It can be used to protect both critical infrastructure, such as airports, and armed forces maneuvering in a combat zone, he added.

    NATO is now deploying the first systems along the borders of Poland and Romania, while Denmark has also decided to acquire the Merops technology, Lowin said.

    Former Google CEO Eric Schmidt has invested in Merops, but both he and the company are keeping a low public profile, declining requests for interviews. Defense officials from Poland and Romania also refused to comment publicly.

    The Russian incursions have concentrated minds in Europe, highlighting the need for new defenses against a rapidly developing form of warfare. The Merops system is one of many that European militaries would need to tip the scales of a drone war in NATO’s favor.

    European companies are now developing new technologies, including drone-against-drone systems like Merops and anti-drone missiles, while European Union countries have agreed to work together to create a “drone wall” on the bloc’s eastern border.

    U.S. military leaders in Europe are also advocating for the creation of an Eastern Flank Deterrence Line, a layered zone of defenses along NATO’s border.

    The commanding general for the U.S. Army in Europe and Africa — and head of NATO’s Allied Land Command — Gen. Chris Donahue said in July that he wants to create a network of sensors and a command-and-control system that will work with almost any hardware available — allowing systems to be swapped in and out as they are updated or become obsolete.

    Russia has conscription and a large military, which means it has more forces immediately deployable along its borders than NATO does. The alliance needs to build defenses that offset that manpower advantage by using its technological capabilities, Donahue said.

    Merops is the first phase of building those defenses, said Lowin, a process which is forecast to take two to five years.

    The drone incursions and the instability on NATO’s eastern flank stem from Russia’s war in Ukraine, now approaching the end of its fourth year. The conflict has become a crucible for drone development, transforming the battlefield into a testing zone for new technology which now has applications elsewhere in Europe.

    The Merops system has been chosen because it has been used successfully in Ukraine. If something doesn’t work there, it’s “probably not worthwhile acquiring,” Lowin said.

    Drones are evolving rapidly, and each new type demands a different response: The challenge is to identify the threat and then almost immediately work out how to attack it, said Brig. Gen. Zacarias Hernandez, deputy chief of staff plans at NATO Allied Land Command.

    That requires extremely fast production cycles — from development to battlefield within weeks.

    Meanwhile, Russia is also mass-producing attack drones, equipping them with cameras, jet-propelled engines and advanced anti-jamming antennae.

    It, too, has been forced to adapt, as Russian President Vladimir Putin acknowledged in early October.

    Speaking about the military’s initial failures in Ukraine, Putin publicly admitted that “there were entire fields where our knowledge was simply non-existent” but claimed Russia was now able to field more advanced technology “within a matter of days.”

    Ukraine, NATO and Russia are in a game of technological cat-and-mouse, the NATO officials suggested.

    “We see what Russia is doing in Ukraine,” said Hernandez. “We have to be ready for that.”

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

    Source link

  • Denmark Eyes New Law to Protect Citizens From AI Deepfakes

    COPENHAGEN, Denmark (AP) — In 2021, Danish video game live-streamer Marie Watson received an image of herself from an unknown Instagram account.

    She instantly recognized the holiday snap from her Instagram account, but something was different: Her clothing had been digitally removed to make her appear naked. It was a deepfake.

    “It overwhelmed me so much,” Watson recalled. “I just started bursting out in tears, because suddenly, I was there naked.”

    In the four years since her experience, deepfakes — highly realistic artificial intelligence-generated images, videos or audio of real people or events — have become not only easier to make worldwide but also look or sound exponentially more realistic. That’s thanks to technological advances and the proliferation of generative AI tools, including video generation tools from OpenAI and Google.

    These tools give millions of users the ability to easily spit out content, including for nefarious purposes that range from creating fake imagery of celebrities such as Taylor Swift and Katy Perry to disrupting elections and humiliating teens and women.

    In response, Denmark is seeking to protect ordinary Danes, as well as performers and artists who might have their appearance or voice imitated and shared without their permission. A bill that’s expected to pass early next year would change copyright law by imposing a ban on the sharing of deepfakes to protect citizens’ personal characteristics — such as their appearance or voice — from being imitated and shared online without their consent.

    If enacted, the law would give Danish citizens copyright over their own likeness. In theory, they then would be able to demand that online platforms take down content shared without their permission. The law would still allow for parodies and satire, though it’s unclear how that will be determined.

    Experts and officials say the Danish legislation would be among the most extensive steps yet taken by a government to combat misinformation through deepfakes.

    Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI, said that he applauds the Danish government for recognizing that the law needs to change.

    “Because right now, when people say ‘what can I do to protect myself from being deepfaked?’ the answer I have to give most of the time is: ‘There isn’t a huge amount you can do,’” he said, ”without me basically saying, ‘scrub yourself from the internet entirely.’ Which isn’t really possible.”

    He added: “We can’t just pretend that this is business as usual for how we think about those key parts of our identity and our dignity.”


    Deepfakes and misinformation

    U.S. President Donald Trump signed bipartisan legislation in May that makes it illegal to knowingly publish or threaten to publish intimate images without a person’s consent, including deepfakes. Last year, South Korea rolled out measures to curb deepfake porn, including harsher punishment and stepped up regulations for social media platforms.

    Danish Culture Minister Jakob Engel-Schmidt said that the bill has broad support from lawmakers in Copenhagen, because such digital manipulations can stir doubts about reality and spread misinformation.

    “If you’re able to deepfake a politician without her or him being able to have that product taken down, that will undermine our democracy,” he told reporters during an AI and copyright conference in September.

    The law would apply only in Denmark, and is unlikely to involve fines or imprisonment for social media users. But big tech platforms that fail to remove deepfakes could face severe fines, Engel-Schmidt said.

    Ajder said Google-owned YouTube, for example, has a “very, very good system for getting the balance between copyright protection and freedom of creativity.”

    The platform’s efforts suggest that it recognizes “the scale of the challenge that is already here and how much deeper it’s going to become,” he added.

    Twitch, TikTok and Meta, which owns Facebook and Instagram, didn’t respond to requests for comment.

    Engel-Schmidt said that Denmark, the current holder of the European Union’s rotating presidency, had received interest in its proposed legislation from several other EU members, including France and Ireland.

    Intellectual property lawyer Jakob Plesner Mathiasen said that the legislation shows the widespread need to combat the online danger that’s now infused into every aspect of Danish life.

    “I think it definitely goes to say that the ministry wouldn’t make this bill, if there hadn’t been any occasion for it,” he said. “We’re seeing it with fake news, with government elections. We are seeing it with pornography, and we’re also seeing it also with famous people and also everyday people — like you and me.”

    The Danish Rights Alliance, which protects the rights of creative industries on the internet, supports the bill; its director says that current copyright law doesn’t go far enough.

    Danish voice actor David Bateson, for example, was at a loss when AI voice clones were shared by thousands of users online. Bateson voiced a character in the popular “Hitman” video game, as well as Danish toymaker Lego’s English advertisements.

    “When we reported this to the online platforms, they say ‘OK, but which regulation are you referring to?’” said Maria Fredenslund, an attorney and the alliance’s director. “We couldn’t point to an exact regulation in Denmark.”


    ‘When it’s online, you’re done’

    Watson had heard about fellow influencers who found digitally-altered images of themselves online, but never thought it might happen to her.

    Delving into a dark side of the web where faceless users sell and share deepfake imagery — often of women — she said she was shocked how easy it was to create such pictures using readily available online tools.

    “You could literally just search ‘deepfake generator’ on Google or ‘how to make a deepfake,’ and all these websites and generators would pop up,” the 28-year-old Watson said.

    She is glad her government is taking action, but she isn’t hopeful. She believes more pressure must be applied to social media platforms.

    “It shouldn’t be a thing that you can upload these types of pictures,” she said. “When it’s online, you’re done. You can’t do anything, it’s out of your control.”

    Stefanie Dazio in Berlin, Kelvin Chan in London and Barbara Ortutay in San Francisco contributed to this report.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

    Source link

  • Motion Picture Association tells Meta to stop using PG-13 to refer to Instagram teen account content

    The Motion Picture Association is asking Meta to stop referring to content shown to teen accounts on Instagram as “guided by PG-13 ratings,” saying it is misleading and could erode trust in its movie ratings system.

    A lawyer on behalf of the MPA sent Meta Platforms a cease-and-desist letter asking the tech giant to “immediately and permanently disassociate its Teen Accounts and AI tools from the MPA’s rating system.”

    Instagram announced last month that its teen accounts would be restricted to seeing PG-13 content by default. The Motion Picture Association, which runs the film rating system that was established nearly 60 years ago, said at the time that it was not contacted by Meta prior to its announcement.

    The MPA says Meta’s claims that its Teen Accounts will be “guided by” PG-13 ratings and that its Teen Account content settings are “generally aligned with movie ratings for ages 13+” are “false and highly misleading.” The association’s movie ratings, which range from G to NC-17, are assigned by parents who watch entire movies and evaluate them to come up with a rating.

    “Meta’s attempts to restrict teen content literally cannot be ‘guided by’ or ‘aligned with’ the MPA’s PG-13 movie rating because Meta does not follow this curated process,” the association’s letter says. “Instead, Meta’s content restrictions appear to rely heavily on artificial intelligence or other automated technology measures.”

    In a statement, Meta said it updated its teen content policies to be “closer to PG-13 movie standards — which parents already know” so parents can better understand what their teens see on Instagram.

    “We know social media isn’t the same as movies, but we made this change to support parents, and we hope to work with the MPA to continue bringing families this clarity,” the company said. Meta added that its intent was never to suggest that it partnered with the MPA or that the material on Instagram had been rated by the movie association.

    Source link

  • Indians who fled a Myanmar cyberscam center are being flown home from Thailand

    MAE SOT, Thailand (AP) — India is repatriating on Thursday the first batch of hundreds of its nationals who last month fled to Thailand from Myanmar, where most had been working at a notorious center for online scams.

    The center, known as KK Park on the outskirts of the border city of Myawaddy and said to house a major cybercrime operation, was raided by Myanmar’s army in mid-October to suppress cross-border online scams and illegal gambling.

    An Indian air force transport plane left Thailand en route to India and another plane was to leave later in the day, with about 270 of the 465 Indians who are to be repatriated. The remainder will leave Thailand next Monday, according to Maj. Gen. Maitree Chupreecha, commander of the Thai army’s northern region Naresuan Task Force.

    In March, India repatriated 549 nationals after an earlier crackdown on cybercrime operations at the Myanmar-Thai border.

    Those currently being repatriated are among more than 1,500 people from 28 nations who fled the raid in Myawaddy. Across the border in the Thai town of Mae Sot, Thai authorities had set up temporary facilities for housing and processing not just Indians, but also Chinese, Filipinos, Vietnamese, Ethiopians and Kenyans, among other nationalities.

    In April, the U.N. Office on Drugs and Crime estimated that hundreds of industrial-scale scam centers generate just under $40 billion in annual profits.

    Southeast Asia is the world epicenter for online scams, and hundreds of thousands of people are believed to have been lured to work in Myanmar, Cambodia and Laos, where many were forced to perpetrate global scams involving false romances, fraudulent investments, and illegal gambling.

    Human trafficking is another major criminal aspect of such operations, as many of the workers were recruited with false offers of legitimate jobs, only to find themselves trapped in virtual slavery.

    State media in military-run Myanmar said the raid on KK Park was part of operations starting in early September to suppress cross-border online scams and illegal gambling. Since the raid, witnesses and the Thai army have said that parts of KK Park were demolished by explosions.

    However, independent Myanmar media, including The Irrawaddy, an online news service, have reported that organized criminal scams in Myanmar continue to operate in the Myawaddy area.

    The cybercrime problem received major attention last month when the United States and Britain enacted sanctions against organizers of a major Cambodian cyberscam gang, and its alleged ringleader was indicted by a U.S. federal court in New York.

    In South Korea, the case of a young man, killed after apparently being lured to work at a cyberscam operation in Cambodia, caused an uproar.

    Source link

  • Gemini AI to transform Google Maps into a more conversational experience

    Google Maps is heading in a new direction with artificial intelligence sitting in the passenger’s seat.

    Fueled by Google’s Gemini AI technology, the world’s most popular navigation app will become a more conversational companion as part of a redesign announced Wednesday.

    The hands-free experience is meant to turn Google Maps into something more like an insightful passenger able to direct a driver to a destination while also providing nearby recommendations on places to eat, shop or sightsee, when asked for the advice.

    “No fumbling required — now you can just ask,” Google promised in a blog post about the app makeover.

    The AI features are also supposed to enable Google Maps to be more precise by calling out landmarks to denote the place to make a turn instead of relying on distance notifications.

    AI chatbots, like Gemini and OpenAI’s ChatGPT, have sometimes lapsed into periods of making things up — known as “hallucinations” in tech speak — but Google is promising that built-in safeguards will prevent Maps from accidentally sending drivers down the wrong road.

    All the information that Gemini is drawing upon will be culled from the roughly 250 million places stored in Google Maps’ database of reviews accumulated during the past 20 years.

    Google Maps’ new AI capabilities will be rolling out to both Apple’s iPhone and Android mobile devices.

    That will give Google’s Gemini a massive audience to impress — or disappoint — with its AI prowess, given the navigation app is used by more than 2 billion people around the world. Besides making it even more indispensable, Google is hoping the AI features will turn into a showcase that helps give Gemini a competitive edge against ChatGPT.

    Prodded by OpenAI’s release of ChatGPT in late 2022, Google has been steadily rolling out more of its own technology designed to ensure its products continue to evolve with the upheaval being unleashed by AI. The changes have included an overhaul of Google’s ubiquitous search engine that has de-emphasized a listing of relevant web links in its results and increasingly highlighted AI overviews and conversational responses provided through an AI mode.

    Source link

  • Phony AI-generated videos of Hurricane Melissa flood social media sites

    One viral video shows what appears to be four sharks swimming in a Jamaican hotel’s pool as floodwaters allegedly brought on by Hurricane Melissa swamp the area. Another purportedly depicts Jamaica’s Kingston airport completely ravaged by the storm. But neither of these events happened; the clips are AI-generated misinformation that circulated on social media as the storm churned across the Caribbean this week.

    These videos and others have racked up millions of views on social media platforms, including X, TikTok and Instagram.

    Some of the clips appear to be spliced together or based on footage of old disasters. Others appear to be created entirely by AI video generators.

    “I am in so many WhatsApp groups and I see all of these videos coming. Many of them are fake,” said Jamaica’s Education Minister Dana Morris Dixon on Monday. “And so we urge you to please listen to the official channels.”

    Although it’s common for hoax photos, videos and misinformation to surface during natural disasters, they’re usually debunked quickly. But videos generated by new artificial intelligence tools have taken the problem to a new level by making it easy to create and spread realistic clips.

    In this case, the content has been showing up in social media feeds alongside genuine footage shot by local residents and news organizations, sowing confusion among social media users.

    Here are a few steps you can take to reduce your chances of getting fooled.

    Check for watermarks

    Look for a watermark logo indicating that the video was generated by Sora, a text-to-video tool launched by ChatGPT-maker OpenAI, or other AI video generators. These will usually appear in one of the corners of a video or photo.

    It is quite easy to remove these logos using third-party tools, so you can also check for blurs, pixelation or discoloration where a watermark should be.

    Take a closer look

    Look more closely at videos for unclear details. While the sharks-in-pool video appears realistic at first glance, it looks less believable upon closer examination because one of the sharks has a strange shape.

    You might see objects that blend together, or details such as lettering on a sign that are garbled, which are telltale signs of AI-generated imagery. Branding is also something to look out for as many platforms are cautious about reproducing specific company logos.

    Experts say it’s going to get increasingly harder to tell the difference between reality and deepfakes as the technology improves.

    Experts noted that Melissa is the first big natural disaster since OpenAI launched the latest version of its video generation tool Sora last month.

    “Now, with the rise of easily accessible and powerful tools like Sora, it has become even easier for bad actors to create and distribute highly convincing synthetic videos,” said Sofia Rubinson, a senior editor at NewsGuard, which analyzes online misinformation.

    “In the past, people could often identify fakes through telltale signs like unnatural motion, distorted text, or missing fingers. But as these systems improve, many of those flaws are disappearing, making it increasingly difficult for the average viewer to distinguish AI-generated content from authentic footage.”

    Why create deepfakes around a crisis?

    AI expert Henry Ajder said most of the hurricane deepfakes he’s seen aren’t inherently political. He suspects it’s “much closer to more traditional kind of click-based content, which is to try and get engagement, to try and get clicks.”

    On X, users can get paid based on the amount of engagement their posts get. YouTubers can earn money from ads.

    A video that racks up millions of views could earn the creator a few thousand dollars, Ajder said, not bad for the amount of effort needed.

    Social media accounts also use videos to expand their follower base in order to promote projects, products or services, Ajder said.

    So check who’s posting the video. If the account has a track record of clickbait-style content, be skeptical.

    But keep in mind that the people behind deepfake videos aren’t always trying to hide.

    “Some creators are just trying to do interesting things using AI that they think are going to get people’s attention,” he said.

    So who is behind the account?

    While it’s unclear who exactly created the pool shark video, one version found on Instagram carries the watermark for a TikTok account, Yulian_Studios. That account’s TikTok profile describes itself, in Spanish, as a “Content creator with AI visual effects in the Dominican Republic.”

    The shark video can’t be found on the account’s page, but it does have another AI-generated clip of an obese man clinging to a palm tree as hurricane winds blow in Jamaica.

    Trust your gut

    Context matters. Take a beat to consider whether what you’re seeing is plausible. The Poynter journalism website advises that if you see a situation that seems “exaggerated, unrealistic or not in character,” consider that it could be a deepfake.

    That includes the audio. AI videos used to come with synthetic voice-overs that had unusual cadence or tone, but newer tools can create synchronized sound that sounds realistic.

    And if you found it on X, make sure to check whether there’s a community note attached, which is the platform’s user-powered fact-checking tool.

    One version of the shark pool video on X comes with a community note that says: “This video footage and the voice used were both created by artificial intelligence, it is not real footage of hurricane Melissa in Jamaica.”

    Go to an official source

    Don’t just rely on random strangers on the internet for information. The Jamaican government has been posting storm updates and so has the National Hurricane Center.

    Source link

  • Yale Study Quantifies How Much Elon Musk’s Politics Have Cost Tesla

    Tesla’s fading momentum may have less to do with its cars and more to do with its CEO’s politics. Andrew Harnik/Getty Images

    How did Tesla go from the world’s fastest-growing automaker to a company beleaguered by slowing sales and shrinking market share? According to a team of Yale researchers, the answer lies in the polarizing and partisan behavior of CEO Elon Musk.

    Sure, Tesla has faced headwinds from aging models, rising competition, and a saturated customer base. But an analysis of county-level data shows that its declining demand is also linked to Musk’s increasingly political actions. The study’s authors estimate that Tesla would have sold between 1 million and 1.26 million more cars in recent years without what they call the “Musk partisan effect.”

    During the most recent quarter, Tesla’s profit plunged 37 percent year-over-year. Revenue fell for two consecutive quarters this year, though the most recent quarter saw a rebound thanks to a tax-credit-fueled buying rush.

    The Yale researchers argue that much of Tesla’s decline stems from the alienation of its traditional consumer base. Drawing on vehicle registration data from S&P Global and county-level voting records, they found that Tesla’s customer base has long leaned Democratic and environmentally conscious.

    That began to change in 2022, when Musk acquired X and rolled back content moderation policies. The shift deepened amid his involvement in the 2024 U.S. presidential election and his subsequent appointment as head of the Trump administration’s Department of Government Efficiency (DOGE). “Musk’s actions antagonized his most loyal customer base,” the authors wrote.

    The trend has only grown more pronounced. Between October 2022 and April 2025, Musk’s partisan behavior caused Tesla to lose between 67 percent and 83 percent of its potential car sales, according to the study. In the first quarter of 2025 alone, that figure jumped to 150 percent.
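
    The article does not describe the Yale team’s actual methodology, so the following is only an illustrative sketch of the general idea behind a counterfactual sales estimate: fit the pre-2022 relationship between a county’s Democratic vote share and its Tesla registrations, project that relationship forward, and treat the shortfall between projected and observed registrations as lost sales. All names and numbers below are invented for illustration and are not drawn from the study.

    ```typescript
    // Illustrative sketch only; not the Yale study's method. Toy data throughout.

    interface CountyQuarter {
      demVoteShare: number;   // county Democratic vote share, 0..1
      registrations: number;  // observed Tesla registrations that quarter
    }

    // Ordinary least-squares fit of registrations ~ a + b * demVoteShare.
    function fitLinear(data: CountyQuarter[]): { a: number; b: number } {
      const n = data.length;
      const meanX = data.reduce((s, d) => s + d.demVoteShare, 0) / n;
      const meanY = data.reduce((s, d) => s + d.registrations, 0) / n;
      let num = 0;
      let den = 0;
      for (const d of data) {
        num += (d.demVoteShare - meanX) * (d.registrations - meanY);
        den += (d.demVoteShare - meanX) ** 2;
      }
      const b = num / den;
      return { a: meanY - b * meanX, b };
    }

    // Lost sales = sum over counties of (projected - observed), floored at zero.
    function estimateLostSales(pre2022: CountyQuarter[], current: CountyQuarter[]): number {
      const { a, b } = fitLinear(pre2022);
      return current.reduce(
        (lost, d) => lost + Math.max(0, a + b * d.demVoteShare - d.registrations),
        0
      );
    }

    // Toy example: demand in heavily Democratic counties falls below the projection.
    const pre = [
      { demVoteShare: 0.3, registrations: 100 },
      { demVoteShare: 0.6, registrations: 400 },
      { demVoteShare: 0.8, registrations: 600 },
    ];
    const now = [
      { demVoteShare: 0.3, registrations: 110 },
      { demVoteShare: 0.6, registrations: 300 },
      { demVoteShare: 0.8, registrations: 350 },
    ];
    console.log(estimateLostSales(pre, now)); // gap between projected and observed sales
    ```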

    Musk himself has acknowledged the backlash. During an April earnings call, he said his DOGE role had led to “blowback” and announced plans to scale back his time with the agency to refocus on Tesla.

    The fallout has, if anything, benefited Tesla’s competitors. The study found that, absent Musk’s partisan behavior, sales of other EV and hybrid models would have been 17 to 22 percent lower over the past three years and 25 percent lower in early 2025, suggesting his actions helped rival automakers.

    Musk’s controversies have also had unintended policy consequences, the researchers noted. In California, which aims for zero-emission vehicles to make up 25 percent of new sales by 2026, 68 percent by 2030, and 100 percent by 2035, progress has stalled. The study estimates that without Musk’s partisan impact, California would have added 139,700 more EV sales in the first quarter of 2025. Instead, the state fell about 28,000 vehicles short that quarter of the pace needed to stay on track.

    “This study highlights just how impactful a CEO’s partisan actions can be,” the authors concluded.

    Alexandra Tremayne-Pengelly

    Source link

  • California Legislation Could Lead to Better Online Privacy Nationwide

    The privacy changes web browsers will be required to make under a new California law could set the de facto standard for the entire country, changing how Americans control their data when using the internet, according to experts.

    Assembly Bill 566, recently signed into law by Gov. Gavin Newsom, requires companies that make web browsers to offer users an opt-out “signal” that automatically tells websites not to share or sell their personal information as they browse.

    It will likely be easier for companies to roll out the service for the entire country, rather than for users only in California.

    “It’s such a trivial implementation,” said Emory Roane, associate director of policy at Privacy Rights Clearinghouse, an organization that pushed for the legislation. “It’s really not that difficult technically.”

    The legislation, a first of its kind in the country, was sponsored by the California Privacy Protection Agency, the state’s consumer privacy watchdog, as well as several consumer advocacy and privacy rights groups.

    Under the law, browsers like Google’s Chrome and Microsoft’s Edge will have until the beginning of 2027 to create a way for consumers to select the signal. Combined with recent changes from other states, the new law could be a tipping point in how web traffic is treated in the United States.

    “We expect it to have a national impact,” Roane said.

    California already offers privacy protections under the California Consumer Privacy Act, including customers’ right to opt out from having their information sold.

    But advocates for the new law point out that this still puts the burden on consumers, who must visit each website individually and opt out from each one. The new tool will effectively automate that process, giving consumers a single toggle to keep their data protected.

    “I would argue if you have to go to every individual website and click the link saying you ‘don’t want your information sold or shared,’ that’s not really a meaningful privacy right,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, another organization that pressed for AB 566.

    Already, some browser makers have voluntarily offered similar settings under a framework called the Global Privacy Control. Mozilla’s Firefox, for example, includes a setting called “tell websites not to sell or share your data.” With that setting on, the browser communicates to sites that the visitor wants the site to respect the user’s preference.

    But until now, browsers haven’t been required to offer a setting that uses the Global Privacy Control or another standard to communicate users’ preferences. “There are browser extensions but those aren’t very widely used,” said Nick Doty, senior technologist at the Center for Democracy and Technology.
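
    For readers curious what that signal looks like in practice, here is a minimal sketch of how a website’s server might honor it, assuming the browser implements the opt-out via the Global Privacy Control proposal’s `Sec-GPC: 1` request header (browsers that support GPC also expose it to page scripts as `navigator.globalPrivacyControl`). The function names are illustrative; AB 566 does not mandate this or any other specific standard.

    ```typescript
    // Hedged sketch of honoring a Global Privacy Control opt-out signal server-side.
    // Helper names are illustrative, not from the bill or any particular browser.

    type RequestHeaders = Record<string, string | undefined>;

    /** Returns true when the request carries a GPC opt-out signal. */
    function hasGpcOptOut(headers: RequestHeaders): boolean {
      // Per the GPC proposal, the only defined "on" value is the string "1".
      return headers["sec-gpc"] === "1";
    }

    /** Decide whether personal data tied to this request may be sold or shared. */
    function maySellOrShareData(headers: RequestHeaders): boolean {
      if (hasGpcOptOut(headers)) {
        // Treat the signal as a do-not-sell/share request, as California's
        // privacy rules already require for the state's residents.
        return false;
      }
      return true; // Fall back to the site's normal consent logic.
    }

    // Example: a request from a browser with the opt-out setting enabled.
    console.log(maySellOrShareData({ "sec-gpc": "1" })); // false
    console.log(maySellOrShareData({}));                 // true
    ```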

    Since it would likely be burdensome for companies to carve out a way to only allow the signal to be used by Californians, according to experts, the tool will likely be available across the country. How, exactly, that will look still remains to be seen. The legislation doesn’t require browser makers to use a specific standard.

    Spokespeople for Google and Microsoft declined to comment on the companies’ plans.

    There’s still a risk that some websites may try to detect which state a visitor is from, and only respect the signal if they find the visitor is from a state that mandates it.

    This is legally risky, though, according to Roane, who points out that AB 566 applies to residents of California, regardless of whether they’re using the web from California.

    “If I’m safe saying I’m a resident and you’re assuming I’m not and you’re flagrantly not respecting my privacy wishes, that is a violation of the law,” Roane said.


    Pushback from Google and the industry

    The law didn’t get across the finish line without friction. As CalMatters reported in September, despite not being publicly against the legislation, Google organized opposition to the bill through a group it backs financially.

    AB 566 also wasn’t the first attempt at such legislation. Newsom vetoed a similar, but slightly more expansive, version of the bill in 2024.

    But now that the door is open, some advocates say they are going to continue to push to further expand privacy preferences.

    Roane notes that legislation could be drafted that requires connected smart devices to offer an opt-out preference, or for vehicles that gather data on drivers to respect opt-out preference requests.

    “We are finally, finally starting to have real privacy rights,” Roane said, “but we’re far away from them being really easy to exercise across the country and across the border and even in states like California where we have these rights.”

    This story was originally published by CalMatters and distributed through a partnership with The Associated Press.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

    Source link

  • San Jose announces AI education program for the public

    San Jose is once again touting its Silicon Valley prominence, with the city offering all residents access to artificial intelligence courses and tools.

    At the GovAI Coalition Summit that began Wednesday in San Jose, Mayor Matt Mahan announced AI for All, a first-of-its-kind collaborative program featuring tech giants Google, OpenAI and Anthropic, as well as the Bay Area Council.

    “The AI revolution is here — and there’s no better place to be a part of it than San Jose,” Mahan said in a news release for the summit. “This first-in-the-nation initiative makes sure everyone from students to seniors can seize the opportunities of this new era and be prepared for the pitfalls. The coming years will determine if AI’s proliferation will drive inequality or opportunity, and we’re not waiting to find out — we’re shaping it for the collective good of humanity.”

    AI for All will consist of a single city portal with free courses, training paths and certifications from leading AI companies, according to the release. The content will come in multiple languages and be accessible to businesses, in schools and at home.

    It will also be available to residents without reliable internet access at local libraries and community centers, the city said.

    A committee made up of representatives from the city, Bay Area Council, the participating companies and community partners will oversee the program’s implementation and accessibility, the city said.

    NBC Bay Area staff

    Source link

  • Epic Games and Google say they’re settling 5-year legal fight over Android app store

    SAN FRANCISCO — Video game maker Epic Games has reached a “comprehensive settlement” with Google that could end its 5-year-old legal crusade targeting Google’s Play Store for Android apps.

    Epic and Google revealed the settlement agreement in a joint legal document they filed in a San Francisco federal court Tuesday.

    They said it “would allow the parties to put their disputes aside while making Android a more vibrant and competitive platform for users and developers.”

    Epic, which makes the hit online game Fortnite, won a victory over the summer when a federal appeals court upheld a jury verdict condemning Google’s Android app store as an illegal monopoly. The unanimous ruling cleared the way for a federal judge to enforce a potentially disruptive shake-up that’s designed to give consumers more choices.

    The specific terms of the settlement agreement remain under seal and must be approved by U.S. District Judge James Donato, but the two companies broadly outlined some of their agreements in their joint filing.

    They said the settlement closely follows Donato’s October 2024 ruling ordering Google to tear down the digital walls shielding its Android app store from competition. That included a provision that will require its app store to distribute rival third-party app stores so consumers can download them to their phones, if they so desire.

    Google had hoped to void those changes with an appeal, but the ruling issued in July by the Ninth Circuit Court of Appeals delivered a legal blow to the tech giant, which has been waylaid in three separate antitrust trials affecting different pillars of its internet empire.

    Epic Games filed lawsuits targeting Google’s Play Store as well as Apple’s iPhone app store in 2020 in an attempt to bypass exclusive payment processing systems that charged 15% to 30% commissions on in-app transactions. The settlement agreement proposed Tuesday calls for Google to limit those payments to between 9% and 20%, depending on the transaction.

    Epic CEO Tim Sweeney called the settlement an “awesome proposal” in a social media post. A hearing is set for Thursday.

    Source link

  • France to Suspend Shein Sales After Finding Childlike Sex Dolls

    The French government moved to temporarily suspend Shein’s website after authorities discovered sex dolls resembling children were being sold on its platform.

    The French finance ministry said Wednesday that it had begun the process to suspend Shein for “the time necessary for the platform to demonstrate” it has scrubbed its site of illegal products.

    Copyright ©2025 Dow Jones & Company, Inc. All Rights Reserved.

    Chelsey Dulaney

    Source link

  • Pasco Schools set to unlock AI for student use on Dec. 1

    PASCO COUNTY, Fla. — As artificial intelligence (AI) tools continue to shape classrooms and workplaces, Pasco County Schools is preparing to embrace the technology, while also setting clear boundaries for its use.

    District teachers are already using Microsoft Copilot, an AI-powered assistant similar to ChatGPT, to help create lesson plans and develop guided tutorials for students.

    Beginning Dec. 1, high school students in the district will gain access to the tool as well.


    What You Need To Know

    • Pasco Schools to unlock AI for use by students in high school on Dec. 1
    • The district is currently drafting guidelines for the use of AI by students 
    • Pasco teachers have been using AI tools for lesson plans and student tutorials 
    • Pasco students will be allowed to use Microsoft Copilot in a limited capacity 


    Copilot functions like an advanced search engine. It can draft essays, answer questions and summarize research materials in seconds — tasks that could otherwise take students hours to complete. With such powerful capabilities, district leaders say they are focused on balancing innovation with responsibility.

    During a recent school board meeting, Superintendent John Legg emphasized that Pasco’s AI guidelines will need to evolve alongside the technology.

    “The one thing that I have heard — and I am not an AI expert — but in working with people who are, is the day we publish this is the day it is obsolete because it is emerging that quickly,” Legg said. “We will be constantly revisiting this, probably for the next few years.”

    The district is planning one more round of revisions to its AI guidelines before officially releasing them to students.

    While Pasco moves forward, other nearby school districts — Hillsborough and Pinellas, for example — are also drafting or refining their own policies around AI. Pasco officials say they’ve reviewed those guidelines closely to ensure consistency across the region.

    So far, there have been no statewide directives in Florida regarding the use of AI in schools. For now, each district is deciding how best to prepare students for a future where AI is part of everyday learning.

    Jason Lanning

    Source link

  • Australia adds Reddit and Kick to social media platforms banning children under 16

    MELBOURNE, Australia (AP) — Australia has added message board Reddit and livestreaming service Kick to its list of social media platforms that must ban children younger than 16 from holding accounts.

    The platforms join Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube in facing a world-first legal obligation to shut the accounts of younger Australian children from Dec. 10, Communications Minister Anika Wells said on Wednesday.

    Platforms that fail to take reasonable steps to exclude children younger than 16 could be punished with a fine of up to 50 million Australian dollars ($33 million).

    “We have met with several of the social media platforms in the past month so that they understand there is no excuse for failure to implement this law,” Wells told reporters in Canberra.

    “Online platforms use technology to target children with chilling control. We are merely asking that they use that same technology to keep children safe online,” Wells added.

    Australia’s eSafety Commissioner Julie Inman Grant, who will enforce the social media ban, said the list of age-restricted platforms would evolve with new technologies.

    The nine platforms currently age-restricted meet the key requirement that their “sole or significant purpose is to enable online social interaction,” a government statement said.

    Inman Grant said she would work with academics to evaluate the impacts of the ban, including whether children sleep or interact more or become more physically active.

    “We’ll also look for unintended consequences and we’ll be gathering evidence” so that others could learn from Australia’s achievements, Inman Grant said.

    Australia’s move is being closely watched by countries that share concerns about social media impacts on young children.

    European Commission President Ursula von der Leyen told a United Nations forum in New York in September that she was “inspired” by Australia’s “common sense” move to legislate the age restriction.

    Critics of the legislation fear that banning young children from social media will impact the privacy of all users, who must establish they are older than 16.

    Wells recently said the government seeks to keep platform users’ data as private as possible.

    More than 140 Australian and international academics with expertise in fields related to technology and child welfare signed an open letter to Prime Minister Anthony Albanese last year opposing a social media age limit as “too blunt an instrument to address risks effectively.”

  • European Union welcomes suspension of China’s rare earth controls

    BRUSSELS (AP) — The European Union has agreed with China on stabilizing the flow of rare earth materials and products from China, which are critical inputs for many high-tech and military products, an official said Tuesday.

    EU trade commissioner Maroš Šefčovič met with Chinese Commerce Minister Wang Wentao in Brussels on Friday to discuss Beijing’s export controls on rare earths issued in April and October, as well as European regulations on semiconductor sales, said Olof Gill, a spokesperson for the European Commission, the 27-nation bloc’s executive arm.

    Like the U.S., Europe runs a huge trade deficit with China — around 300 billion euros ($345 billion) last year. It relies heavily on China for rare earth materials and products, which are used to make magnets for cars and appliances.

    Gill said that the EU welcomed China’s recent 12-month suspension of rare earths export controls, and called for a new and stable system of trade in the critical materials. The EU is working with China on an export licensing system to ensure a more stable flow of rare earth minerals to the bloc, he said.

    “This is an appropriate and responsible step in the context of ensuring stable global trade flows in a critically important area,” Gill said.

    Šefčovič said that Brussels and Beijing were continuing to speak about further trade measures.

    “Both sides reaffirmed commitment to continue engagement on improving the implementation of export control policies,” he said in an X post.

    China is the EU’s second-largest trading partner in goods, after the United States. Bilateral trade is estimated at 2.3 billion euros ($2.7 billion) per day.

    Both China and the EU believe it’s in their interest to keep their trade ties stable for the sake of the global economy, and they share certain climate goals.

  • Stability AI largely wins UK court battle against Getty Images over copyright and trademark

    LONDON (AP) — Artificial intelligence company Stability AI mostly prevailed against Getty Images Tuesday in a British court battle over intellectual property.

    Seattle-based Getty had accused Stability AI of infringing its copyright and trademark by scraping 12 million images from its website, without permission, to train its popular image generator, Stable Diffusion.

    The closely followed case at Britain’s High Court was among the first in a wave of lawsuits involving generative AI as movie studios, authors and artists challenged tech companies’ use of their works to train AI chatbots.

    Tech companies have long argued that “fair use” or “fair dealing” legal doctrines in the United States and United Kingdom allow them to train their AI systems on large troves of writings or images. Tuesday’s ruling provides some clarity but still leaves big unanswered questions over copyright and AI, experts said.

    According to the judge’s written ruling, Getty narrowly won its argument that Stability had infringed its trademark, but lost the rest of its case.

    Both sides claimed victory.

    “This is a significant win for intellectual property owners,” Getty Images said in a statement.

    Shares of Getty dipped 3% before the opening bell in the U.S.

    Stability, based in London, said it was pleased with the ruling.

    “This final ruling ultimately resolves the copyright concerns that were the core issue,” Stability’s General Counsel Christian Dowell said.

    Getty had accused Stability of both primary and secondary copyright infringement.

    Legal experts said the first one involves the act of reproducing something without permission — similar to a dodgy factory churning out counterfeit Chanel handbags or pirated CDs — while the second involves importing those copies from another country.

    In this case, Getty said Stability’s use of its image library to train and develop Stable Diffusion’s AI model amounted to breach of primary copyright. Stability responded that the case doesn’t belong in the United Kingdom because the AI model’s training technically happened elsewhere, on computers run by U.S. tech giant Amazon.

    During the three-week trial in June, Getty dropped its primary copyright allegations, in a sign that it didn’t think they would succeed. But it still pursued the secondary infringement claims. Even if Stability’s AI training happened outside the U.K., Getty said offering the Stable Diffusion service to British users amounted to importing unlawful copies of its images into the country.

    Justice Joanna Smith rejected Getty’s claims, ruling that Stable Diffusion’s AI didn’t infringe copyright because it doesn’t “store or reproduce any Copyright Works (and has never done so).”

    Getty also sued for trademark infringement because its watermark appeared on some of the images produced by Stable Diffusion, Stability’s image generator.

    The judge sided with Getty but added that the case only partially succeeded, and that her findings are “both historic and extremely limited in scope.”

    “While I have found instances of trademark infringement, I have been unable to determine that these were widespread,” she said.

    Experts said Getty’s move to drop part of its copyright case means AI training is still in legal limbo.

    “The decision leaves the U.K. without a meaningful verdict on the lawfulness of an AI model’s process of learning from copyright materials,” said Iain Connor, an intellectual property partner at law firm Michelmores.

    Smith said there was “very real societal importance” in deciding how to strike a balance between the creative and tech industries. But she added that the court can only rule on the “diminished” case that remained and couldn’t consider “issues that have been abandoned.”

    A Getty spokeswoman declined to say whether there would be an appeal.

    Getty is also pursuing a copyright infringement lawsuit in the United States against Stability. It originally sued in 2023 but refiled the case in a San Francisco federal court in August.

    The Getty lawsuits are among a slew of cases that highlight how the generative AI boom is fueling a clash between tech companies and creative industries.

    AI companies are now fighting more than 50 copyright lawsuits — so many that a tech industry lobby group has called on President Donald Trump to help stop the court fights, saying they threaten AI innovation.

    Among the cases, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit by authors, while a federal judge dismissed a similar lawsuit from 13 authors against Meta Platforms. Warner Bros. has sued Midjourney for copyright infringement, as have Disney and Universal in separate lawsuits, alleging that its image generator creates copyrighted characters.

    ___

    AP Technology Writer Matt O’Brien contributed to this report.

  • How China’s Chokehold on Drugs, Chips and More Threatens the U.S.

    BEIJING—China has demonstrated it can weaponize its control over global supply chains by constricting the flow of critical rare-earth minerals. President Trump went to the negotiating table when the lack of Chinese materials threatened American production, and he reached a truce last week with Chinese leader Xi Jinping that both sides say will ease the flow of rare earths.

    But Beijing’s tools go beyond these critical minerals. Three other industries where China has a chokehold—lithium-ion batteries, mature chips and pharmaceutical ingredients—give an idea of what the U.S. would need to do to free itself fully from vulnerability. 

    Copyright ©2025 Dow Jones & Company, Inc. All Rights Reserved.

    Yoko Kubota

  • How Twelve Labs Teaches A.I. to ‘See’ and Transform Video Understanding: Interview

    Soyoung Lee, co-founder and head of GTM at Twelve Labs, pictured at Web Summit Vancouver 2025. Photo by Vaughn Ridley/Web Summit via Sportsfile via Getty Images

    Sure, the score of a football game is important. But sporting events can also foster cultural moments that slip under the radar—such as Travis Kelce making a heart sign to Taylor Swift in the stands. While such footage could be social-media gold, it’s easily missed by traditional content tagging systems. That’s where Twelve Labs comes in.

    “Every sports team or sports league has decades of footage that they’ve captured in-game, around the stadium, about players,” Soyoung Lee, co-founder and head of GTM at Twelve Labs, told Observer. However, these archives are often underutilized due to inconsistent and outdated content management. “To date, most of the processes for tagging content have been manual.”

    Twelve Labs, a San Francisco-based startup specializing in video-understanding A.I., wants to unlock the value of video content by offering models that can search vast archives, generate text summaries and create short-form clips from long-form footage. Its work extends far beyond sports, touching industries from entertainment and advertising to security.

    “Large language models can read and write really well,” said Lee. “But we want to move on to create a world in which A.I. can also see.”

    Is Twelve Labs related to ElevenLabs?

    Founded in 2021, Twelve Labs isn’t to be confused with ElevenLabs, an A.I. startup that specializes in audio. “We started a year earlier,” Lee joked, adding that Twelve Labs—which named itself after the initial size of its founding team—often partners with ElevenLabs for hackathons, including one dubbed “23Labs.”

    The startup’s ambitious vision has drawn interest from deep-pocketed backers. It has raised more than $100 million from investors such as Nvidia, Intel and Firstman Studio, the studio of Squid Game creator Hwang Dong-hyuk. Its advisory bench is equally star-studded, featuring Fei-Fei Li, Jeffrey Katzenberg and Alexandr Wang.

    Twelve Labs counts thousands of developers and hundreds of enterprise customers among its users. Demand is highest in entertainment and media, spanning Hollywood studios, sports leagues, social media influencers and advertising firms that rely on Twelve Labs tools to automate clip generation, assist with scene selection or enable contextual ad placements.

    Government agencies also use the startup’s technology for video search and event retrieval. Beyond its work with the U.S. and other nations, Lee said that Twelve Labs has a deployment in South Korea’s Sejong City to help CCTV operators monitor thousands of camera feeds and locate specific incidents. To reduce security risks, the company has removed capabilities for facial and biometric recognition, she added.

    Will video-native A.I. come for human jobs?

    Many of the industries Twelve Labs serves are already debating whether A.I. threatens human jobs—a concern Lee argues is only partly warranted. “I don’t know if jobs will be lost, per se, but jobs will have to transition,” she said, comparing the shift to how tools like Photoshop reshaped creative roles.

    If anything, Lee believes systems like Twelve Labs’ will democratize creative work traditionally limited to companies with big budgets. “You are now able to do things with less, which means you have more stories that can be created from independent creatives who do not have that same capital,” she said. “It actually allows for the scaling of content creation and personalizing distribution.”

    Twelve Labs is not the only A.I. player eyeing video, but the company insists it serves a different need than its much larger competitors. “We’re excited that video is now starting to get more attention, but the way we’re seeing it is a lot of innovation in large language models, a lot of innovation in video generation models and image generation models like Sora—but not in video understanding,” said Lee, referencing OpenAI’s text-to-video A.I. model and app.

    For now, Twelve Labs offers video search, video analysis and video-to-text capabilities. The company plans to expand into agentic platforms that can not only understand video but also build narratives from it. Such models could be useful beyond creative fields, Lee said, pointing to examples like retailers identifying peak foot-traffic hours or security clients mapping the sequence of events surrounding an accident.

    While A.I. might help a Hollywood director assemble a movie, Lee believes it won’t ever be the director. Even if the technology can provide narrative options, humans still decide which story is most compelling, identify gaps and supply the footage. “At the end of the day, I think there’s nothing that can replace human creative intent.”

    Alexandra Tremayne-Pengelly
