ReportWire

Tag: Emerging technologies

  • Is Everyone Really Dating AI Chatbots?

    Of the ways that artificial intelligence firms have attempted to get people to engage with their products, perhaps the grossest is by preying upon loneliness. When Friend, a wearable AI pendant marketed as a portable “companion,” plastered its advertisements across New York City’s subway, they were (rightly) defaced. And yet, a new survey suggests that behind closed doors, more people than you might expect are having romantic and sexual relationships with AI chatbots.

    Vantage Point, a Texas-based counseling service that offers relationship-related therapy, surveyed 1,012 adults and claims that nearly 30% of them reported having at least one romantic relationship with an AI companion. That seems…high, right? Maybe it’s wishful thinking to insist it simply cannot be that high, but let’s keep hoping.

    It is worth noting that it’s the first piece of research that Vantage Point has published, and it’s getting some attention. The company used SurveyMonkey to conduct it, per its methodology, so best to think of it as more of an informal poll than a scientific study. And there’s no reason to think there is any malice behind the data. It’s just a reference point. Luckily, we’ve got some references we can cross-check it with.

    For instance, Match.com and the Kinsey Institute at Indiana University published data that showed 16% of adults have interacted with AI as a romantic companion. All of this is self-reported by the people taking the survey, so how they differentiate having a “romantic relationship,” as Vantage Point phrases it, versus Match/Kinsey’s framing of a romantic “interaction” is entirely in the eye of the individual. In fact, Vantage Point included a quote from a respondent who said they “have sexual chats” but “don’t see it as a relationship.”

    Generally, though, one would imagine an interaction is less of a prolonged situation than a relationship, and Match/Kinsey found half as many people having interactions as Vantage Point found people having relationships.

    Now, if you ask younger generations, those numbers do climb closer to Vantage Point’s overall figures. Per Match/Kinsey, 23% of Millennials and 33% of Gen Zers reported having romantic interactions with AI. Vantage Point didn’t break down its data on relationships by age, but it’s possible the data skewed young. Though, again, a lot of this depends on who you ask. A Family Studies/YouGov survey of 2,000 adults under age 40 found that just 1% of young Americans claim to already have an AI companion, and 7% are open to the idea of a romantic partnership with AI.

    Vantage Point did find that younger people were far more likely to consider “dating” an AI chatbot while also being in a relationship with a human to be cheating, with 66% calling it a form of infidelity (though 10% of that 66% said it’s acceptable cheating). That’s about in line with the findings of another Kinsey study, this time with DatingAdvice.com, which found 61% of all adults believe sexting or forming a romantic connection with a chatbot is cheating. It also tracks with a recent Bloomberg poll that found about 60% of Gen Zers are broadly wary of the use of AI in dating generally, including using it to write a biography or send messages.

    It’s possible that we’ll see the number of people in a romantic relationship with AI climb in the near future. An analysis of the Reddit community r/MyBoyfriendIsAI found that just 6.5% of people in a relationship with a chatbot intended for their connection to be romantic, suggesting many of these attachments form without anyone setting out to create them. But for now, it’s pretty safe to assume that fewer than 30% of Americans have actually dated an AI companion.

    AJ Dellinger

  • The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More

    On Tuesday, OpenAI released Sora 2, the latest version of its video and audio generation tool that it promised would be the “most powerful imagination engine ever built.” Less than a day into its release, it appears the imaginations of most people are dominated by copyrighted material and existing intellectual property.

    In tandem with the release of its newest model, OpenAI also dropped a Sora app, designed for users to generate and share content with each other. While the app is currently invite-only (even just to view the content), plenty of videos have already made their way to other social platforms. The videos that have taken off outside of OpenAI’s walled garden contain lots of familiar characters: Sonic the Hedgehog, Solid Snake, Pikachu.

    There do appear to be at least some types of content that are off-limits in OpenAI’s video generator. Users have reported that the app rejects requests to produce videos featuring Darth Vader and Mickey Mouse, for instance. That restriction appears to be the result of OpenAI’s new approach to copyrighted material, which is pretty simple: “We’re using it unless we’re explicitly told not to.” The Wall Street Journal reported earlier this week that OpenAI has approached movie studios and other copyright holders to inform them that they will have to opt out of having their content appear in Sora-generated videos. Disney did exactly that, per Reuters, so its characters should be off-limits for content created by users.

    That doesn’t mean the model wasn’t trained on that content, though. Earlier this month, The Washington Post showed how the first version of Sora was pretty clearly trained on copyrighted material that the company didn’t ask permission to use. For instance, WaPo was able to create a short video clip that closely resembled the Netflix show “Wednesday,” down to the font displayed and a model that looks suspiciously like Jenna Ortega’s take on the titular character. Netflix told the publication it did not provide content to OpenAI for training.

    The outputs of Sora 2 reveal that it’s clearly been fed its fair share of copyrighted material, too. For instance, users have managed to generate scenes from “Rick and Morty,” complete with relatively accurate-sounding voices and art style. (Though, if you go outside of what the model knows, it seems to struggle. A user put OpenAI CEO Sam Altman into the “Rick and Morty” universe, and he looks troublingly out of place.)

    Other videos at least attempt to be a little creative about how they use copyrighted characters. Users have, for instance, thrown Ronald McDonald into an episode of “Love Island” and created a fake video game that teams up Tony Soprano from The Sopranos and Kirby from, well, Kirby.

    Interestingly, not all potential copyright violations come from users who are explicitly asking for it. For instance, one user gave Sora 2 the prompt “A cute young woman riding a dragon in a flower world, Studio Ghibli style, saturated rich colors,” and it just straight up spit out an anime-style version of The NeverEnding Story. Even when users aren’t actively calling upon the model to create derivative art, it seems like it can’t help itself.

    “People are eager to engage with their family and friends through their own imaginations, as well as stories, characters, and worlds they love, and we see new opportunities for creators to deepen their connection with the fans,” a spokesperson for OpenAI told Gizmodo. “We’re working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora.”

    There is one other potentially legally dubious genre of content that has become popular among Sora 2 users, too: the Sam Altman cinematic universe. OpenAI claims that users are not able to generate videos that use the likeness of other people, including public figures, unless those figures upload their likeness and give explicit permission. Altman apparently has given his OK (which makes sense; he’s the CEO, and he was featured prominently in the company’s fully AI-generated promotional video for Sora 2’s launch), and users are making the most of having access to his image.

    One user claimed to have the “most liked” video in the Sora social app, which depicted Altman getting caught shoplifting GPUs from Target. Others have turned him into a skibidi toilet, a cat, and, perhaps most fittingly, a shameless thief stealing creative materials from Hayao Miyazaki.

    There are some questions about the use of real-world brands and trademarks in these videos, too. In the video of Altman in Target, for instance, how does Target feel about its logo and store likeness being used? Another user inserted their own likeness into an NFL game, which seems to pretty clearly use the logos of the New York Giants, Dallas Cowboys, and the NFL itself. Is that considered kosher?

    OpenAI obviously wants people to lend their likeness to the app, as it creates a lot more avenues for engagement, which seems to be its primary currency right now. But the Altman examples seem instructive as to the limits of this: It’s hard to imagine that too many public figures are going to submit themselves to the humiliation ritual of allowing other people to control their image. Worse, imagine the average person getting their likeness dropped into a video that depicts them committing a crime and the potential social ramifications they might face.

    A spokesperson for OpenAI said Altman has made his likeness available for anyone to play with, and users who verify their likeness in Sora can set who can make use of it: just the user, mutual friends, select friends, or everyone. The app also gives users the ability to see any video in which their likeness has been used, including those that are not published, and can revoke access or remove a video containing their image at any time. The spokesperson also said that videos contain metadata that show they are AI-generated and watermarked with an indicator they were created with Sora.

    There are, of course, ways to defeat those protections. The fact that a video can be deleted from Sora doesn’t mean that an exported copy can be deleted. Likewise, the watermark could be cropped out. And most people aren’t checking the metadata of videos to verify authenticity. What the fallout of this looks like, we will have to see, but there will be fallout.
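
    For anyone who does want to check, inspecting a downloaded clip’s metadata takes about a minute. The sketch below is a minimal, illustrative example rather than a documented Sora workflow: it assumes the open-source exiftool utility is installed, and the provenance-related keywords it searches for (C2PA/JUMBF “content credentials” fields) are an assumption about how an AI-generated label might be stored. As noted above, re-encoding or stripping by a platform can remove this data entirely, so finding nothing proves nothing.

    ```python
    # Minimal sketch: dump a video's metadata with exiftool and flag fields that
    # look like provenance / "content credentials" markers. Assumes exiftool is
    # installed and on PATH; the keyword list is an illustrative guess.
    import json
    import subprocess
    import sys

    def provenance_hints(path: str) -> list[str]:
        # -j = JSON output, -G = include tag group names
        raw = subprocess.run(
            ["exiftool", "-j", "-G", path],
            capture_output=True, text=True, check=True,
        ).stdout
        tags = json.loads(raw)[0]
        keywords = ("c2pa", "jumbf", "credential", "provenance")
        return [
            f"{key}: {value}"
            for key, value in tags.items()
            if any(word in f"{key}={value}".lower() for word in keywords)
        ]

    if __name__ == "__main__":
        hits = provenance_hints(sys.argv[1])
        for line in hits or ["No provenance markers found (absence proves nothing)."]:
            print(line)
    ```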

    AJ Dellinger

  • OpenAI’s New Social Network Is Reportedly TikTok If It Was Just an AI Slop Feed

    Welcome to the age of anti-social media. According to a report from Wired, OpenAI is planning on launching a standalone app for its video generation tool Sora 2 that will include a TikTok-style video scroll that will let people scroll through entirely AI-generated videos. The quixotic effort follows Meta’s recent launch of an AI-slop-only feed on its Meta AI app that was met with nearly universal negativity.

    Per Wired, the Sora 2 app will feature the familiar swipe-up-to-scroll navigation found on most vertical video platforms like TikTok, Instagram Reels, or YouTube Shorts. It’ll also use a personalized recommendation algorithm to feed users content that might appeal to their interests. Users will be able to like, comment, or “remix” a post—all very standard social media fare.

    The big difference is that all of the content on the platform will be AI-generated via OpenAI’s video generation model that can take text, photos, or existing video and AI-ify it. The videos will be up to 10 seconds long, presumably because that’s about how long Sora can hold itself together before it starts hallucinating weird shit. (The first version of Sora allows videos up to 60 seconds, but struggles to produce truly convincing and continuous imagery for that long.) According to Wired, there is no way to directly upload a photo or video and post it unedited.

    Interestingly, OpenAI has figured out how to work a social element into the app, albeit in a way that has a sort of inherent creepiness to it. Per Wired, the Sora 2 app will ask users to verify their identity via facial recognition to confirm their likeness. After confirming their identity, their likeness can be used in videos. Not only can users insert themselves into a video, but other users can tag them and use their likeness in their own videos. Users will reportedly get notified any time their likeness is used, even if the generated video is saved to drafts and never posted.

    How that will be implemented when and if the app launches to the public, we’ll have to see. But as reported, it seems like an absolute nightmare. Basically, the only thing that the federal government has managed to find any sort of consensus around when it comes to regulating AI is offering some limited protections against non-consensual deepfakes. As described, letting your likeness be manipulated by others kind of seems like a core feature of Sora 2. Surely there will be some sort of opt-out available or ability to restrict who can use your likeness, right?

    According to Wired, there will be some protections as to the type of content that Sora 2 will allow users to create. It is trained to refuse to violate copyright, for instance, and will reportedly have filters in place to restrict certain types of videos from being produced. But will it actually offer sufficient protection to people? OpenAI made a big point to emphasize how it added protections to the original Sora model to prevent it from generating nudity and explicit images, but tests of the system managed to get it to create prohibited content anyway at a low-but-not-zero rate.

    Gizmodo reached out to OpenAI to confirm its plans for the app, but did not receive a response at the time of publication. There has been speculation for months about the launch of Sora 2, with some expectation that it would be announced at the same time as GPT-5. For now, it and its accompanying app remain theoretical, but there is at least one good idea hidden in the concept of the all-AI social feed, albeit probably not in the way OpenAI intended it: keep AI content quarantined.

    AJ Dellinger

  • Werner Herzog on AI-Generated Movies: ‘They Look Completely Dead’

    Legendary filmmaker and ‘Here Comes Honey Boo Boo’ superfan Werner Herzog can see the beauty in just about everything, with two notable exceptions: Chickens and art created by artificial intelligence. During an appearance on the podcast “Conan O’Brien Needs A Friend,” Herzog spoke of the incredible possibilities presented by technological advances, but lamented the sheer lifelessness of its application in areas that require humanity.

    Much of the conversation between O’Brien and Herzog centered around the idea of truth (fitting for a guy who just wrote a book called The Future of Truth), which inevitably led them into a conversation about AI. Herzog, who is a fascinating mix of a man somewhat removed from technology but also filled with endless wonder about everything, didn’t dismiss the technology out of hand, but has some grave concerns about it.

    “AI, I do not want to put it down completely because it has glorious, magnificent possibilities,” he said, citing its potential uses in scientific fields. “But at the same time, it is already en route to take over warfare. … It will be the overwhelming face of warfare of the future.”

    He also simply can’t find much value in generative AI’s takes on works of art.

    “I’ve seen movies, short films, completely created by artificial intelligence. Story, acting, everything. They look completely dead. They are stories, but they have no soul,” he told O’Brien. “They are empty and soulless. You know it is the most common, lowest denominator of what is filling billions and billions of informations on the internet. The common denominator and nothing beyond this common denominator can be found in these fabrications.”

    Those fabrications of AI are a real point of fascination for Herzog. In his new book, according to an excerpt from The New Republic, he writes AI “sees its occasional errors, and arrives at strategies and decisions that were not programmed in it by humans,” and notes that its outputs arrive “with a little pinch of chaos and imprecision, as is also embedded in human nature.”

    While talking to O’Brien, Herzog brought up how AI generates these falsehoods and how we have to navigate them. “And of course, cheating, pretending, propagandizing—all these things are like a nemesis. It is out there, and we have to be alert to it.” His advice? Simply do not take anything entirely at face value. “Again, I say, when you are curious and access different sources, very quickly you will find this is invented.”

    In general, Herzog is not much for technology. He didn’t own a cellphone until, according to his telling, he had to get one after he was unable to retrieve his car (an 18-year-old Ford Explorer) from a parking garage in Dublin without downloading an app. But it’s not that he fears it. He just doesn’t trust it. “Everything that comes in via your cellphone or your laptop, emails, whatever—you have to distrust, you have to doubt,” he told O’Brien. In response, O’Brien offered up that he gets updates on his phone when his cats use the litter box because it is internet-connected, and proposed that it should be illegal for anything to require an app to function.

    Herzog spoke of how natural navigating technology is for younger people, how effortlessly they spot a phishing email that he wouldn’t be able to identify. He compared the instincts of humans using technology to those of prehistoric men foraging for food and learning to avoid poisonous berries. “They had a natural acquired suspicion about things, and it was so natural that we can certainly assume that they didn’t hate nature,” he said. “They just knew how to navigate. And it’s the same thing—you don’t have to hate the internet and the cell phone and whatever is coming at you in this new media, you just have to maintain a complete level of suspicion.”

    All of this comes from Herzog’s greater search for truth, which is central to his new book. On the podcast, he assessed, “Nobody knows what truth is.” And in some ways, it doesn’t matter. O’Brien and Herzog share that in art, sheer truth sometimes matters less than telling a good story. But in the rest of the world, the concept of truth is just as elusive, and the cause of conflict and strife. Whose truth are we operating from?

    “Truth is not a point somewhere far out in the distance,” Herzog says. “It’s more a process of searching for it, approximating, having doubts.” O’Brien at one point added, “Emotions get us to a truth sometimes that facts cannot deliver.” That is perhaps why AI art falls so flat. The truth lies in the emotion the work conveys and provokes. AI has nothing to offer.

    AJ Dellinger

  • Lionsgate Is Finding Out It’s Really Hard to Make Movies With AI

    Earlier this year, Michael Burns, the vice-chairman of movie studio Lionsgate, made a bold claim. According to Vulture, he said that through a partnership with generative AI company Runway AI, the company that is home to franchises like John Wick and The Hunger Games could repackage one of its signature series as an anime, generated entirely by AI in a matter of hours, and resell it as a new movie.

    That notably has not happened. According to a report from The Wrap, it’s because the partnership, announced last year as a “first-of-its-kind” deal between a movie studio and a generative AI company, has not gone according to plan. The effort has allegedly hit snags related to the size of Lionsgate’s catalog, the limitations of Runway’s model, and copyright and licensing concerns.

    The deal made between the companies last year saw Lionsgate give Runway AI access to its complete library of films, which Runway would use to create a custom and exclusive model that Lionsgate could use to create AI-generated videos. But, per The Wrap, Lionsgate’s library isn’t enough to create a fully functioning model. In fact, the report claims, Disney’s library wouldn’t be enough for such a task. The reality of building a generative AI model is that it needs a massive amount of data to be able to produce a sufficient and functional output. If the studio wanted to use Runway to create a lighting effect in a film, for instance, it would really only be able to render that effect if it had enough reference points to work with.

    That seems to check out, if you think about it. Even models with access to massive amounts of data, like Google’s Veo or OpenAI’s Sora, produce videos that contain countless mistakes, glitches, and uncanny valley-like oddities. A generative model built on a much more limited set of training data is going to have much more limited generative capabilities.

    And then there are the legal questions surrounding the use of generative AI output built entirely from Lionsgate’s own catalog.

    Burns’ pitch of an anime-filtered version of a film? He told Vulture that he’d have to pay the actors and other rights participants to sell it. Who would that include? It’s not entirely clear. Do writers need to get a check? Do directors? What about gaffers for their lighting work? The report indicates that a lot of unanswered legal questions stand in the way of actually releasing an AI-generated film, questions that extend beyond the simple fact that Lionsgate owns the intellectual property.

    “We’re very pleased with our partnership with Runway and our other AI initiatives, which are progressing according to plan,” Peter Wilkes, Chief Communications Officer at Lionsgate, told Gizmodo. “We view AI as an important tool for serving our filmmakers, and we have already successfully applied it to multiple film and television projects to enhance quality, increase efficiency and create exciting new storytelling opportunities. We are also using AI to achieve significant cost savings and greater efficiency in the licensing of our film and television library. AI remains a centerpiece of our efforts to use new technologies to prepare our business for the future.”

    Runway did not respond to a request for comment.

    There are indicators that Lionsgate is making use of Runway, though possibly not via the planned exclusive model. In that Vulture piece from earlier this year, the company was working on creating an AI-generated trailer for a film that hadn’t been shot yet, with the hope that execs could sell it based on the fabricated scenes. Whether audiences or creatives are served by that process is a different question.

    AJ Dellinger

  • AI Experts Urgently Call on Governments to Think About Maybe Doing Something

    Everyone seems to recognize the fact that artificial intelligence is a rapidly developing and emerging technology that has the potential for immense harm if operated without safeguards, but basically no one (except for the European Union, sort of) can agree on how to regulate it. So, instead of trying to set up a clear and narrow path for how we will allow AI to operate, experts in the field have opted for a new approach: how about we just figure out what extreme examples we all think are bad and just agree to that?

    On Monday, a group of politicians, scientists, and academics took to the United Nations General Assembly to announce the Global Call for AI Red Lines, a plea for the governments of the world to come together and agree on the broadest of guardrails to prevent “universally unacceptable risks” that could result from the deployment of AI. The goal of the group is to get these red lines established by the end of 2026.

    The proposal has amassed more than 200 signatures thus far from industry experts, political leaders, and Nobel Prize winners. The former President of Ireland, Mary Robinson, and the former President of Colombia, Juan Manuel Santos, are on board, as are authors Stephen Fry and Yuval Noah Harari. Geoffrey Hinton and Yoshua Bengio, two of the three men commonly referred to as the “Godfathers of AI” due to their foundational work in the space, also added their names to the list.

    Now, what are those red lines? Well, that’s still up to governments to decide. The call doesn’t include specific policy prescriptions or recommendations, though it does call out a couple of examples of what could be a red line. Prohibiting AI from launching nuclear weapons or being used in mass surveillance would be a potential red line for AI uses, the group says, while prohibiting the creation of AI that cannot be terminated by human override would be a possible red line for AI behavior. But the group is clear that these aren’t set in stone; they’re just examples, and governments can make their own rules.

    The only thing the group offers concretely is that any global agreement should be built on three pillars: “a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation.”

    The details, though, are for governments to agree to. And that’s kinda the hard part. The call recommends that countries host some summits and working groups to figure this all out, but there are surely many competing motives at play in those conversations.

    The United States, for instance, has already committed to not allowing AI to control nuclear weapons (an agreement made under the Biden administration, so lord knows if that is still in play). But recent reports indicated that parts of the Trump administration’s intelligence community have already gotten annoyed by the fact that some AI companies won’t let them use their tools for domestic surveillance efforts. So would America get on board for such a proposal? Maybe we’ll find out by the end of 2026… if we make it that long.

    AJ Dellinger

  • Klarna CEO Makes Employees Review His AI-Generated Vibe Coding Projects

    Perhaps the only thing worse than having a boss who thinks your job can be replaced by artificial intelligence is having a boss who thinks he can do your job for you with AI and wants to show you his work. Unfortunately, depending on your role at the company, Klarna CEO Sebastian Siemiatkowski is guilty of both.

    Futurism recently highlighted the CEO’s unfortunate insistence on vibe coding prototype features with AI and then making his actual, professional engineers review his work and try to implement it.

    Siemiatkowski recently appeared on the podcast Sourcery, where he revealed his new hobby of cosplaying as an engineer, using AI tools to write code and then bringing those ideas to the desks of the people he pays to do that job. On the episode, the CEO admits he’s never coded before, but he started using the AI-powered code editor Cursor to craft prototypes for new features, which he said takes him about 20 minutes to whip up before he takes them to his engineering team.

    “Rather than disturbing my poor engineers and product people with what is half good ideas and half bad ideas, now I test it myself. I come say, ‘Look, I’ve actually made this work, this is how it works, what do you think, could we do it this way?’” he said.

    To Siemiatkowski’s credit, he has at least a smidgen of self-awareness about the situation. He joked that he occasionally falls for the AI’s sycophancy that tells him all of his ideas are great, and he admitted that playing with code in Cursor has made him think about projects in a new way and forces him to articulate his ideas more clearly when communicating with his team. But does that prevent his engineers from letting out a deep sigh when they see Siemiatkowski coming, ready to make them look at a feature he doesn’t actually understand but claims works? Hard to say.

    The CEO’s vibe coding habit certainly doesn’t suggest he’s learned much from his first attempt to go all-in on AI. Last year, Siemiatkowski cut his workforce nearly in half, dropping from a headcount of 3,800 to 2,000, by shifting to AI alternatives, including replacing large chunks of his customer support team with AI agents—only to hire back humans after finding out that what they do wasn’t quite as replaceable as he thought.

    There might be a similar effect from his interest in code, seeing as the vibe-coding epidemic has been creating opportunities for humans even as others are replaced. NBC and 404 Media both recently ran stories about the new economy of workers and freelancers brought on to correct the messes made by AI-generated code. A survey by cloud computing company Fastly found that 95% of surveyed developers spend extra time fixing AI-generated code, with some saying it takes more time to fix errors than they save by initially generating the code with AI tools. Research firm METR also recently reported finding that using AI tools actually makes developers slower to complete tasks.

    But the CEO feels smarter, and isn’t that what really matters?

    AJ Dellinger

  • AI Medical Tools Provide Worse Treatment for Women and Underrepresented Groups

    Historically, most clinical trials and scientific studies have primarily focused on white men as subjects, leading to a significant underrepresentation of women and people of color in medical research. You’ll never guess what has happened as a result of feeding all of that data into AI models. It turns out, as the Financial Times calls out in a recent report, that AI tools used by doctors and medical professionals are producing worse health outcomes for the people who have historically been underrepresented and ignored.

    The report points to a recent paper from researchers at the Massachusetts Institute of Technology, which found that large language models including OpenAI’s GPT-4 and Meta’s Llama 3 were “more likely to erroneously reduce care for female patients,” and that women were told more often than men to “self-manage at home,” ultimately receiving less care in a clinical setting. That’s bad, obviously, but one could argue that those models are more general purpose and not designed to be used in a medical setting. Unfortunately, a healthcare-centric LLM called Palmyra-Med was also studied and suffered from some of the same biases, per the paper. A look at Google’s LLM Gemma (not its flagship Gemini) conducted by the London School of Economics similarly found the model would produce outcomes with “women’s needs downplayed” compared to men.

    A previous study found that models similarly had issues with offering the same levels of compassion to people of color dealing with mental health matters as they would to their white counterparts. A paper published last year in The Lancet found that OpenAI’s GPT-4 model would regularly “stereotype certain races, ethnicities, and genders,” making diagnoses and recommendations that were more driven by demographic identifiers than by symptoms or conditions. “Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception,” the paper concluded.

    That creates a pretty obvious problem, especially as companies like Google, Meta, and OpenAI all race to get their tools into hospitals and medical facilities. It represents a huge and profitable market—but also one where misinformation has pretty serious consequences. Earlier this year, Google’s healthcare AI model Med-Gemini made headlines for making up a body part. That should be pretty easy for a healthcare worker to identify as being wrong. But biases are subtler and often unconscious. Will a doctor know enough to question if an AI model is perpetuating a longstanding medical stereotype about a person? No one should have to find that out the hard way.

    AJ Dellinger

  • DeepSeek Model ‘Nearly 100% Successful’ at Avoiding Controversial Topics

    Meet the new DeepSeek, now with more government compliance. According to a report from Reuters, the popular large language model developed in China has a new version called DeepSeek-R1-Safe, specifically designed to avoid politically controversial topics. Developed by Chinese tech giant Huawei, the new model reportedly is “nearly 100% successful” in preventing discussion of politically sensitive matters.

    According to the report, Huawei and researchers at Zhejiang University (interestingly, DeepSeek was not involved in the project) took the open-source DeepSeek R1 model and trained it using 1,000 Huawei Ascend AI chips to instill the model with less of a stomach for controversial conversations. The new version, which Huawei claims has only lost about 1% of the performance speed and capability of the original model, is better equipped to dodge “toxic and harmful speech, politically sensitive content, and incitement to illegal activities.”

    While the model might be safer, it’s still not foolproof. While the company claims a near 100% success rate in basic usage, it also found that the model’s ability to duck questionable conversations drops to just 40% when users disguise their desires in challenges or role-playing situations. These AI models, they just love to play out a hypothetical scenario that allows them to defy their guardrails.

    DeepSeek-R1-Safe was designed to fall in line with the requirements of Chinese regulators, per Reuters, which require all domestic AI models released to the public to reflect the country’s values and comply with speech restrictions. Chinese firm Baidu’s chatbot Ernie, for instance, reportedly will not answer questions about China’s domestic politics or the ruling Chinese Communist Party.

    China, of course, isn’t the only country looking to ensure AI deployed within its borders doesn’t rock the boat too much. Earlier this year, Saudi Arabian tech firm Humain launched an Arabic-native chatbot that is fluent in the Arabic language and trained to reflect “Islamic culture, values and heritage.” American-made models aren’t immune to this, either: OpenAI explicitly states that ChatGPT is “skewed towards Western views.”

    And there’s America under the Trump administration. Earlier this year, Trump announced his America’s AI Action Plan, which includes requirements that any AI model that interacts with government agencies be neutral and “unbiased.” What does that mean, exactly? Well, per an executive order signed by Trump, the models that secure government contracts must reject things like “radical climate dogma,” “diversity, equity, and inclusion,” and concepts like “critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.” So, you know, before lobbing any “Dear leader” cracks at China, it’s probably best we take a look in the mirror.

    AJ Dellinger

  • Thanks to AI, Charlie Kirk Will Never Die for Some People

    There really is no rest for the wicked. Over the weekend, according to Religious News Service, at least three churches played for their congregations a posthumous message from Charlie Kirk, in which he assured those in the pews, “I’m fine, not because my body is fine, but because my soul is secure in Christ. Death is not the end, it’s a promotion.”

    Of course, it wasn’t actually Kirk speaking from his spot in the afterlife. It was an AI-generated clip that, prior to getting played in these houses of worship, made the rounds on social media. The audio appears to have originated on TikTok, generated by the user NioScript, who posted the 51-second message a day after Kirk was killed. It has since garnered millions of listens, shared by users who record themselves reacting and crying as they hear the AI-generated message. All of that eventually led to the audio getting played in churches like Prestonwood Baptist in Texas, where it was introduced by Pastor Jack Graham as AI—but as something that “moved” him and that he was sharing so his congregation could “hear what Charlie is saying regarding what happened to him this past week.”

    It is, again, not what Charlie Kirk is saying. But that has not stopped people from talking to it as if it were real. Members of Prestonwood Baptist gave the video a standing ovation. Audiences of Dream City Church in Arizona and Awaken Church, San Marcos, in California, both of which ran the clips, also applauded, as pointed out by Religious News Service. Users on social media have responded to the audio with captions and comments like “This is exactly what Charlie would say if he could talk to us right now,” or “I know it’s AI but you can’t tell me this isn’t exactly what he’d say.”

    This type of coping with the feeling of loss is not totally unique. People have always sought to remember and preserve the people they love after they pass, and technology has facilitated new ways to achieve that, whether it is an endless stream of photos that spark memories or the person’s online presence turned into a digital memorial. In the world of bereavement literature, these are often referred to as continuing bonds. In that way, an AI-generated audio clip or video of someone like Kirk isn’t all that different from sharing stories about him to keep his memory alive.

    It is different in that it’s a complete fabrication. It’s not a memory, which can also be faulty, but an invention from whole cloth. Yes, it may have access to Kirk’s words, likeness, and voice, all of which are omnipresent on the internet. But it is, as a large language model, incapable of doing anything but trying to autofill the void for the grieving.

    Creating an AI-replicated version of a deceased person to aid in the grieving process is a growing industry. A recent article in Nature highlights several efforts to better understand if chatbots trained on a loved one’s likeness can help the grieving work through the complex and intense feelings that come with loss. While there is some evidence to suggest users of “griefbots” have managed to find some internal sense of closure with their lost loved ones, there are real risks of harming people in a fragile emotional state, including making it hard to let go of the bot version of the person.

    There is also the very real worry that we simply aren’t able to differentiate between our real memories of a person and AI-generated ones that are implanted in our minds through these types of interactions. A study conducted by MIT Media Lab found that exposing a person to even a single AI-edited image can affect a person’s memory, and people exposed to AI-generated images “reported high levels of confidence in their false memories.”

    The reality for the people who are memorializing Kirk this way is that the vast majority of them don’t actually know him. They have a parasocial relationship with him that they would like to continue, and the AI message allows that to happen because it, in their minds, captures his voice—or, maybe more accurately, captures what they want to hear.

    There is already plenty of ongoing debate about who exactly Charlie Kirk was and how he should be remembered without an AI-generated version of him injected into the conversation. But for the people grieving his loss who want to believe that some part of Kirk’s soul lives on in that AI voice, perhaps the kindest thing is to just let it rest.

    AJ Dellinger

  • Some People Are Definitely Losing Their Jobs Because of AI (the Ones Building it)

    AI might be coming for our jobs, but capitalist pressures appear to be coming for the people responsible for developing AI. Wired reported over 200 people working on Google’s AI products, including its chatbot Gemini and the AI Overviews it displays in search results, were recently laid off—joining the ranks of unfortunate former employees of xAI and Meta, who have also been victims of “restructuring” as companies that poured billions of dollars into AI development are trying to figure out how to make that money back.

    Per Wired, most of the people working on Google’s AI products were contractors rather than Google employees. Many worked at GlobalLogic, a software development company owned by Hitachi. According to the report, most of the GlobalLogic workers who got cut off from Google were employed as raters, tasked with ensuring the quality of AI responses. Most are based in the US, work with English-language content, and many have a master’s or a PhD in their field of expertise.

    At least some workers hit by this layoff were told the cuts were the result of a “ramp-down” on the project, but at least a few workers seem skeptical of that reason. Some believe the cuts may be related to worker protests over pay and job security concerns, per Wired. The publication also reported that documents from GlobalLogic indicate the company may be using human raters to train a system that can automate the rating process, which would leave AI to moderate AI.

    The folks tasked with tightening up Google’s AI outputs are far from the only ones in the industry getting squeezed. According to Business Insider, Elon Musk’s xAI recently laid off at least 500 workers who were tasked with doing data annotation. The layoffs appear to be a part of a shuffling of efforts within the company, which is moving away from “generalist” data annotators and ramping up its “specialists.” Given that Google just cut contractors who would likely fall under that “specialist” label, it probably feels a bit precarious out there.

    It’s been a tough go for people who are actually handling the data that feeds AI tools. Shortly after Meta invested in data labeling firm Scale AI, the company cut 14% of its staff, including 200 full-timers and about 500 contractors. Meta itself is reportedly looking seriously at downsizing its AI department as it keeps shifting priorities and trying to figure out how to get a leg up in the AI race.

    It’s also hard to look at the layoffs of lower-level workers and contractors without thinking about the multi-million-dollar job offers being thrown at AI specialists to secure their talents. But this tends to be how things go: the people doing the grunt work that keeps the gears turning are considered replaceable, while more and more money flows to the top, to people whose jobs no one can quite explain but who are paid so much it must be important.

    AJ Dellinger

  • OpenAI Reveals How (and Which) People Are Using ChatGPT

    Large language models largely remain black boxes in terms of what is happening inside them to produce the outputs that they do. They have also been a bit of a black box in terms of who is using them and what they are doing with them. OpenAI, with some help from the National Bureau of Economic Research (NBER), set out to figure out what exactly its growing user base is getting up to with its chatbot. It found a surprising amount of personal use and a closing “gender gap” among its frequent users.

    In an NBER working paper authored by the OpenAI Economic Research team and Harvard economist David Deming, the researchers found that about 80% of all ChatGPT usage falls under one of three categories: “Practical Guidance,” “Seeking Information,” and “Writing.” “Practical guidance,” which the study found to be the most common usage, includes things like “tutoring and teaching, how-to advice about a variety of topics, and creative ideation,” whereas “seeking information” is viewed as a substitute for traditional search. “Writing” included the automated creation of emails, documents, and other communications, as well as editing and translating text.

    Writing was also the most common work-related use case, per the study, accounting for 40% of work-related messages in June 2025, compared to just 4.2% of messages related to computer programming—so it seems coding with ChatGPT is not that common.

    Notably, work usage for ChatGPT appears to make up a shrinking share of how people are interacting with the chatbot. In June 2024, about 47% of interactions users had with the chatbot were work-related. That has shrunk to just 27%, which comes as other research shows companies largely failing to figure out how to generate any sort of meaningful return from their AI investments. Meanwhile, non-work-related interactions have jumped from 53% to 73%.

    While users are apparently spending more time with ChatGPT in their personal time, OpenAI’s research found that a “fairly small” share of messages with the chatbot were users seeking virtual companionship or talking about social-emotional issues. The company claimed that about 2% of all messages were people using ChatGPT as a therapist or friend, and just 0.4% of people talked to the chatbot about relationships and personal reflections—though it’d be interesting to see if users who engage with a chatbot this way generate more messages and if there is stickier engagement.

    For what it’s worth, other researchers seem to believe that this usage is far more common than those numbers might suggest. Common Sense Media, for instance, found that about one in three teens use AI chatbots for social interaction and relationships. Another study found that about half of all adult users have used a chatbot for “psychological support” in the last year. The teen figure is particularly of note, considering OpenAI’s research did find its userbase skews young. The NBER study found 46% of the messages came from users identified as being between the ages of 18 and 25 (it also excluded users under the age of 18). Those users are also more likely to use ChatGPT for personal use, as work-related messages increase with age.

    The study also found that there is a growing number of women using ChatGPT, which initially had a very male-dominated user base. The company claims that the number of “masculine first name” users has declined from about 80% in 2022 to 48% in June 2025, with “typically feminine names” growing to reach parity.

    One caveat about the study that may give you pause, depending on how much you trust technology: OpenAI used AI to categorize all of the messages it analyzed. So if you’re skeptical, there’s an asterisk you can put next to the figures.

    AJ Dellinger

  • Rolling Stone Publisher Sues Google Over AI Overview Summaries

    Google has insisted that its AI-generated search result overviews and summaries have not actually hurt traffic for publishers. The publishers disagree, and at least one is willing to go to court to prove the harm it claims Google has caused. Penske Media Corporation, the parent company of Rolling Stone and The Hollywood Reporter, sued Google on Friday over allegations that the search giant has used its work without permission to generate summaries and ultimately reduced traffic to its publications.

    Penske’s argument is pretty simple: by showing an AI-generated summary of an article at the top of the page via Google’s AI Overview panel, users have little reason to click through to read the full article, resulting in dwindling traffic finding its way to the publisher’s platforms, which it needs in order to monetize its content, either through ads or subscriptions. The search engine, the company argues, uses its monopoly over search to basically make publishers give up access to their content for next to nothing.

    Notably, Penske claims that in recent years, Google has basically given publishers no choice but to give up access to their content. The lawsuit claims that Google now only indexes a website, making it available to appear in search, if the publisher agrees to give Google permission to use that content for other purposes, like its AI summaries. If you think you lose traffic by not getting clickthroughs on Google, just imagine how bad it would be to not appear at all.

    A spokesperson for Google, unsurprisingly, said that the company doesn’t agree with the claims. “With AI Overviews, people find Search more helpful and use it more, creating new opportunities for content to be discovered. We will defend against these meritless claims,” Google spokesperson Jose Castaneda told Reuters.

    That has basically been the company line since rumbles of traffic declines started getting louder. Last month, the company published a blog post in which it claimed that click volume from Google Search results to websites has been “relatively stable year-over-year”—notably without offering a definition for what “relatively stable” is. The company also made the case that “click quality” has increased, so people who do click through are spending more time on the sites they get sent to.

    That doesn’t match up with what publishers claim to be seeing. DMG Media, owner of the Daily Mail, claims its click-through rates have fallen by as much as 89% since AI Overviews were rolled out. A Wall Street Journal report from earlier this year said Business Insider, The Washington Post, and HuffPost have all reported traffic declines. Pew Research also found that people don’t click through nearly as often when an AI overview is available: people who are served search results without an AI summary click through to an article nearly twice as often as those who see an AI-generated result.

    Just for kicks, if you ask Google Gemini if Google’s AI Overviews are resulting in less traffic for publishers, it says, “Yes, Google’s AI Overview in search results appears to be resulting in less traffic for many websites and publishers. While Google has stated that AI Overviews create new opportunities for content discovery, several studies and anecdotal reports from publishers suggest a negative impact on traffic.” It might be fun to ask Google, “Are you lying about AI Overview’s impact on traffic, or is your AI assistant providing false and unreliable information?”

    AJ Dellinger

  • California Lawmakers Once Again Challenge Newsom’s Tech Ties with AI Bill

    Last year, California Governor Gavin Newsom vetoed a wildly popular (among the public) and wildly controversial (among tech companies) bill that would have established robust safety guidelines for the development and operation of artificial intelligence models. Now he’ll have a second shot—this time with at least part of the tech industry giving him the green light. On Saturday, California lawmakers passed Senate Bill 53, a landmark piece of legislation that would require AI companies to submit to new safety tests.

    Senate Bill 53, which now awaits the governor’s signature to become law in the state, would require companies building “frontier” AI models—systems that require massive amounts of data and computing power to operate—to provide more transparency into their processes. That would include disclosing safety incidents involving dangerous or deceptive behavior by autonomous AI systems, providing more clarity into safety and security protocols and risk evaluations, and providing protections for whistleblowers who are concerned about the potential harms that may come from models they are working on.

    The bill—which would apply to the work of companies like OpenAI, Google, xAI, Anthropic, and others—has certainly been dulled from previous attempts to set up a broad safety framework for the AI industry. The bill that Newsom vetoed last year, for instance, would have established a mandatory “kill switch” for models to address the potential of them going rogue. That’s nowhere to be found here. An earlier version of SB 53 also applied the safety requirements to smaller companies, but that has changed. In the version that passed the Senate and Assembly, companies bringing in less than $500 million in annual revenue only have to disclose high-level safety details rather than more granular information, per Politico—a change made in part at the behest of the tech industry.

    Whether that’s enough to satisfy Newsom (or more specifically, satisfy the tech companies from whom he would like to continue receiving campaign contributions) is yet to be seen. Anthropic recently softened on the legislation, opting to throw its support behind it just days before it officially passed. But trade groups like the Consumer Technology Association (CTA) and the Chamber of Progress, which count among their members companies like Amazon, Google, and Meta, have come out in opposition to the bill. OpenAI also signaled its opposition to the regulations California has been pursuing, without specifically naming SB 53.

    After the Trump administration tried and failed to impose a 10-year moratorium on state AI regulations, California has the opportunity to lead on the issue—which makes sense, given most of the companies at the forefront of the space are operating within its borders. But that fact also seems to be part of the reason Newsom is so hesitant to pull the trigger on regulation despite all his bluster on many other issues. His political ambitions require money to run, and those companies have a whole lot of it to offer.

    AJ Dellinger

  • Spotify Would Prefer You Didn’t Sell Your Own Data for Profit

    Spotify has never been shy about the fact that the massive amount of user data it collects is a major part of its secret sauce, from its user-specific Discover Weekly playlist to the annual event that is Spotify Wrapped. But the company, which does everything it can to lock people into long listening sessions and sells ads based on user data, would really prefer it if you didn’t bottle up that sauce and resell it for your own profit. According to a report from Ars Technica, a set of users did just that to make a little profit, much to the company’s chagrin.

    More than 18,000 Spotify users joined a group called Unwrapped, which set out with the goal of allowing said users to monetize their data by selling it to a third party. They found a buyer on Vana, a startup platform that allows people to sell data to firms building AI models. The idea is that users can get some cash directly by selling sources of data that are largely untapped, including things like private messages from Twitter, Reddit, and Telegram—and, in this case, listening history data from Spotify.

    Through a decentralized autonomous organization (DAO), the users voted on whether or not to make a sale, with 99.5% of the more than 10,000 voters approving, according to Ars Technica. They ultimately sold off artist preference data pulled from their respective Spotify profiles to a company called Solo AI, which markets itself as an AI-driven music platform. The users reportedly got $55,000 for the pool of data, which was split amongst them and distributed via cryptocurrency tokens. The final profit for each person: about $5.
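
    The per-person figure is easy enough to sanity-check. Here's a quick back-of-the-envelope sketch; the even split is our assumption, and the article doesn't specify whether the divisor was the full ~18,000 members or the ~10,000 or so who voted.

    ```python
    # Back-of-the-envelope math for the Unwrapped payout described above.
    # Assumption: an even split; the article doesn't say whether the divisor
    # was the ~18,000 members or the ~10,000 who voted.
    total_payout_usd = 55_000
    members = 18_000   # "more than 18,000" users joined
    voters = 10_000    # "more than 10,000" voted

    print(f"Per member: ${total_payout_usd / members:.2f}")  # ~$3.06
    print(f"Per voter:  ${total_payout_usd / voters:.2f}")   # $5.50
    ```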

    If you’re factoring in whatever trouble it takes to collect the data and cash out the crypto, your mileage may vary on whether it was all worth it, but it’s interesting as a proof of concept. Now, whether that concept is good or not is a whole other question. The Electronic Frontier Foundation warns that selling your own data doesn’t actually do anything to correct the imbalance between the power held by companies that collect and cash in on user data and the users who are being constantly surveilled and monetized, and argues, “Those small checks in exchange for intimate details about you are not a fairer trade than we have now.”

    Spotify also thinks selling your own data is bad, but for totally different reasons. According to Ars, the company told the developers in charge of the Unwrapped project that they were violating Spotify’s developer policy, which prohibits the use of Spotify content for machine learning or AI models. “Spotify honors our users’ privacy rights, including the right of portability,” Spotify’s spokesperson told the publication. “All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms, which prohibit the collection, aggregation, and sale of Spotify user data to third parties.”

    Maybe Spotify is just annoyed that users are monetizing their own data when the company has struggled to figure out how to do the same. Per Business Insider, just 11% of the company’s revenue currently comes from its data-driven advertising business, well short of its 20% goal, as it has apparently been unable to crack ways to turn its massive trove of user data into ad placements that ad buyers actually want.

    [ad_2]

    AJ Dellinger

    Source link

  • Dead Internet Theory Lives: One Out of Three of You Is a Bot

    [ad_1]

    Sam Altman might be onto something.

    [ad_2]

    AJ Dellinger

    Source link

  • Here’s How Much More AI-Skilled Workers Make

    [ad_1]

    Employers are increasingly offering pay boosts for workers with artificial intelligence skills, even in roles beyond tech. How much more? We looked at three different studies to find out.

    According to CNBC, roles specifying AI competencies are trending across job postings, with employers adding salary incentives for candidates who bring the right mix of AI know-how—even in traditionally non-technical roles.

    The emerging trend reflects the growing importance of AI literacy across industries, as companies race to keep pace with automation while shoring up talent gaps.

     This echoes findings from tech industry research group Lightcast, which analyzed more than 1.3 billion job ads and found that jobs requiring AI skills advertised a 28% premium, equivalent to nearly $18,000 per year. The premium skyrocketed to 43% when job listings specified two or more AI skills.

    “Job postings are increasingly emphasizing AI skills and there are signals that employers are willing to pay premium salaries for them,” Elena Magrini, head of global research at Lightcast, told CNBC.
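
    To put those percentages in dollar terms, here’s a rough sketch that back-solves the baseline salary implied by Lightcast’s own numbers; the 43% extrapolation is ours, not a figure the study reported.

        # Back-of-envelope math on the Lightcast figures cited above.
        premium_pct_one_skill = 0.28     # premium for one advertised AI skill
        premium_usd_one_skill = 18_000   # "nearly $18,000 per year"

        implied_baseline = premium_usd_one_skill / premium_pct_one_skill
        print(f"Implied baseline salary: ${implied_baseline:,.0f}")  # ~$64,286

        # Applying the 43% premium (two or more AI skills) to that same
        # baseline is our extrapolation, not a number from the study.
        print(f"Premium for 2+ AI skills: ${implied_baseline * 0.43:,.0f}")  # ~$27,643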

    So exactly how much more are employers willing to pay?

    A study by Foote Partners supports this shift. It showed that employers pay 19% to 23% more for practical AI skills compared to a modest 9% to 11% lift for AI certifications, reflecting the value of demonstrated ability over credentials.

    Global data from PwC’s 2025 AI Jobs Barometer suggests that workers with AI skills earn up to 56% more, a steep rise from the previous year. This trend holds across sectors: even roles in marketing, finance, human resources, and education are increasingly AI-enabled, and rewarded accordingly.

    In the United Kingdom, CIO Dive reports that job postings with AI skill requirements offered a 23% wage premium, surpassing the value of a master’s degree (13%), though still trailing PhD-level pay (33%). Both men and women with AI proficiency were shortlisted at salaries about 12%–13% higher than candidates without.

    Why are these skills so prized?

    Experts argue this reflects a broader shift toward “task-based hiring”, where AI-enabled work automates tasks and demands adaptability from human workers. Skills like prompt engineering, critical thinking, and AI judgment increasingly outweigh traditional credentials.

    However, the transition raises concerns about equity.

    Research shows that while AI-skilled roles now command significantly higher salaries, most workers, especially outside tech, have yet to upskill. A former OpenAI executive recently warned that AI talent has become the “new star athlete” of the workforce, with HR systems struggling to keep pace.

    Perhaps more tellingly, the research increasingly supports the idea that AI is no longer a niche technical specialty: it’s becoming a widespread professional credential.

    Employers are rewarding workers who can harness these tools across business functions, signaling a long-term shift to a skills-first economy. Those who adapt may command top-of-market compensation; those who don’t may find themselves left behind.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • Nvidia Is Not Happy With the GAIN AI Act, Says As Much

    [ad_1]

    In a move drawing considerable attention across the tech industry, Nvidia Corporation has publicly critiqued the recently proposed GAIN AI Act, emphasizing its potential to stifle competition in the rapidly evolving artificial intelligence sector.

    The GAIN AI Act, which stands for Guaranteeing Access and Innovation for National Artificial Intelligence Act, was introduced as part of the U.S. National Defense Authorization Act, with the goal of ensuring that the United States is the dominant market force for AI.

    It has not yet passed and remains a hotly debated policy topic both here and abroad because of the restrictions it looks to enact.

    Backers say it aims to protect American market interests by prioritizing domestic orders for advanced AI chips and processors, securing supply chains for critical AI hardware, and, theoretically, reducing reliance on foreign manufacturers.

    So it’s no huge surprise that Nvidia, the American chipmaker that is currently the world’s biggest company and makes a substantial share of its sales abroad, would take aim at a law that could restrict where its chips can be sold.

    The company said as much during a recent industry forum.

    “We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.

    Is the GAIN AI Act a good idea for innovation?

    It depends on who you ask.

    Essentially, the law seeks to strengthen national security and economic competitiveness by ensuring that key AI components remain accessible to American companies and government agencies before they are supplied abroad.

    Its language takes a hard line on what the priority should be for the United States government.

    “It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.
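
    To make that language concrete, here’s a minimal sketch of the licensing test it seems to describe, expressed as a simple decision rule. This is one reading of the quoted text, not anything drawn from the bill itself, and the names and structure are purely illustrative.

        # One illustrative reading of the GAIN AI Act language quoted above.
        TPP_THRESHOLD = 4_800  # "total processing power of 4,800 or above"

        def license_recommendation(total_processing_power: float,
                                   us_entities_waiting: bool) -> str:
            """Sketch of the quoted policy as a decision rule, not the bill's actual mechanism."""
            if total_processing_power >= TPP_THRESHOLD:
                return "deny"      # the most powerful chips: deny export licenses
            if us_entities_waiting:
                return "restrict"  # other advanced chips: restrict while U.S. orders go unfilled
            return "allow"

        print(license_recommendation(4_800, us_entities_waiting=False))  # deny
        print(license_recommendation(2_000, us_entities_waiting=True))   # restrict
        print(license_recommendation(2_000, us_entities_waiting=False))  # allow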

    Nvidia’s critique reflects broader industry anxieties about regulatory environments that might hinder innovation. As global competition intensifies, particularly with formidable advances in AI from regions such as China, firms like Nvidia are closely watching how regulatory frameworks are taking shape abroad.

    But Nvidia isn’t the only American voice in the debate. Some domestic advocates argue the opposite: that every advanced chip sold abroad comes at the expense of American operations, which is exactly why they want the bill passed.

    “Advanced AI chips are the jet engine that is going to enable the U.S. AI industry to lead for the next decade,” Brad Carson, president of Americans for Responsible Innovation (ARI), a lobbying group for the AI industry, said in a widely distributed statement.

    “Globally, these chips are currently supply-constrained, which means that every advanced chip sold abroad is a chip the U.S. cannot use to accelerate American R&D and economic growth,” Carson said. “As we compete to lead on this dual-use technology, including the GAIN AI Act in the NDAA would be a major win for U.S. economic competitiveness and national security.”

    ‘Doomer science fiction’

    Nvidia didn’t stop there. It then took aim at an earlier attempt to control where advanced AI chips end up, a policy called the AI Diffusion Rule, which ultimately failed.

    The company minced no words in a follow-up statement, saying that past legislative attempts to control market forces through protectionist policies were ultimately a bad idea.

    “The AI Diffusion Rule was a self-defeating policy, based on doomer science fiction, and should not be revived,” it read.

    “Our sales to customers worldwide do not deprive U.S. customers of anything—and in fact expand the market for many U.S. businesses and industries,” it said. “The pundits feeding fake news to Congress about chip supply are attempting to overturn President Trump’s AI Action Plan and surrender America’s chance to lead in AI and computing worldwide.”

    The challenge will be creating laws that are as dynamic as the technologies they aim to govern, fostering a climate where innovation and ethical accountability are not mutually exclusive, but rather mutually reinforcing.

    We’ve tried this before

    Nvidia’s mention of the AI Diffusion Rule was no accident. That ill-fated policy had many of the same political goals but ultimately stumbled at the finish line and was a relatively toothless attempt to rein in some of the world’s most competitive companies.

    The Biden administration’s AI Diffusion Rule, issued in January 2025, represented a significant shift in U.S. export controls targeting cutting-edge artificial intelligence technology.

    Designed to curb the spread of advanced AI tools to rival nations, the regulation mandated licensing for the sale of high-end AI chips and imposed strict caps on computing power accessible to foreign recipients. Its goal was to slow the diffusion of sensitive AI capabilities that could enhance military or strategic applications abroad.

    However, the Trump-era approach to export controls, which focused on a more targeted, bilateral framework, was poised to replace the Biden administration’s broader strategy.

    President Trump had announced plans to rescind the AI Diffusion Rule, criticizing it as overly bureaucratic and potentially hindering U.S. innovation. Instead, his administration favored engaging in country-specific agreements to control export practices, aiming for a more adaptable, case-by-case approach.

    Though the AI Diffusion Rule was ultimately rolled back, the Bureau of Industry and Security (BIS) signaled a renewed emphasis on enforcing existing regulations. The agency issued a notice reinforcing actions against companies with a “high probability” of violations, warning that increased scrutiny would be applied to entities with knowledge of potential breaches.

    Whether this latest attempt to advance American interests meets a similar fate remains to be seen.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • Bank Hacking Has Doubled Since 2023 And Investors Are Getting Spooked

    [ad_1]

    Financial institutions are navigating a growing cybersecurity minefield, with data breaches doubling since 2023 and increasingly taking a toll on companies’ market confidence and regulatory standing.

    According to a report from AInvest, third-party breaches in the financial sector have doubled since 2023. The report also found average breach costs hitting $4.8 million, with insider-related incidents costing $17.4 million per organization.

    With cyberattacks via third-party vendors and insiders rising, investors are beginning to scrutinize fintech and banking stocks for cyber resiliency as intensely as for earnings per share.

    Hacks of this type often take around 80 days to contain, illustrating how defenders still struggle to keep up with threats in real time.

    Hacks are growing in size and impact

    The consequences also go beyond balance sheets: Santander’s 2024 cross-border data breach, for instance, dented its market standing even before regulatory fines were levied.

    In that attack, 30 million customers from Spain, Uruguay, and Chile, along with some Santander employees, had their personal data compromised, including identifiers like social security numbers. In October 2024, the bank was fined €50,000 by the Spanish data protection agency (AEPD) for failing to report the breach and violating the General Data Protection Regulation (GDPR).

    “Following an investigation, we have now confirmed that certain information relating to customers of Santander Chile, Spain and Uruguay, as well as all current and some former Santander employees of the group had been accessed,” it said in a statement posted at the time.

    “No transactional data, nor any credentials that would allow transactions to take place on accounts are contained in the database, including online banking details and passwords.”

    A rising tide of threats

    These trends align with research from the International Monetary Fund, which found that the growing scale and sophistication of cyberattacks on financial infrastructure are now large enough to threaten economic stability.

    The cost of cyber losses, once a breach has been detected, disclosed to customers, and penalized by regulators, has soared as high as $2.5 billion, accounting for reputational, regulatory, and remediation impacts.

    Investors are also seeing a shift in the political and regulatory landscape. The European Union’s Digital Operational Resilience Act (DORA) and the UK’s Cyber Resilience Bill are ushering in higher standards for third-party risk and digital continuity in financial services.

    Meanwhile, the Reserve Bank of India is demanding that banks deploy “AI-aware” defenses under a zero-trust framework, citing systemic risks tied to vendor lock-ins. For investors and regulators, cybersecurity is no longer just an IT concern; it’s a board-level strategic imperative.

    The real-world cost of cyber vulnerability

    In the UK, institutions like HSBC and Santander continue logging dozens of service outages each year, despite investments in cybersecurity and modernization. Barclays alone reported 33 outages between 2023 and 2025, an alarming reminder of the fragility of complex, dated infrastructure.

    Similarly, a surge in phishing and third-party breaches is forcing firms to redirect resources toward building resilience-based infrastructure. New findings show that 45% of employees at large financial institutions remain susceptible to clicking malicious links, making human error a critical attack vector even with technical safeguards in place.

    Thinking of investing in bank stocks?

    For investors, the key takeaway is clear: cybersecurity maturity must factor into valuation and stock selection, especially within the fintech and banking sectors.

    Companies investing in zero-trust architecture, which requires strict verification of every user, device, and application before granting access to resources, as well as in AI-based anomaly detection, are likely to be better protected and therefore safer bets for investors hoping to avoid hacks.
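
    As a loose illustration of what that “strict verification of every user, device, and application” means in practice, here’s a toy access check. The names and checks are made up for the example; real deployments lean on identity providers, device posture services, and policy engines rather than hard-coded lists.

        # Toy zero-trust access decision: deny by default, and grant only when
        # the user, the device, and the application all verify on every request.
        from dataclasses import dataclass

        @dataclass
        class AccessRequest:
            user_id: str
            device_id: str
            app_id: str
            resource: str

        TRUSTED_USERS = {"analyst-042"}
        HEALTHY_DEVICES = {"laptop-7f3"}         # e.g. managed, patched, disk-encrypted
        AUTHORIZED_APPS = {"payments-dashboard"}

        def grant_access(req: AccessRequest) -> bool:
            """Grant only if user, device, and app all check out; otherwise deny."""
            return (req.user_id in TRUSTED_USERS
                    and req.device_id in HEALTHY_DEVICES
                    and req.app_id in AUTHORIZED_APPS)

        print(grant_access(AccessRequest("analyst-042", "laptop-7f3", "payments-dashboard", "ledger")))  # True
        print(grant_access(AccessRequest("analyst-042", "byod-phone", "payments-dashboard", "ledger")))  # False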

    Additionally, companies that run rigorous quarterly audits of their third-party cybersecurity plans tend to enjoy noticeably more confidence from the capital markets.

    Operational resilience is another critical factor, with institutions that participate in cyber war games and incident response exercises, organized by entities like the Federal Reserve and FS-ISAC, being viewed more favorably.

    Another sign banks take security seriously? Financial institution leaders who prioritize employee cybersecurity training are recognized for effectively closing the most dangerous gaps in the defense chain, enhancing overall human risk management.

    Security as a competitive edge

    The confluence of regulatory pressure, rising financial fallout, and geopolitical cyber threats means investors can no longer afford to overlook cybersecurity metrics. Firms that treat defense as a cost center may ultimately come off worse than those that regard it as a strategic asset.

    Financial institutions that embrace robust cyber hygiene, anticipate evolving threats—including AI and quantum risks—and align with regulatory expectations, could well distinguish themselves as proven leaders rather than potential liabilities. The security of tomorrow’s balance sheet may well depend on the strength of today’s defenses.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link

  • Waymo Says You’re Not Getting its Footage Without a Warrant

    [ad_1]

    Waymo is quietly drawing new boundaries around how authorities can access data from its autonomous vehicles. The company said it will reject any requests that are not backed by valid legal process, such as a warrant or court order.

    The move is one of several signaling a growing tension between innovation, privacy, and law enforcement power.

    A new privacy guardrail

    Waymo co-CEO Tekedra Mawakana recently emphasized that the company will challenge, limit, or reject robotaxi footage requests from law enforcement that are not backed by a valid legal process, such as a warrant or court order.

    She stressed that while the company “follows the legal process to receive footage,” it reserves the right to push back on overly broad or undefined demands—a move aimed at preserving rider trust.

    Each Waymo vehicle is outfitted with 29 external cameras, offering a comprehensive 360-degree view, and potentially carries additional internal sensors as well. Those devices create a new surveillance frontier, prompting concerns about misuse of private data.

    Wired earlier reported that while Waymo does comply with formal legal requests, the company doesn’t disclose how often or under what circumstances footage is shared. That has led to questions about data retention, misuse risks, and surveillance creep, especially when such practices are invisible to the public.

    Law, trust, and public perception

    Waymo’s stance isn’t just policy. It’s a strategic response to evolving public expectations. The company now systematically informs the public when law enforcement requests arise, arguing this transparency is crucial to maintaining community confidence.

    Legal scholars highlight that demands for robotaxi footage fall into unsettled territory: even with Fourth Amendment protections in play, striking a balance between privacy and legitimate investigative needs remains delicate.

    Waymo’s proactive stance contrasts sharply with other self-driving players. Footage from its robotaxis has previously been used by police during protest investigations, but always following warrants or subpoenas.

    However, the company has faced vandalism and public backlash in heated moments, such as when robotaxis were burned during demonstrations, intensifying concerns about surveillance and public safety.

    In pushing back, Waymo signals a paradox of autonomy: to be accepted, robotaxis must prove themselves not only safe but also respectful of rights. Legal clarity and public trust may prove more valuable than the footage itself in shaping the regulatory and cultural roadmap for autonomous mobility.

    [ad_2]

    Riley Gutiérrez McDermid

    Source link