ReportWire

Tag: Deepfake

  • Google prohibits ads promoting websites and apps that generate deepfake porn

    Google has updated its Inappropriate Content Policy to include language that expressly prohibits advertisers from promoting websites and services that generate deepfake pornography. While the company already has strong restrictions in place for ads that feature certain types of sexual content, this update leaves no doubt that promoting “synthetic content that has been altered or generated to be sexually explicit or contain nudity” is in violation of its rules.

    Any advertiser promoting sites or apps that generate deepfake porn, show instructions on how to create it, or endorse or compare various deepfake porn services will be suspended without warning and permanently barred from publishing ads on Google. The company will start enforcing the rule on May 30 and is giving advertisers the chance to remove any ads that violate the new policy beforehand. As 404 Media notes, the rise of deepfake technologies has led to a growing number of ads promoting tools that specifically target users who want to create sexually explicit material. Some of those tools reportedly even pose as wholesome services to get listed on the Apple App Store and Google Play Store, but the masks come off on social media, where they promote their ability to generate manipulated porn.

    Google has, however, already started prohibiting services that create sexually explicit deepfakes in Shopping ads. Similar to its upcoming wider policy, the company has banned Shopping ads for services that “generate, distribute, or store synthetic sexually explicit content or synthetic content containing nudity.” Those include deepfake porn tutorials and pages that advertise deepfake porn generators.

    Mariella Moon

  • Drake deletes AI-generated Tupac track after Shakur’s estate threatened to sue

    Drake apparently learned it isn’t wise to mess with Tupac Shakur — even decades after his untimely death. Billboard first spotted that the Canadian hip-hop artist deleted the X (Twitter) post with his track “Taylor Made Freestyle,” which used an AI-generated recreation of Shakur’s voice to try to get under Kendrick Lamar’s skin.

    The takedown came after an attorney representing the late hip-hop legend threatened to sue the Canadian rapper for his “unauthorized” use of Tupac’s voice if he didn’t remove it from social channels within 24 hours. However, the track was online for a week and — unsurprisingly — has been copiously reposted.

    “The Estate is deeply dismayed and disappointed by your unauthorized use of Tupac’s voice and personality,” Howard King, the attorney representing Shakur’s estate, wrote earlier this week in a cease-and-desist letter acquired by Billboard. “Not only is the record a flagrant violation of Tupac’s publicity and the estate’s legal rights, it is also a blatant abuse of the legacy of one of the greatest hip-hop artists of all time. The Estate would never have given its approval for this use.”

    Photo of the late Tupac Shakur (credit: 2PAC.com)

    King implied that using Shakur’s voice to diss Lamar was an especially egregious show of disrespect. Lamar, a 17-time Grammy winner and Pulitzer recipient, has spoken frequently about his deep admiration for Tupac, and the late rapper’s estate says the feelings are mutual. “The unauthorized, equally dismaying use of Tupac’s voice against Kendrick Lamar, a good friend to the Estate who has given nothing but respect to Tupac and his legacy publicly and privately, compounds the insult,” King wrote in the letter.

    Drake’s track also included an AI-generated clone of Snoop Dogg’s voice. The Doggystyle rapper and cannabis aficionado appeared surprised in a social post last week: “They did what? When? How? Are you sure?” He continued, “Why everybody calling my phone, blowing me up? What the fuck? What happened? What’s going on? I’m going back to bed. Good night.”

    However, the one-time Doggy Fizzle Televizzle host has a history of poker-faced coyness. Last year, he took to Instagram to solemnly announce he was “giving up smoke,” leading to rampant speculation about why the stoner icon would quit his favorite pastime. Soon after, his announcement was revealed as a PR stunt for Solo Stove — which, marketing gimmicks aside, makes some terrific bonfire pits.

    Will Shanklin

  • Microsoft’s AI tool can turn photos into realistic videos of people talking and singing

    Microsoft Research Asia has unveiled a new experimental AI tool called VASA-1 that can take a still image of a person — or the drawing of one — and an existing audio file to create a lifelike talking face out of them in real time. It has the ability to generate facial expressions and head motions for an existing still image and the appropriate lip movements to match a speech or a song. The researchers uploaded a ton of examples on the project page, and the results look good enough that they could fool people into thinking that they’re real.

    While the lip and head motions in the examples can still look a bit robotic and out of sync upon closer inspection, it’s clear that the technology could be misused to easily and quickly create deepfake videos of real people. The researchers themselves are aware of that potential and have decided not to release “an online demo, API, product, additional implementation details, or any related offerings” until they’re sure that their technology “will be used responsibly and in accordance with proper regulations.” They didn’t, however, say whether they plan to implement safeguards to prevent bad actors from using the technology for nefarious purposes, such as creating deepfake porn or running misinformation campaigns.

    The researchers believe their technology has a ton of benefits despite its potential for misuse. They said it can be used to enhance educational equity, as well as to improve accessibility for those with communication challenges, perhaps by giving them access to an avatar that can communicate for them. It can also provide companionship and therapeutic support for those who need it, they said, suggesting that VASA-1 could be used in programs that offer access to AI characters people can talk to.

    According to the paper published with the announcement, VASA-1 was trained on the VoxCeleb2 Dataset, which contains “over 1 million utterances for 6,112 celebrities” that were extracted from YouTube videos. Even though the tool was trained on real faces, it also works on artistic photos like the Mona Lisa, which the researchers amusingly combined with an audio file of Anne Hathaway’s viral rendition of Lil Wayne’s Paparazzi. It’s so delightful, it’s worth a watch, even if you’re doubting what good a technology like this can do.

    Mariella Moon

  • AI expert says Princess Kate photo scandal shows our

    London — The European Parliament passed the world’s first comprehensive law regulating the use of artificial intelligence on Wednesday, as controversy swirled around an edited photo of Catherine, the Princess of Wales, that experts say illustrates how even the awareness of new AI technologies is affecting society.

    “The reaction to this image, if it were released before, pre-the big AI boom we’ve seen over the last few years, probably would be: ‘This is a really bad job with editing or Photoshop,’” Henry Ajder, an expert on AI and deepfakes, told CBS News. “But because of the conversation about Kate Middleton being absent in the public eye and the kind of conspiratorial thinking that that’s encouraged, when that combines with this new, broader awareness of AI-generated images… the conversation is very, very different.”

    Princess Kate, as she’s most often known, admitted to “editing” the photo of herself and her three children that was posted to her official social media accounts on Sunday. Neither she nor Kensington Palace provided any details of what she had altered on the photo, but one royal watcher told CBS News it could have been a composite image created from a number of photographs.

    Ajder said AI technology, and the rapid increase in public awareness of what it can do, means people’s “sense of shared reality, I think, is being eroded further or more quickly than it was before.”

    Countering this, he said, will require work on the part of companies and individuals.

    What’s in the EU’s new AI Act?

    The European Union’s new AI Act takes a risk-based approach to the technology. For lower risk AI systems such as spam filters, companies can choose to follow voluntary codes of conduct.

    For technologies considered higher risk, where AI is involved in electricity networks or medical devices, for instance, there will be tougher requirements under the new law. Some uses of AI, such as police scanning people’s faces using AI technology while they’re in public places, will be outright banned apart from in exceptional circumstances.

    The EU says the law, which is expected to come into effect by early summer, “will guarantee the safety and fundamental rights of people and businesses when it comes to AI.”

    Losing “our trust in content”?

    Millions of people view dozens of images every day on their smartphones and other devices. Especially on small screens, it can be very difficult to detect inconsistencies that might indicate tampering or the use of AI, if it’s possible to detect them at all.

    “It shows our vulnerability towards content and towards how we make up our realities,” Ramak Molavi Vasse’i, a digital rights lawyer and senior researcher at the Mozilla Foundation, told CBS News. “If we cannot trust what we see, this is really bad. Not only do we have, already, a decrease in trust in institutions. We have a decrease in trust in media, we have a decrease in trust, even for big tech… and for politicians. So this part is really bad for democracies and can be destabilizing.”

    Vasse’i co-authored a recent report looking at the effectiveness of different methods of marking and detecting whether a piece of content has been generated using AI. She said there were a number of possible solutions, including educating consumers and technologists and watermarking and labeling images, but none of them are perfect.
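    One of the marking approaches of the kind Vasse’i’s report examined, invisible watermarking, can be illustrated with the simplest possible scheme: hiding bits in the least-significant bit of each pixel. The sketch below is purely illustrative and not from any real product (real provenance systems rely on signed metadata and far more robust marks), but it shows both how such a mark is embedded and why no scheme of this kind is perfect, since any recompression scrambles the lowest bits:

    ```python
    def embed_lsb(pixels, message_bits):
        """Hide message_bits in the least-significant bits of pixel values.

        `pixels` is a flat list of 0-255 grayscale intensities. This is
        the classic, fragile LSB watermark: visually invisible, but
        destroyed by any lossy re-encoding of the image.
        """
        if len(message_bits) > len(pixels):
            raise ValueError("message longer than cover image")
        out = list(pixels)
        for i, bit in enumerate(message_bits):
            out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
        return out

    def extract_lsb(pixels, n_bits):
        """Read back the first n_bits least-significant bits."""
        return [p & 1 for p in pixels[:n_bits]]
    ```

    Embedding changes each carrier pixel by at most 1, which is why the mark is imperceptible to viewers, and also why a single round of lossy compression is enough to erase it.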

    “I fear that the speed in which the development happens is too quick. We cannot grasp and really govern and control the technology that is kind of, not creating the problem in the first place, but accelerating the speed and distributing the problem,” Vasse’i told CBS News.

    “I think that we have to rethink the whole informational ecosystem that we have,” she said. “Societies are built on trust on a private level, on a democratic level. We need to recreate our trust in content.” 

    How can I know what I’m looking at is real?

    Ajder said that, beyond the wider aim of working toward ways to bake transparency around AI into our technologies and information ecosystems, it’s difficult on the individual level to tell whether AI has been used to change or create a piece of media. 

    That, he said, makes it vitally important for media consumers to identify sources that have clear quality standards.

    “In this landscape where there is increasing distrust and dismissal of this kind of legacy media, this is a time when actually traditional media is your friend, or at least it is more likely to be your friend than getting your news from random people tweeting out stuff or, you know, TikTok videos where you’ve got some guy in his bedroom giving you analysis of why this video is fake,” Ajder said. “This is where trained, rigorous investigative journalism will be better resourced, and it will be more reliable in general.”


    (Video: Creating a “lie detector” for deepfakes)

    He said tips about how to identify AI in imagery, such as watching to see how many times someone blinks in a video, can quickly become outdated as technologies are developing at lightning speed.

    His advice: “Try to recognize the limitations of your own knowledge and your own ability. I think some humility around information is important in general right now.”

  • Super Bowl LVIII, Told by AI Deepfakes

    Post Malone singing “America the Beautiful”
    Photo: Midjourney

    Super Bowl LVIII was jam-packed with celebrities, love stories, angry outbursts, and even some football. Many of us watched the Super Bowl on TV with our own two eyes, but Gizmodo set out to learn what the big game would have looked like through the eyes of an AI image generator.

    Gizmodo used Midjourney to create visual representations of some of the Super Bowl’s biggest moments. AI deepfakes are slowly becoming a central component of our society, so we figured we might as well get ahead of the curve, and just make these before someone else does. Some are surprisingly accurate while others are painfully wrong. Maybe in the future, we won’t even need a real Super Bowl. We can just AI deepfake the whole thing.

    Maxwell Zeff

  • The Taylor Swift deepfake debacle was frustratingly preventable | TechCrunch

    You know you’ve screwed up when you’ve simultaneously angered the White House, the TIME Person of the Year, and pop culture’s most rabid fanbase. That’s what happened last week to X, the Elon Musk-owned platform formerly called Twitter, when AI-generated, pornographic deepfake images of Taylor Swift went viral.

    One of the most widespread posts of the nonconsensual, explicit deepfakes was viewed more than 45 million times, with hundreds of thousands of likes. That doesn’t even factor in all the accounts that reshared the images in separate posts – once an image has been circulated that widely, it’s basically impossible to remove.

    X lacks the infrastructure to identify abusive content quickly and at scale. Even in the Twitter days, this issue was difficult to remedy, but it’s become much worse since Musk gutted so much of Twitter’s staff, including the majority of its trust and safety teams. So, Taylor Swift’s massive and passionate fanbase took matters into their own hands, flooding search results for queries like “taylor swift ai” and “taylor swift deepfake” to make it more difficult for users to find the abusive images. As the White House’s press secretary called on Congress to do something, X simply banned the search term “taylor swift” for a few days. When users searched the musician’s name, they would see a notice that an error had occurred.

    This content moderation failure became a national news story, since Taylor Swift is Taylor Swift. But if social platforms can’t protect one of the most famous women in the world, who can they protect?

    “If you have what happened to Taylor Swift happen to you, as it’s been happening to so many people, you’re likely not going to have the same amount of support based on clout, which means you won’t have access to these really important communities of care,” Dr. Carolina Are, a fellow at Northumbria University’s Centre for Digital Citizens in the U.K., told TechCrunch. “And these communities of care are what most users are having to resort to in these situations, which really shows you the failure of content moderation.”

    Banning the search term “taylor swift” is like putting a piece of Scotch tape on a burst pipe. There are many obvious workarounds, like how TikTok users search for “seggs” instead of sex. The search block was something X could implement to make it look like it was doing something, but it doesn’t stop people from simply searching “t swift” instead. Copia Institute and Techdirt founder Mike Masnick called the effort “a sledge hammer version of trust & safety.”

    “Platforms suck when it comes to giving women, non-binary people and queer people agency over their bodies, so they replicate offline systems of abuse and patriarchy,” Are said. “If your moderation systems are incapable of reacting in a crisis, or if your moderation systems are incapable of reacting to users’ needs when they’re reporting that something is wrong, we have a problem.”

    So, what should X have done to prevent the Taylor Swift fiasco anyway?

    Are asks these questions as part of her research, and proposes that social platforms need a complete overhaul of how they handle content moderation. Recently, she conducted a series of roundtable discussions with 45 internet users from around the world who are impacted by censorship and abuse to issue recommendations to platforms about how to enact change.

    One recommendation is for social media platforms to be more transparent with individual users about decisions regarding their account or their reports about other accounts.

    “You have no access to a case record, even though platforms do have access to that material – they just don’t want to make it public,” Are said. “I think when it comes to abuse, people need a more personalized, contextual and speedy response that involves, if not face-to-face help, at least direct communication.”

    X announced this week that it would hire 100 content moderators to work out of a new “Trust and Safety” center in Austin, Texas. But under Musk’s purview, the platform has not set a strong precedent for protecting marginalized users from abuse. It can also be challenging to take Musk at face value, since the mogul has a long track record of failing to deliver on his promises. When he first bought Twitter, Musk declared he would form a content moderation council before making major decisions. This did not happen.

    In the case of AI-generated deepfakes, the onus is not just on social platforms. It’s also on the companies who create consumer-facing generative AI products.

    According to an investigation by 404 Media, the abusive depictions of Swift came from a Telegram group devoted to creating nonconsensual, explicit deepfakes. The users in the group often use Microsoft Designer, which draws from OpenAI’s DALL-E 3 to generate images based on inputted prompts. In a loophole that Microsoft has since addressed, users could generate images of celebrities by writing prompts like “taylor ‘singer’ swift” or “jennifer ‘actor’ aniston.”

    Shane Jones, a principal software engineering lead at Microsoft, wrote a letter to the Washington state attorney general stating that he found vulnerabilities in DALL-E 3 in December that made it possible to “bypass some of the guardrails that are designed to prevent the model from creating and distributing harmful images.”

    Jones alerted Microsoft and OpenAI to the vulnerabilities, but after two weeks, he had received no indication that the issues were being addressed. So, he posted an open letter on LinkedIn to urge OpenAI to suspend the availability of DALL-E 3. Jones alerted Microsoft to his letter, but he was swiftly asked to take it down.

    “We need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public,” Jones wrote in his letter to the state attorney general. “Concerned employees, like myself, should not be intimidated into staying silent.”

    As the world’s most influential companies bet big on AI, platforms need to take a proactive approach to regulate abusive content – but even in an era when making celebrity deepfakes wasn’t so easy, violative behavior easily evaded moderation.

    “It really shows you that platforms are unreliable,” Are said. “Marginalized communities have to trust their followers and fellow users more than the people that are technically in charge of our safety online.”

    Amanda Silberling

  • Taylor Swift searches on X paused as part of ‘temporary action’ as deepfake explicit images spread

    Elon Musk’s social media platform X has blocked searches for Taylor Swift as pornographic deepfake images of the singer have circulated online.

    Attempts to search for her name on the site resulted in an error message and a prompt for users to retry their search, which added, “Don’t fret – it’s not your fault.”

    Searches for variations of her name such as “taylorswift” and “Taylor Swift AI” turned up the same error messages.

    RELATED: Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

    Sexually explicit and abusive fake images of Swift began circulating widely last week on X, making her the most famous victim of a scourge that tech platforms and anti-abuse groups have struggled to fix.

    “This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Joe Benarroch, head of business operations at X, said in a statement to multiple news outlets.

    After the images began spreading online, the singer’s devoted fanbase of “Swifties” quickly mobilized, launching a counteroffensive on X and a #ProtectTaylorSwift hashtag to flood it with more positive images of the pop star. Some said they were reporting accounts that were sharing the deepfakes.

    The deepfake-detecting group Reality Defender said it tracked a deluge of nonconsensual pornographic material depicting Swift, particularly on X. Some images also made their way to Meta-owned Facebook and other social media platforms.

    The researchers found at least a couple dozen unique AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift that objectified her and in some cases inflicted violent harm on her deepfake persona.

    Researchers have said the number of explicit deepfakes has grown in the past few years, as the technology used to produce such images has become more accessible and easier to use.

    In 2019, a report released by the AI firm DeepTrace Labs showed these images were overwhelmingly weaponized against women. Most of the victims, it said, were Hollywood actors and South Korean K-pop singers.

    Copyright © 2024 by The Associated Press. All Rights Reserved.

    AP

  • Taylor Swift deepfakes spread online, sparking outrage

    Pornographic deepfake images of Taylor Swift are circulating online, making the singer the most famous victim of a scourge that tech platforms and anti-abuse groups have struggled to fix.

    Sexually explicit and abusive fake images of Swift began circulating widely this week on the social media platform X.

    Her ardent fanbase of “Swifties” quickly mobilized, launching a counteroffensive on the platform formerly known as Twitter and a #ProtectTaylorSwift hashtag to flood the social media site with more positive images of the pop star. Some said they were reporting accounts that were sharing the deepfakes.

    The Screen Actors Guild released a statement on the issue Friday, calling the images of Swift “upsetting, harmful, and deeply concerning,” adding that “the development and dissemination of fake images — especially those of a lewd nature — without someone’s consent must be made illegal.”

    The deepfake-detecting group Reality Defender said it tracked a deluge of nonconsensual pornographic material depicting Swift, particularly on X. Some images also made their way to Meta-owned Facebook and other social media platforms.

    “Unfortunately, they spread to millions and millions of users by the time that some of them were taken down,” said Mason Allen, Reality Defender’s head of growth.

    The researchers found at least a couple dozen unique AI-generated images. The most widely shared were football-related, showing a painted or bloodied Swift that objectified her, and in some cases, inflicted violent harm on her deepfake persona.

    This comes after earlier this month an AI-generated video featuring Swift’s likeness endorsing a fake Le Creuset cookware giveaway also made the rounds online. It was unclear who was behind that scam, and Le Creuset issued an apology to those who may have been duped.

    Researchers have said the number of explicit deepfakes has grown in the past few years, as the technology used to produce such images has become more accessible and easier to use. In 2019, a report released by the AI firm DeepTrace Labs showed these images were overwhelmingly weaponized against women. Most of the victims, it said, were Hollywood actors and South Korean K-pop singers.

    Brittany Spanos, a senior writer at Rolling Stone who teaches a course on Swift at New York University, says Swift’s fans are quick to mobilize in support of the artist, especially those who take their fandom very seriously and in situations of wrongdoing.

    “This could be a huge deal if she really does pursue it to court,” she said.

    When reached for comment on the fake images of Swift, X directed the Associated Press to a post from its safety account that said the company strictly prohibits the sharing of non-consensual nude images on its platform. The company has sharply cut back its content-moderation teams since Elon Musk took over the platform in 2022.

    “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them,” the company wrote in the X post early Friday morning. “We’re closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.”

    Meanwhile, Meta said in a statement that it strongly condemns “the content that has appeared across different internet services” and has worked to remove it.

    “We continue to monitor our platforms for this violating content and will take appropriate action as needed,” the company said.

    A representative for Swift didn’t immediately respond to a request for comment Friday.

    Allen said researchers are 90% confident that the images were created by diffusion models, which are a type of generative artificial intelligence model that can produce new and photorealistic images from written prompts. The most widely known are Stable Diffusion, Midjourney and OpenAI’s DALL-E. Allen’s group didn’t try to determine the provenance.

    Microsoft, which offers an image-generator based partly on DALL-E, said Friday that it was in the process of investigating whether its tool was misused. Much like other commercial AI services, it said it doesn’t allow “adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service.”

    Asked about the Swift deepfakes on “NBC Nightly News,” Microsoft CEO Satya Nadella said Friday that there’s a lot still to be done in setting AI safeguards and “it behooves us to move fast on this.”

    “Absolutely this is alarming and terrible, and so therefore yes, we have to act,” Nadella said.

    Midjourney, OpenAI and Stable Diffusion-maker Stability AI didn’t immediately respond to requests for comment.

    Federal lawmakers who’ve introduced bills to put more restrictions or criminalize deepfake porn indicated the incident shows why the U.S. needs to implement better protections.

    “For years, women have been victims of non-consensual deepfakes, so what happened to Taylor Swift is more common than most people realize,” said Rep. Yvette D. Clarke, a Democrat from New York, who’s introduced legislation that would require creators to digitally watermark deepfake content.

    Rep. Joe Morelle, another New York Democrat pushing a bill that would criminalize sharing deepfake porn online, said what happened to Swift was disturbing and has become more and more pervasive across the internet.

    “The images may be fake, but their impacts are very real,” Morelle said in a statement. “Deepfakes are happening every day to women everywhere in our increasingly digital world, and it’s time to put a stop to them.”

  • Deepfakes Are on the Rise — Will They Change How Businesses Verify Their Users? | Entrepreneur

    Opinions expressed by Entrepreneur contributors are their own.

    You know how you can’t do anything these days, whether opening a bank account or just hopping onto a car-sharing service, without proving who you are? With online identity verification becoming more integrated into daily life, fraudsters have become more interested in outsmarting the system.

    Criminals are investing more money and effort to overcome security solutions. Their ultimate weapon is deepfakes: impersonating real people using artificial intelligence (AI) techniques. Now, the multi-million-dollar question is: Can organizations effectively employ AI to beat fraudsters at their own game?

    According to a Regula identity verification report, a whopping one-third of global businesses have already fallen victim to deepfake fraud, with fraudulent activities involving deepfake voice and video posing significant threats to the banking sector.

    For instance, fraudsters can easily pretend to be you to get access to your bank account. Stateside, almost half of the companies surveyed confessed to being targeted with voice deepfakes last year, beating the global average of 29%. It’s like a blockbuster heist, but in the digital realm.

    And as AI technology for creating deepfakes becomes more accessible, the risk of businesses being affected only increases. That poses a question: Should the identity verification process be adjusted?

    Related: Deepfake Scams Are Becoming So Sophisticated, They Could Start Impersonating Your Boss And Coworkers

    Endless race

    Luckily, we’re not at the “Terminator” stage yet. Right now, most deepfakes are still detectable, either by eagle-eyed humans or by AI technologies that have been integrated into ID verification solutions for quite some time. But don’t let your guard down. Deepfake threats are evolving quickly; we are on the verge of seeing persuasive samples that scarcely arouse any suspicion, even under deliberate scrutiny.

    The good news is that AI, the superhero we’ve enlisted to fight good old “handmade” identity fraud, is now being trained to spot fake content created by its fellow AI models. How does it manage this magic? First of all, AI models don’t work in a vacuum; human-fed data and clever algorithms shape them. Researchers can therefore develop AI-powered tools trained specifically to root out synthetic fraud and deepfakes.

    The core idea of this protective technology is to be on the lookout for anything fishy or inconsistent while doing those ID liveness checks and “selfie” sessions (where you snap a live pic or video with your ID). An AI-powered identity verification solution becomes the digital Sherlock Holmes. It can detect both changes that occur over time, like shifts in lighting or movement, and sneaky changes within the image itself – like tricky copy-pasting or image stitching.
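    As a deliberately simplified illustration of that hunt for “sneaky changes within the image itself,” the toy function below flags regions that repeat exactly within one image. This is only a crude stand-in for real copy-paste detection, which production systems perform with learned forensic features rather than exact pixel matches; every name here is illustrative, not from any real product:

    ```python
    from collections import defaultdict

    def find_duplicated_blocks(pixels, block=4):
        """Flag identical non-overlapping blocks in a grayscale image.

        `pixels` is a 2D list of 0-255 intensities. Exact duplicate
        blocks are a crude proxy for the copy-paste artifacts an ID
        verification system looks for; real detectors tolerate noise,
        rescaling and recompression, which this sketch does not.
        """
        seen = defaultdict(list)
        h, w = len(pixels), len(pixels[0])
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                # Use the block's raw pixel values as its signature.
                key = tuple(pixels[y + dy][x + dx]
                            for dy in range(block) for dx in range(block))
                seen[key].append((y, x))
        # Any signature appearing more than once is suspicious.
        return [locs for locs in seen.values() if len(locs) > 1]
    ```

    Running it on an image where one patch has been pasted over another returns the coordinates of both copies, which a reviewer (or a downstream model) could then inspect.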

    Fortunately, AI-generated fraud still has blind spots, and organizations should exploit them. Deepfakes, for instance, often fail to render shadows correctly and have odd backgrounds. Fake documents typically lack optically variable security elements and fail to project specific images at certain angles.

    Another key challenge criminals face is that many AI models are primarily trained using static face images, mainly because those are more readily available online. These models struggle to deliver realism in liveness “3D” video sessions, where individuals must turn their heads.

    One more vulnerability organizations can use is that manipulating documents for authentication is harder than attempting to use a fake face (or to "swap a face") during a liveness session. This is because criminals typically have access only to flat, two-dimensional ID scans. Moreover, modern IDs often incorporate dynamic security features that are visible only when the documents are in motion. The industry is constantly innovating in this area, making it nearly impossible to create convincing fake documents that can pass a capture session with liveness validation, where the documents must be rotated at different angles. Hence, requiring physical IDs for a liveness check can significantly boost an organization's security.
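    The document-rotation check can be sketched in a few lines: an optically variable element (a hologram or colour-shifting ink) should change appearance as the ID tilts, while a flat printout or replayed scan stays constant. This is a toy illustration of the principle, not a real verifier; the function name, region convention and the 5.0 spread threshold are made-up placeholders.

```python
import numpy as np

def security_element_responds(captures, region, min_spread=5.0):
    """Check whether a declared security-element region actually changes
    across capture angles.

    `captures` are grayscale frames taken as the document rotates;
    `region` is a (row_slice, col_slice) pair marking where the hologram
    should be. Optically variable elements shift brightness with angle;
    a flat 2D copy does not.
    """
    rows, cols = region
    patch_means = [np.asarray(c, dtype=float)[rows, cols].mean() for c in captures]
    return bool(np.std(patch_means) >= min_spread)

# A hologram patch that brightens as the document tilts passes the check;
# an identical patch in every frame (a flat copy) fails it.
moving = [np.full((20, 20), v) for v in (60, 90, 120)]
flat = [np.full((20, 20), 90)] * 3
patch = (slice(5, 15), slice(5, 15))
print(security_element_responds(moving, patch))  # True
print(security_element_responds(flat, patch))    # False
```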

    While the AI training for ID verification solutions keeps evolving, it’s essentially a constant cat-and-mouse game with fraudsters, and the results are often unpredictable. It is even more intriguing that criminals are also training AI to outsmart enhanced AI detection, creating a continuous cycle of detection and evasion.

    Take age verification, for example. Fraudsters can employ masks and filters that make people appear older during a liveness test. In response to such tactics, researchers are pushed to identify fresh cues or signs of manipulated media and train their systems to spot them. It’s a back-and-forth battle that keeps going, with each side trying to outsmart the other.

    Related: The Deepfake Threat is Real. Here Are 3 Ways to Protect Your Business

    Maximum level of security

    In light of all we’ve explored thus far, the question looms: What steps should we take?

    First, to achieve the highest level of security in ID verification, toss out the old playbook and embrace a liveness-centric approach for identity checks. What’s the essence of it?

    While most AI-generated forgeries still lack the naturalness needed for convincing liveness sessions, organizations seeking maximum security should work exclusively with physical objects — no scans, no photos — just real documents and real people.

    In the ID verification process, the solution must validate both the liveness and authenticity of the document and the individual presenting it.

    This should also be supported by an AI verification model trained to detect even the most subtle video or image manipulations, which may be invisible to the human eye. The solution can also analyze contextual signals that flag abnormal user behavior: the device used to access a service, its location, interaction history, image stability and other factors that help verify the authenticity of the identity in question. It's like piecing together a puzzle to determine whether everything adds up.
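    The "piecing together a puzzle" step is essentially risk scoring over those contextual signals. The sketch below shows the shape of such a check; the signal names, weights and example values are all invented for illustration and are not taken from any real verification product.

```python
def identity_risk_score(signals, weights=None):
    """Combine contextual verification signals into a single risk score.

    `signals` maps signal names to values in [0, 1], where 1 is most
    suspicious (e.g. unknown device, geolocation mismatch, unstable
    image, no prior interaction history). Weights are illustrative.
    """
    weights = weights or {
        "unknown_device": 0.3,
        "location_mismatch": 0.3,
        "no_interaction_history": 0.2,
        "image_instability": 0.2,
    }
    score = sum(weights.get(name, 0.0) * value for name, value in signals.items())
    return round(score, 3)

session = {
    "unknown_device": 1.0,         # first time this device is seen
    "location_mismatch": 0.0,      # IP geolocation matches the ID's country
    "no_interaction_history": 1.0, # no prior sessions for this identity
    "image_instability": 0.2,      # mostly steady capture
}
print(identity_risk_score(session))  # 0.54
```

    A real system would feed a score like this into a policy (approve, step-up verification, or reject) rather than making a binary call from one number.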

    And one final tip: ask customers to use their mobile phones during liveness sessions rather than a computer's webcam. It is generally much harder for fraudsters to swap images or videos when the feed comes from a mobile phone's camera.

    To wrap it up, AI is the ultimate sidekick for the good guys, ensuring the bad guys can't sneak past defenses. Still, AI models need guidance from us humans to stay on the right track. Together, we are superb at spotting fraud.

    [ad_2]

    Ihar Kliashchou

    Source link

  • From big screen to picket line: Why your favourite U.S. actors are striking – National | Globalnews.ca

    From big screen to picket line: Why your favourite U.S. actors are striking – National | Globalnews.ca

    [ad_1]

    Some of Canadians’ favourite Hollywood actors will officially be taking a break from the big screen to join the picket line.

    The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) made the decision Thursday to join Hollywood's writers' union in a strike. Observers say the actors' union's decision largely comes down to a demand for compensation from studios and streaming services that keeps up with inflation.

    “The compensation issues include both upfront compensation, the session fees, the money they’re paid when they do the work, and also residuals or royalties that actors, and also writers and directors get paid when product is rerun or reused,” said Los Angeles entertainment lawyer Jonathan Handel in an interview with Global News.

    When it comes to streaming, actors are concerned that being on a successful show on a service like Netflix or Prime Video won't earn them higher compensation than being on one that draws less buzz.


    “‘Wednesday’ doesn’t pay any higher residual than ‘Tuesday’ as it works,” Handel said, referencing the recent Netflix series produced and partially directed by Tim Burton.

    American producer Tom Nunan told Global News that actors are increasingly being paid one lump sum for their work on streaming services. Now, they want longer financial relationships with their content, similar to how they have been paid by non-streamers, and more transparency in the way streaming services measure success.


    Video: The impact of the Hollywood strike on Canada


    Before streaming services, “actors would have a movie or TV show premiere and then get paid for that one thing and then it would be on cable systems or on demand… and they would continue to have what we call residual relationships with the content financially,” Nunan said.

    “Now in the streaming era, you get paid once and that’s all you get paid.”


    Attending a photo event on Wednesday, film star Matt Damon said that while everyone was hoping a strike could be averted, many actors need a fair contract to survive.

    “We ought to protect the people who are kind of on the margins,” Damon told The Associated Press. “And 26,000 bucks a year is what you have to make to get your health insurance. And there are a lot of people whose residual payments are what carry them across that threshold… And that’s absolutely unacceptable. We can’t have that.”


    Actor Rosario Dawson attends a rally by striking writers and actors outside Warner Bros. studios in Burbank, Calif. on Friday, July 14, 2023. (AP Photo/Mark J. Terrill).


    Actor Jac Cheairs and his son Wyatt, 11, take part in a rally by striking writers and actors outside Netflix studio in Los Angeles on Friday, July 14, 2023. (AP Photo/Chris Pizzello).


    Actor Dermot Mulroney takes part in a rally by striking writers and actors outside Netflix studio in Los Angeles on Friday, July 14, 2023. (AP Photo/Chris Pizzello).


    Actor Jason Sudeikis, center, walks a picket line with striking writers and actors, Friday, July 14, 2023 at NBC Universal Studios in New York. (AP Photo/Bebeto Matthews).


    Actors and comedians Tina Fey, second from right, and Fred Armisen, second from left, join striking members of the Writers Guild of America on the picket line during a rally outside Silvercup Studios, Tuesday May 9, 2023, in New York. (AP Photo/Bebeto Matthews).

    Another key issue in the strike is the use of artificial intelligence — or AI. Computer generated imagery (CGI) is already widely used in the industry to simulate crowds or audiences, for example.


    But as the digital age advances, studios have started to explore ways to convincingly replicate actors' voices and faces. Early rumblings of "deepfakes" already exist, where AI is used to fabricate images of events or make it appear that someone said something they didn't.

    Handel says the industry generally holds two schools of thought on the matter. Some actors say they don't have an issue with studios reproducing their likeness with AI, provided they are compensated for it. Others take issue with any use of AI at all, on authenticity grounds.

    “It’s a compromise between both sides of the table… but I think the unions are most likely to take the first position: that as long as there’s compensation that would be satisfactory,” Handel said.

    Nunan says he doesn't think there is a large risk of Canadians' favourite A-listers having their likeness replicated without their consent. Rather, lesser-known actors are more likely to have their features replicated without their knowledge, because they don't have the same protections from lawyers, agents and managers.


    Video: Hollywood actors join screenwriters on strike: 'We are being victimized by a very greedy entity'


    With actors and writers stepping away from U.S. productions, Handel says audiences may have to brace themselves for slightly different content for the time being. Reality television will be emphasized, he says, along with sports.


    There’s also an opportunity for foreign content with actors and writers who are not part of the striking unions.

    “Some companies, Netflix in particular, have proved very adept at creating content overseas and getting Americans to watch it. You know, “Squid Game,” for example. Netflix managed to do something that no one thought was possible, which is to get Americans to watch foreign content.”

    Nunan, on the other hand, does not see foreign content now dominating screens, but it “could be promoted more heavily,” he says.

    The actors' guild released a statement early Thursday announcing that its deadline for negotiations had passed without a contract.


    Video: BIV: Impact of Hollywood strikes on B.C. film industry


    “The companies have refused to meaningfully engage on some topics and on others completely stonewalled us. Until they do negotiate in good faith, we cannot begin to reach a deal,” said Fran Drescher, the star of “The Nanny” who is now the actors’ guild president.


    Members of the Writers Guild of America have been on strike since early May, slowing the production of film and television series on both coasts and in production centres like Atlanta.

    Handel said the dual actors’ and writers’ strike is a “win” for studios because “they’re not spending money on production.”

    With files from the Associated Press and Global News’ Reggie Cecchini.

    © 2023 Global News, a division of Corus Entertainment Inc.

    [ad_2]

    Naomi Barghiel

    Source link

  • Deepfake: Post The Bruce Willis Controversy What Disruption To Entertainment Could Be Caused

    Deepfake: Post The Bruce Willis Controversy What Disruption To Entertainment Could Be Caused

    [ad_1]

    At the beginning of October, there were numerous reports that veteran actor Bruce Willis had sold the rights to his face to the deepfake company Deepcake. Though these rumors were debunked by an official spokesperson for the actor, conversations around the technology have continued. How could it be used positively for the industry in the future, and could it negatively impact actors?

    Willis announced his retirement from acting in March after being diagnosed with a speech disorder known as aphasia. A report that he had sold the rights to his face was picked up by major news outlets, including the Daily Mail and The Telegraph. Though untrue, it got people's imaginations running about the possibilities of the technology.

    Deepfakes use artificial intelligence (AI) and machine learning to render realistic videos. The tech has so far been used to mimic celebrities and other well-known individuals with surprising accuracy. Willis had worked with Deepcake before on a deepfake project, an advert for the Russian telecoms company Megafon.

    The advert was shot and aired in 2021 and a Russian actor had Willis’ face superimposed over his using deepfake technology.

    The production, through Deepcake, had to collect numerous materials from Willis and obtain his consent to use his likeness in the advert.

    In a statement, Deepcake shed more light on the controversy surrounding the report:

    "The wording about rights is wrong… Bruce couldn't sell anyone any rights; they are his by default."

    The quote implies that Willis couldn't sell his rights even if he wanted to; however, his participation in the Russian advert suggests otherwise. Perhaps not long-term, but licensing could certainly be done on a project-by-project basis.

    If archival materials were all that was needed for Willis to be replicated so accurately, anyone could be deepfaked given the requisite archives. For those in the public eye, most of those materials are already in the public domain.

    Some organizations have warned that the technology could affect actors' livelihoods, and even that performers could be contracted out of their own voices and faces. Regardless, the business is growing.

    Deepfake-style voice technology has already been used for James Earl Jones, who recently retired as the voice of Darth Vader. His voice as Vader can continue: it was recently recreated for Disney's Obi-Wan Kenobi series by a company called Respeecher, which even made it sound younger to match the timeline the show is set in.

    The growth of the tech does bring questions of rights into play. Could estates that represent deceased celebrities position themselves to carry on their individual's legacy using deepfake technology? Is it ethical to do so? Music is still released by musicians who have passed away; Michael Jackson, Pop Smoke, and Tupac are notable examples. Though they recorded the vocals, did that mean they wanted the tracks released? Starting a new project using their likeness is potentially even more controversial, as it's something they can no longer comment on.

    Willis' situation is more unusual in that he can decide which projects to lend his name and likeness to. With this, could we see another layer to performance, with actors playing actors portraying characters in the future?

    The continued development of the technology will certainly be something to watch, as characters could live on irrespective of what happens to an actor, and scheduling conflicts could become a thing of the past. The passing of Chadwick Boseman is a prime example. Clearly, no one wanted to replace Boseman, but it was pivotal that the Black Panther character continued, with Disney deciding to carry the storyline forward after the death of T'Challa.

    Speaking with Empire, Marvel head Kevin Feige said of the matter: "It just felt like it was much too soon to recast."

    “Stan Lee always said that Marvel represents the world outside your window. And we had talked about how, as extraordinary and fantastical as our characters and stories are, there’s a relatable and human element to everything we do. The world is still processing the loss of Chad. And Ryan poured that into the story.”

    There's a lot to unpack with regard to ethics and process, but there is certainly the potential for mass disruption through deepfake technology.

    [ad_2]

    Josh Wilson, Contributor

    Source link