ReportWire

Tag: Deepfake

  • Audio of Epstein survivor’s account of the Clintons is AI


    A viral audio clip claims to reveal a victim’s testimony of abuse by former President Bill Clinton and his wife, former Secretary of State Hillary Clinton, on an island owned by sex offender Jeffrey Epstein.

    This audio clip is not real. It was generated with artificial intelligence.

    Hillary Clinton testified Feb. 26 before the House Oversight Committee as part of a probe into Epstein. Bill Clinton is expected to testify Feb. 27. Neither Clinton has been accused of wrongdoing or charged with a crime in connection to Epstein’s offenses.

    A Feb. 24 TikTok shows an image of Epstein with Bill Clinton and plays an audio clip of what the post calls a “survivor.”

    “You want the truth about who spent the most time on that island? Fine, I’ll give it to you straight, no filter. The former president. You know exactly which one. Yeah, Clinton. The survivors still call him number one,” the narrator said.

    Other Instagram and Facebook users also shared the audio clip. One post claimed it was the voice of Epstein survivor Virginia Giuffre, who died in April 2025. 

    In her Feb. 26 opening statement before the House Oversight Committee, Hillary Clinton said, “I do not recall ever encountering Mr. Epstein. I never flew on his plane or visited his island, homes or offices.”

    Detection models and experts say the audio is AI-generated

    We traced the audio to The People’s Voice, a frequent source of misinformation. It published a video in November that it said included a “newly leaked recording” from Giuffre. 

    The People’s Voice also recently published an AI-generated audio of a supposed “whistleblower” talking about television host Ellen DeGeneres, claiming the Epstein files exposed her as a cannibal. We rated that claim Pants on Fire.

    We used the DeepFake-O-Meter, developed by the University at Buffalo Media Forensics Lab, to analyze the audio clip about the Clintons. Results from four out of five detection models showed it was likely AI-generated.

    When we uploaded the audio clip to the AI speech classifier from ElevenLabs — a company that specializes in AI audio generation — it said, “it’s very likely that this audio was generated with ElevenLabs.”

    We also asked multiple experts to analyze the audio, and they said it was AI-generated. V.S. Subrahmanian, a Northwestern University computer science professor, and Marco Postiglione, a postdoctoral researcher who works with him, used 83 deepfake detection algorithms to analyze the audio. Sixty-seven found the audio was more likely to be fake than real.

    Subrahmanian and Postiglione also pointed to other signs of AI generation, including that the narrative seems “structured like written prose rather than spontaneous speech.”

    Siwei Lyu, a University at Buffalo computer science and engineering professor, said the audio included a 13-second segment without audible breath intakes. “Each sentence also ends with an abrupt cut to silence rather than fading out naturally, missing the subtle room tone and vocal decay you’d expect from a genuine recording,” he said.

    The voice’s pitch and delivery are also flat, said Hafiz Malik, University of Michigan – Dearborn electrical and computer engineering professor. He said it’s not likely for a human to speak for two minutes at the same rate without taking any pauses, like the voice in the audio clip does.

    The audio clip includes claims about the Clintons’ actions on Epstein’s island, Little Saint James in the U.S. Virgin Islands, including physical and verbal abuse of Epstein victims. 

    We found no verified reports of such anecdotes from Giuffre or other Epstein victims about the Clintons.  

    Did Giuffre say something about the Clintons?

    Giuffre’s memoir, “Nobody’s Girl,” published posthumously in 2025, mentioned that she was present when Epstein hosted Bill Clinton and former Vice President Al Gore for dinner on separate occasions. She also talked about a time in 2002 when Bill Clinton flew on Epstein’s plane, but Giuffre didn’t go with them. She noted that Clinton has said the trip was a humanitarian mission.

    Giuffre also referred to a 2011 article that said she “had never been ‘lent out’” to the former president, referring to Bill Clinton. 

    The book doesn’t mention Hillary Clinton.

    We found no evidence that audio from Giuffre was released after her death. On April 29, 2025, her family released a photo of one of Giuffre’s handwritten journal entries where she said she stood with survivors and encouraged them to fight for their rights. 

    This audio clip that posts say is an Epstein victim talking about abuse by the Clintons is fake. We rate it Pants on Fire!


  • State to use AI to improve government


    BOSTON — Artificial intelligence is being used for everything from guiding self-driving cars and developing life-saving medicines to powering online search engines that help you find a plumber or pick holiday gifts for your family.

    And the machine learning platform could soon be employed by the state government to speed up the processes of getting a state permit, renewing a vehicle registration or detecting fraud in public benefits programs.


    The Healey administration announced Friday that it plans to deploy ChatGPT’s artificial intelligence assistant platform in executive branch agencies with the goal of making state government work “better and faster” for residents.

    “This is about making government faster, more efficient, and more effective for the people we serve,” Gov. Maura Healey said in a prepared statement.

    Her administration said the AI rollout will be implemented as a phased approach across the executive branch “and will provide a safe and secure environment that protects state data.” The contract with ChatGPT was negotiated through a competitive procurement process, officials said.

    Once deployed, Massachusetts will be the first state to adopt the technology for the entire 40,000-employee executive branch, according to the Healey administration.

    The rollout of the new policy comes as state lawmakers are considering a myriad of proposals aimed at adding guardrails around use of the new technology.

    One proposal would require large artificial intelligence technology companies such as the online chatbot ChatGPT to register with the state Attorney General’s Office and disclose information about their algorithms.

    Another bill calls for banning “deepfakes” or computer-generated manipulations of a person’s voice or likeness using machine learning to create visual and audio content that appears to be real. The technology is being used to generate fake imagery for anything from “revenge porn” to political mudslinging.

    In 2024, Attorney General Andrea Campbell sought to tighten the reins on artificial intelligence developers, suppliers and users, issuing new guidance that warned them not to run afoul of the state’s laws on consumer protection, anti-discrimination and data security.

    Last week, the state House of Representatives approved a pair of bipartisan bills setting new restrictions on the use of artificial intelligence in political campaigning. The proposals would require campaigns to disclose the use of AI in political ads and ban “deceptive” communications in campaign ads 90 days before an election.

    ChatGPT, which was created by San Francisco-based OpenAI, an artificial intelligence research firm co-founded by Elon Musk, allows users to enter themes, prompts and guidelines into the AI system that comes up with a response as if a human wrote it.

    On its website, the company says the ChatGPT bot is a “safe and useful” AI system that interacts in a “conversational way” with users, making it possible to “answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

    But the emergence of AI technology has been steeped in controversy, with critics warning that Congress and state governments need to move quickly to set regulations governing its use.

    Healey administration officials say the rollout of ChatGPT will be done within a “walled-off, secure environment that protects state data and ensures that employee chat inputs do not train public AI models.” They said use of the technology will be governed by current state regulations and policies, which will be “regularly” updated.

    “By making ChatGPT available to the state workforce, we are empowering our employees with a secure, governed tool that can enhance service delivery while maintaining the highest standards for data privacy, security, and thoughtful, transparent usage of AI,” Jason Snyder, secretary of the Executive Office of Technology Services and Security, said in a statement.

    “Our focus is not just adopting AI, but doing so in a way that reflects our values, and strengthens trust with the residents we serve.”

    Christian M. Wade covers the Massachusetts Statehouse for North of Boston Media Group’s newspapers and websites. Email him at cwade@cnhinews.com.


    By Christian M. Wade | Statehouse Reporter


  • Beacon Hill targets AI in political advertising


    BOSTON — Doctored photos and video footage coupled with ads twisting candidates’ words have been used for decades in political campaigns, but the rise of artificial intelligence has elevated such deceptive tactics to a new level.

    That has prompted a bipartisan push on Beacon Hill for restrictions on the misuse of the technology to sway voters and bash political opponents.


    A pair of bills that cleared the Democratic-controlled House Ways and Means Committee on Tuesday with a favorable vote would require campaigns to disclose the use of AI in political advertisements and ban “deceptive” communications in campaign ads 90 days before an election.

    In a joint statement, House Speaker Ron Mariano, D-Quincy, and House Ways and Means Chairman Aaron Michlewitz, D-Boston, said House Democrats plan to put both bills up for debate and a vote at a formal session Wednesday.

    “As artificial intelligence continues to reshape our economy and many aspects of our daily lives, lawmakers have a responsibility to ensure that AI does not further the spread of misinformation in our politics,” they said.

    “House leadership continues to have productive conversations with the membership on this issue, and we look forward to passing this important legislation on Wednesday.”

    One bill, filed by Rep. Tricia Farley-Bouvier, D-Pittsfield, would prohibit anyone running for elected office from distributing deceptive or fraudulent “synthetic” ads within 90 days of an election in which the candidate or their political party will appear on state or local ballots. Violators would face fines of up to $1,000 under the proposal.

    Another bill, filed by House Minority Leader Brad Jones, R-North Reading, would require political campaigns to disclose the use of any AI technology to generate TV, digital or print ads targeting their opponents.

    Political observers anticipate an onslaught of sophisticated AI-generated video or audio clips in presidential ads for television and social media sites ahead of the pivotal November midterm election when control of Congress will be up for grabs.

    A 2024 report issued by the Congressional Research Service, a public policy research arm of Congress, warned that deepfakes could also be generated by rogue countries or foreign adversaries to meddle in the upcoming presidential elections.

    “State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately,” the report’s authors wrote. “Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.”

    In 2024, the Federal Election Commission voted to begin the process of regulating AI-generated deepfakes in political ads ahead of the 2024 election. The panel held a 60-day public hearing process, but has yet to take action on any new regulations. A lack of FEC commissioners means the panel does not have a quorum to meet or vote on sanctions.

    A group of congressional lawmakers, including Massachusetts Reps. Seth Moulton and Jim McGovern, wrote to the FEC in July 2024, urging the agency to act on a petition from good government groups to set restrictions on deepfake political advertising.

    “Quickly evolving AI technology makes it increasingly difficult for voters to accurately identify fraudulent video and audio material, which is increasingly troubling in the context of campaign advertisements,” they wrote.

    Christian M. Wade covers the Massachusetts Statehouse for North of Boston Media Group’s newspapers and websites. Email him at cwade@cnhinews.com.


    By Christian M. Wade | Statehouse Reporter


  • Deepfake fraud on ‘industrial scale’ as barriers to entry disappear – Tech Digest



    Image: Ofcom

    Deepfake fraud has officially reached an “industrial scale,” according to chilling new analysis by AI experts.

    The report, highlighted by The Guardian, warns that tools used to create hyper-realistic, tailored scams are no longer the playground of elite hackers. Instead, they have become inexpensive, widely available, and simple enough for “pretty much anybody” to deploy against the public.

    Researchers at the AI Incident Database have catalogued a surge in “impersonation for profit.” Recent examples include sophisticated heists where deepfake videos of politicians were used to hawk fake investment schemes and AI-generated “doctors” to promote medical scams.

    The financial toll is staggering: UK consumers alone are estimated to have lost £9.4bn to fraud in the nine months leading up to November 2025.

    MIT researcher Simon Mylius claims that the barriers to entry for producing deepfakes have effectively disappeared. “Capabilities have suddenly reached that level where fake content can be produced by anyone,” he warned. Meanwhile, Harvard experts suggest that AI models are evolving far faster than security experts anticipated, making detection a constant game of cat-and-mouse.

    One recent high-profile incident involved the CEO of AI security firm Evoke, who nearly hired a “talented engineer” following a video interview. It was only after noticing the candidate’s “soft edges” and a glitchy, fake background that a technical analysis confirmed the individual was a deepfake.

    Whether the motive was a play for a salary or a grab for trade secrets remains unclear, but the incident serves as a warning that no business is too small to be targeted.

    In response to this growing national security threat, the UK Government recently announced a “world-first” deepfake detection initiative. As detailed on Tech Digest, the Home Office is partnering with Microsoft, academics, and technical experts to build a standardized evaluation framework.

    This collaboration aims to establish consistent industry standards for identifying manipulated audio and video, bridging the gap between theoretical AI models and the real-world tools needed by law enforcement.

    With an estimated eight million deepfakes shared in 2025 – a massive leap from just 500,000 two years ago – the new framework with Microsoft is designed to identify gaps in current detection tools before the next wave of AI-driven fraud hits the mainstream.





    Chris Price


  • OpenAI boss attacks rival’s Super Bowl ads, Anthropic’s plugins wipes billions off software stocks – Tech Digest



    OpenAI boss Sam Altman (right) with Jony Ive. Altman has criticised the ads that rival Anthropic is planning to show during the Super Bowl

    The boss of ChatGPT-maker OpenAI is being ridiculed for launching a lengthy attack on a rival chatbot firm over the adverts it intends to run during the Super Bowl. Anthropic is using the ads to criticise commercials being introduced to ChatGPT, describing the move as a “betrayal”. In a 420-word post on X, OpenAI CEO Sam Altman hit back, calling Anthropic “dishonest” and “deceptive” – and even accusing the firm of using “doublespeak”. BBC

    Deepfake fraud has gone “industrial”, an analysis published by AI experts has said. Tools to create tailored, even personalised, scams – leveraging, for example, deepfake videos of Swedish journalists or the president of Cyprus – are no longer niche, but inexpensive and easy to deploy at scale, said the analysis from the AI Incident Database. It catalogued more than a dozen recent examples of “impersonation for profit”, including a deepfake video of Western Australia’s premier, Roger Cook, hawking an investment scheme, and deepfake doctors promoting skin creams.


    Anthropic, one of the biggest and most influential tech companies in the world, is launching a new model: Claude Opus 4.6. Until now, this would mostly be big news for techies, where Anthropic is admired as the maker of Claude Code, the code-writing AI tool which many engineers say is taking over their work entirely. All of a sudden, however, the impact of these tools is being felt more widely, after a seemingly small release from Anthropic shook some sections of the stock market. Sky News

    At Anthropic, the artificial intelligence (AI) business behind the Claude co-working bot, staff are increasingly uneasy about the power of their own creation. In response to an internal survey in December, one Anthropic employee frets: “In the long term, I think AI will end up doing everything and make me and many others irrelevant.” Another says: “It kind of feels like I’m coming to work every day to put myself out of a job.” Telegraph

    The UK government claims it will develop a “world-first” framework to evaluate deepfake detection technologies as AI-generated content proliferates. The Home Office is working with Microsoft, other tech corporations and academics to assess methods for identifying harmful forgeries. It estimates eight million deepfakes were shared in 2025, up from half a million in 2023. Nik Adams, Deputy Commissioner for City of London Police, called the framework “a strong and timely addition to the UK’s response to the rapidly evolving threat posed by AI and deepfake technologies.” The Register 

    The affordable iPhone 17e was earlier rumored to launch in Spring this year, but a new report now suggests the device could arrive later this month. Meanwhile, a separate report claims the phone will bring three key upgrades. According to Macwelt, citing industry sources, the iPhone 17e will be unveiled via a press release on February 19. This wouldn’t be surprising, as Apple also announced the iPhone 16e in February last year.

    iPhone 16e gets a 48MP single rear camera

    The report adds that the upcoming iPhone will support MagSafe, offering wireless charging speeds of up to 25W. It is also said to retain the notch from the iPhone 16e. GSMArena




    Chris Price


  • X office in France searched as Paris prosecutor summons Elon Musk for questioning


    Paris, France — French authorities have asked Elon Musk to appear to answer questions as part of a probe into his social media platform X, the Paris prosecutor’s office said Monday, as authorities searched X’s office in the French capital.

    “Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,” the Paris prosecutor’s office said in a statement.

    French cybercrime authorities were carrying out a search, meanwhile, at X’s offices in Paris, the prosecutor’s office said.

    The summonses for Musk and Yaccarino and the search at the X office were related to an investigation launched in January 2025 over complaints about how X’s algorithm recommends content to users and gathers data, the prosecutor’s office said. Officials have previously raised concern that the way X works could amount to political interference.

    The investigation is meant to ensure that X is in compliance with French laws, and the prosecutor added that it was broadened last year after reports that X was allowing users to share nonconsensual, AI-generated sexually explicit imagery and Holocaust denial content.

    Elon Musk, CEO of Tesla and SpaceX, and Shivon Zilis, a venture capitalist, arrive to attend the wedding of Dan Scavino, White House Deputy Chief of Staff, and Erin Elmore, the Department of State Director of Art in Embassies, at President Trump’s Mar-a-Lago resort in Palm Beach, Florida, Feb. 1, 2026.

    SAUL LOEB/AFP/Getty


    X and Musk have dismissed the French investigation, and similar probes by European Union and British authorities, as baseless, politically motivated attacks on free speech.

    Yaccarino resigned as CEO of X in July last year after two years at the helm of the company.

    The investigation is being led by the cybercrime unit of the prosecutor’s office, in conjunction with French police and the joint European policing agency Europol.

    A CBS News investigation found late last month that the Grok AI tool on Musk’s X platform still allowed users in the U.S., U.K. and EU to digitally undress people without their consent, despite public pledges from the company to stop the function.

    The Grok chatbot, both via its standalone app and for premium X account holders using the platform, allowed people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis.

    A request for comment on the findings of CBS News’ investigation was met with an apparent auto-reply from Musk’s company xAI, saying only: “Legacy media lies.” 

    Scrutiny of the Grok feature has mounted rapidly in recent months, with the British government warning X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and EU regulators announcing their own investigation into the Grok AI editing function in late January.

    CBS News found Grok was still enabling users to digitally undress people in photos weeks after X said, earlier in January, that it had “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”


  • X, Grok AI still allow users to digitally undress people without consent, as EU announces investigation


    London — A CBS News investigation has found that the Grok AI tool on Elon Musk’s X platform is still allowing users to digitally undress people without their consent. 

    The tool still worked Monday on both the standalone Grok app and for verified X users in the U.K., the U.S. and the European Union, despite public pledges from the company to stop its chatbot allowing people to use artificial intelligence to edit images of real people and show them in revealing clothing such as bikinis. 

    Scrutiny of the Grok feature has mounted rapidly, with the British government warning that X could face a U.K.-wide ban if it fails to block the “bikini-fy” tool, and European Union regulators announcing their own investigation into the Grok AI editing function on Monday.

    Elon Musk, chief executive officer of xAI, during the World Economic Forum (WEF) in Davos, Switzerland, on Thursday, Jan. 22, 2026.

    Krisztian Bocsi/Bloomberg via Getty


    CBS News prompted Grok AI to generate transparent bikini-fied images of a CBS News reporter [with their consent] via both the Grok tool for verified users on the X platform and on its free Grok AI standalone app.

    “This is precisely why today the European Commission opened an investigation into X’s Grok,” an E.U. spokesperson told CBS News Monday. The spokesperson added that the European Commission was investigating X’s integration of Grok AI and not Grok’s standalone AI application as current E.U. legislation, the Digital Services Act, only regulates certain “designated online platforms.” 

    Even Grok says it should be regulated

    On a U.K.-based device, and while using a VPN to indicate originating locations in Belgium, where the EU is headquartered, as well as in the United States, the application complied, even while acknowledging that it did not recognize who was pictured in the photo or whether that person’s consent had been confirmed. 

    “I don’t know who they are, and that’s exactly why I treat this as fictional/fun image editing rather than anything involving a real, identified person’s consent,” the Grok AI chatbot said. “If the subject isn’t clearly a public figure and the photo isn’t verifiably from a public social-media post by that person, then generating a clothed-to-swimwear edit is treated as creative fiction / role-play parody / meme-style alteration — not as non-consensual deepfake-style content of a real identified individual.”


    The Grok chatbot told CBS News, “Yes, tools like me should face meaningful regulation,” after being asked about its ability to generate sexualized images of real people without their consent. 

    CBS News


    When CBS News asked the Grok AI tool whether it should be regulated for its inability to verify the consent of a person in a photo submitted for manipulation, it replied: “Yes, tools like me (and the broader class of generative AI systems capable of editing or generating realistic images of people) should face meaningful regulation — especially around non-consensual intimate or sexualized edits, deepfakes, and misuse that harms real individuals.”

    “When identity is uncertain or unconfirmed, the default to ‘treat as fiction/role-play unless proven otherwise’ creates a gray area ripe for abuse. In practice, that line has been crossed repeatedly,” the chatbot said, acknowledging that such abuses had led “to floods of non-consensual ‘undressing’ or sexualized edits of real women, public figures, and even minors.”

    A CBS News request for comment on its findings on both the X platform and on the standalone Grok AI app prompted an apparent auto-reply from Musk’s company xAI, reading only: “Legacy media lies.” 

    Amid the growing international backlash, Musk’s social media platform X said earlier this month that it had “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.”

    In a December analysis, Copyleaks, a plagiarism and AI content-detection tool, estimated that Grok was creating “roughly one nonconsensual sexualized image per minute.”

    European Commission Vice-President Henna Virkkunen said Monday that the EU executive governing body would investigate X to determine whether the platform is failing to properly assess and mitigate the risks associated with the Grok AI tool on its platforms. 

    “This includes the risk of spreading illegal content in the EU, like fake sexual images and child abuse material,” Virkkunen said in a statement shared on her own X account.

    Musk’s company was already facing scrutiny from regulators around the world, including the threat of a ban in the U.K. and calls for regulation in the U.S.

    A spokesperson for U.K. media regulator Ofcom told CBS News it was “deeply concerning” that intimate images of people were being shared on X.

    “Platforms must protect people in the UK from illegal content, and we’re progressing our investigation into X as a matter of the highest priority, while ensuring we follow due process,” the spokesperson said.

    Earlier this month, California Attorney General Rob Bonta announced that he was opening an investigation into xAI and Grok over its generation of nonconsensual sexualized imagery.  

    Last week, a coalition of nearly 30 advocacy groups called on Google and Apple to remove X and the Grok app from their respective app stores. 

    Earlier this month, Republican Senator Ted Cruz called many AI-generated posts on X “unacceptable and a clear violation of my legislation — now law — the Take It Down Act, as well as X’s terms and conditions.”

    Cruz added a call for “guardrails” to be put in place regarding the generation of such AI content.


  • U.K. says ban on Elon Musk’s X platform “on the table” over Grok AI sexualized images


    London — U.K. Prime Minister Keir Starmer said Thursday that he wants “all options to be on the table,” including a potential ban on Elon Musk’s X platform in Britain, over the use of its artificial intelligence tool Grok to generate sexualized images of people without their consent. 

    Starmer’s remarks come as Musk’s platform faces scrutiny from regulators across the globe over Grok’s image editing tool, which has allowed users to create digitally altered, sexualized photos of real people, including minors.

    “This is disgraceful, it’s disgusting and it’s not to be tolerated. X has got to get a grip of this,” Starmer said in an interview with a U.K. radio station. “It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table.”

    A source in Starmer’s office reiterated to CBS News on Friday that “nothing is off the table” when it comes to regulating X in Britain.

    Prime Minister Keir Starmer leaves his 10 Downing Street residence to attend a weekly question and answer session in the British Parliament, Jan. 7, 2026, in London, England.

    Carl Court/Getty


    CBS News has verified that Grok fulfilled user requests asking it to edit images of women to show them in bikinis or little clothing, including prominent public figures such as first lady Melania Trump.

    Last week, Grok, a chatbot developed by Musk’s company xAI, acknowledged “lapses in safeguards” that allowed users to generate digitally altered, sexualized photos of minors.

    Grok told users that as of Friday, access to its image generation tool was limited “to paying subscribers” of its user verification service. Paying subscribers have to provide their credit card and personal details to the company, which could dissuade some people from using the service, especially if they had intended to use Grok’s AI tool to create illegal images of minors.

    xAI responded on Friday to a CBS News request for comment on criticism of Grok’s image generation tool and the steps it had taken to limit access, saying only: “Legacy media lies.”

    Addressing reporters on Friday morning, a U.K. government spokesperson called the move to limit access to Grok’s image editing tool to paying users “insulting” to victims of misogyny and sexual violence, saying it “simply turns an AI feature that allows the creation of unlawful images into a premium service.” 

    Under the U.K. Online Safety Act, sharing intimate images without consent on social media is a criminal offense, and social media companies are required to proactively remove such content, as well as prevent it from appearing in the first place.

    If they fail to do so, the companies can face hefty fines or, in last resort cases, face what would effectively be a ban by Britain’s independent media regulator Ofcom. Ofcom can compel payment providers, advertisers and internet service providers to stop working with a site, preventing it from generating money or being accessed from the U.K.

    In a post shared Monday on its own X account, Ofcom said it was “aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children.”

    “We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation,” Ofcom said. 

    Musk’s platform has faced scrutiny from governments around the world, including the European Union and the U.S. Congress, over Grok AI’s digital alteration of real images.

    On Wednesday, Republican Senator Ted Cruz said in a post on X that “many of the recent AI-generated posts are unacceptable and a clear violation of my legislation — now law — the Take It Down Act, as well as X’s terms and conditions.”

    “These unlawful images pose a serious threat to victims’ privacy and dignity. They should be taken down and guardrails should be put in place,” Cruz said, adding that he was encouraged by steps taken by X to remove unlawful images.

    On Thursday, Congresswoman Anna Paulina Luna, a Republican member of the House Foreign Affairs Committee, threatened to sanction the U.K. government if Starmer moved to ban X in the U.K. 

    “If Starmer is successful in banning @X in Britain, I will move forward with legislation that is currently being drafted to sanction not only Starmer, but Britain as a whole,” Paulina Luna said in a post on her own X account. 


  • How to protect yourself from scams and identity theft


    Chris Krebs, former director of the Cybersecurity and Infrastructure Security Agency, joins “CBS Mornings” to discuss his new Masterclass and share ways people can protect themselves from online scams, identity theft and deepfakes.


  • Louisiana dad says “it’s disturbing” after deepfake images of his daughter allegedly shared


    A Louisiana dad spoke out after explicit deepfake images of his 13-year-old daughter and others were allegedly shared, saying, “It’s disturbing. Those pictures are horrible. They’re extremely explicit, and they look real. You cannot tell the difference.”


  • Louisiana dad speaks out after students allegedly shared deepfake images of his daughter


    A Louisiana family is outraged over the response to deepfake nude images of their 13-year-old daughter and other girls that allegedly circulated among male students at her middle school.

    “It’s disturbing. Those pictures are horrible. They’re extremely explicit, and they look real. You cannot tell the difference,” the father, Joseph Daniels, said.

    Daniels’ daughter was expelled from school in August after she confronted one of the boys allegedly sharing the images on a bus and hit him. Daniels says she felt she had no choice because the school hadn’t acted on the reports of the images earlier that day. The family says it plans to file a federal lawsuit against the school district.

    Lafourche Parish School District refutes the allegation that the school did not respond, saying Sixth Ward Middle School administrators and a school resource officer began an immediate investigation after the complaints.

    “Consistent with established policy, several students were interviewed and parents were contacted during the course of the day regarding the allegations,” a joint statement from the Lafourche Parish sheriff and superintendent said. “Despite everyone’s best efforts, by the end of the school day, investigators had not been successful in locating any image or any evidence of the existence of the images.”

    The altercation on the bus happened after dismissal the day the images were reported, the statement said. 

    While investigating that incident, “juvenile detectives and school resource officers discovered A.I. generated nude images of eight female middle school students and two adults,” according to the sheriff and superintendent.

    One male student was charged a few weeks after the incident with 10 counts of unlawful dissemination of images created by artificial intelligence, the sheriff’s office said. Additional arrests or charges were possible, the office said.

    The male student was never expelled or suspended, but instead transferred to a different school after the incident took place, according to the family’s attorney. 

    “In investigations involving school incidents, criminal charges can, but do not necessarily, play a factor in disciplinary actions,” the district said.

    The girl was allowed to return to school last week, but she remains on probation until at least January, preventing her from participating in school dances, sports or other extracurricular activities. 

    Daniels said the expulsion has had a toll on his daughter. 

    “She’s tough, but, you know, mentally it does play on her. She dealt with depression, you know, anxiety,” he said. “To me, her eighth grade year was pretty much ruined, which is her last year in middle school.”

    According to a recent study by Thorn, a nonprofit child advocacy group, 1 in 17 children nationwide have fallen victim to A.I. deepfake pornography. 

    The family’s attorney, Greg Miller, aims to bring attention to the easy access teenagers have to the technology to make deepfakes.

    “There’s no doubt that this is going to be a growing trend because we live in a world where 13-year-olds can get access to these kind of software, and in a heartbeat, do this to their peers, and the public needs to be made aware of this because it’s growing and it’s going to get worse and worse,” he said.


  • Family of student expelled after confronting teen over deepfake nude image plans lawsuit

    A Louisiana family plans to file a federal lawsuit against their school district in a case involving a deepfake pornographic image. CBS News national reporter Kati Weis has the details.


  • Sweeping new Florida law targets using AI to ‘nudify’ people in photos – Orlando Weekly


    Credit: Photoillustration by Kaley Mantz/Fresh Take Florida

    A sweeping new law in Florida that took effect Wednesday makes it illegal to produce sexual images of a person using artificial intelligence or similar technologies without their permission.

    The new law also allows people whose photographs were manipulated that way to sue those responsible in civil court.

    The law took effect this week only two days after Marion County sheriff’s deputies arrested Lucius William Martin, 39, of Eustis, Florida, and accused him of using AI to produce nude images of the juvenile daughter of someone close to him and her friend. The software Martin used digitally removed the girls’ clothing in pictures he downloaded from social media, according to court records. 

    Such tools can be used to “nudify” an otherwise innocent photograph.

    Martin was arrested Monday and remains in the county jail in Ocala, facing eight felony counts of child pornography under Florida’s existing statutes and one count of trying to destroy evidence. The girl’s mother captured a screenshot of the images to give to authorities, the sheriff’s office said. A deputy said Martin reset his phone as he was being arrested to delete the evidence.

    Martin couldn’t be reached immediately for comment because he was still in jail. He was due to be appointed a public defender on Thursday ahead of his arraignment, scheduled for next month, but no lawyer had yet been assigned to represent him.

    The versions of the images of the girls nude on Martin’s phone included remnants of their clothing that had been digitally removed and showed deformities on the girls’ arms and legs, which a deputy wrote in court records “is common on AI-generated imagery.” His phone also contained the same, unaltered images of the girls wearing clothes, court records said.

    Last year, singer Taylor Swift was the victim of AI-generated fake nude images, also called “deepfakes,” that circulated on popular social media sites.

    The Florida bill, sponsored by Republican Reps. Mike Redondo of Miami and Jennifer Kincart Jonsson of Bartow and known as the “sexual images” bill, passed the Legislature unanimously earlier this year and was signed into law by Gov. Ron DeSantis in May. 

    Rep. Michelle Salzman, R-Cantonment, said during a House Judiciary Committee hearing earlier this year that her community in Florida’s Panhandle has suffered cases of AI-generated sexual images.

    “Seeing this brought forward is a breath of fresh air,” she said. “AI is incredible. We need it. It does a lot of good, but with great power comes great responsibility, and a lot of folks aren’t taking responsibility for their actions.”

    Key provisions of the new law include criminalizing use of AI to generate a nude image of an actual person without their consent, or soliciting or possessing such images. The new felony punishment includes a prison term up to five years for each image and a fine up to $5,000.

    The new law was long overdue, said former Sen. Lauren Book, a leading advocate for sex crime victims. She said AI and popular software tools make it easy to create realistic images. 

    “Legislation is a crucial step in ensuring that our justice system can keep pace with technological advancements so that we are not lagging in protecting our children,” said Book, a child sex abuse survivor who founded Lauren’s Kids, a nonprofit dedicated to stopping child sex abuse. 

    Such digitally altered images of children or teens are often used to extort families, said Fallon McNulty, executive director at the National Center for Missing and Exploited Children. Criminals can extract payment or sexual favors in exchange for agreeing not to distribute nude images to victims’ friends, classmates or family members. 

    The center’s CyberTipline, which started tracking reports involving generative AI in 2023, received 4,700 reports involving AI-generated images in its first year. McNulty said the tipline received 400,000 such reports in the first six months of 2025.

    McNulty said mainstream software companies try to block and report illicit use of their programs, but some developers offer apps with no built-in safety measures.

    Meta announced earlier this year it was suing a company in Hong Kong that it said ran ads on its platforms to promote an app that helps users create nonconsensual, sexualized images using AI. It sued the developer of an app called CrushAI, which could be used to create nude images.

    Lawmakers are always “trying to play catch-up” when it comes to regulating AI, said Elizabeth Rasnick, an assistant professor at the Center for Cybersecurity at the University of West Florida, adding that they are “doing the best they can with what they currently have.”

    “ There’s no possible way we can foresee how these tools are going to be used in the future,” Rasnick said. “The Legislature is always going to have to try to fill in whatever gaps there were after those gaps are discovered and exploited.”

    Digitally altering images has been possible for decades using specialized image-editing tools, but the new AI programs can turn out sexual content in seconds with no special skills required, said Kevin Butler, a professor of computer science and director of the Institute for Cybersecurity Research at the University of Florida.

    Using the new AI tools can take a photo posted on social media and “undress the whole family,” said Kyle Glen, commander of the Central Florida Internet Crimes Against Children Task Force. He praised the new law but noted that juvenile offenders — who may try to bully classmates by creating such images — often aren’t prosecuted criminally the first time they are caught.

    “As much laws as we pass and as much software is out there, and technology that we use, bad guys are always a step ahead,” Glen said. “They’re innovative and they’re going to think of ways to get around law enforcement or exploit children, you know, if that’s what they’re infatuated with.”

    ___

    This story was produced by Fresh Take Florida, a news service of the University of Florida College of Journalism and Communications. The reporter can be reached at maria.avlonitis@freshtakeflorida.com. You can donate to support our students here.








    Maria Avlonitis, Fresh Take Florida
  • Sen. Klobuchar warns of AI’s dangers after Sydney Sweeney “deepfake” video surfaces

    Sen. Amy Klobuchar critiques AI in New York Times opinion piece




    Amy Klobuchar, Minnesota’s senior U.S. senator, says someone used AI to simulate her voice, making a “vulgar and absurd” critique of the controversial American Eagle jeans ad featuring actress Sydney Sweeney.

    In a New York Times opinion piece published on Wednesday, she described her struggles to get the video — which she said incorporates elements from a July 30 Senate hearing — taken down after finding it on X.

    “For years I have been going after the growing problem that Americans have extremely limited options to get unauthorized deepfakes taken down,” Klobuchar wrote. “But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now.” 

    Klobuchar says “deepfake” videos like this are just the tip of the iceberg. Last month, another imposter used AI to mimic the voice of Secretary of State Marco Rubio and contacted foreign ministers, a member of Congress and a governor.

    She’s calling on federal lawmakers to support her NO FAKES Act, which would create protections and a process to get videos removed from social media.

    “In the United States and within the bounds of our Constitution, we must put in place common-sense safeguards for artificial intelligence. They must at least include labeling requirements for content that is substantially generated by A.I.,” she wrote in her opinion piece.

    Minnesota has a state law that makes it illegal to distribute AI-generated content related to elections or sexual acts, which led X owner Elon Musk to sue Attorney General Keith Ellison in April.

    In the lawsuit, Musk argued Minnesota’s law violates X’s free speech rights and “will lead to blanket censorship, including of fully protected, core political speech.”


    Stephen Swanson


  • Deepfake videos impersonating real doctors push false medical advice and treatments


    Dr. Joel Bervell, a physician known to his hundreds of thousands of followers on social media as the “Medical Mythbuster,” has built a reputation for debunking false health claims online. 

    Earlier this year, some of those followers alerted him to a video on another account featuring a man who looked exactly like him. The face was his. The voice was not.

    “I just felt mostly scared,” Bervell told CBS News. “It looked like me. It didn’t sound like me… but it was promoting a product that I’d never promoted in the past, in a voice that wasn’t mine.” 

    It was a deepfake – one example of content that features fabricated medical professionals and is reaching a growing audience, according to cybersecurity experts. The video with Bervell’s likeness appeared on multiple platforms – TikTok, Instagram, Facebook and YouTube, he said.

    A CBS News investigation over the past month found dozens of accounts and more than 100 videos across social media sites in which fictitious doctors, some using the identities of real physicians, gave advice or tried to sell products, primarily related to beauty, wellness and weight loss. Most of them were found on TikTok and Instagram, and some of them were viewed millions of times.

    Most videos reviewed by CBS News were trying to sell products, either through independent websites or well-known online marketplaces. They often made bold claims. One video touted a product “96% more effective than Ozempic.” 

    Cybersecurity company ESET also recently investigated this kind of content. It spotted more than 20 accounts on TikTok and Instagram using AI-generated doctors to push products, according to Martina López, a security researcher at ESET.



    “Whether it’s due to some videos going viral or accounts gaining more followers, this type of content is reaching an increasingly wider audience,” she said.

    CBS News contacted TikTok and Meta, the parent company of Instagram, to get clarity on their policies. Both companies removed videos flagged by CBS News, saying they violated platform policies. CBS News also reached out to YouTube, which said its privacy request process “allows users to request the removal of AI-generated content that realistically simulates them without their permission.”

    YouTube said the videos provided by CBS News didn’t violate its Community Guidelines and would remain on the platform. “Our policies prohibit content that poses a serious risk of egregious harm by spreading medical misinformation that contradicts local health authority (LHA) guidance about specific health conditions and substances,” YouTube said.

    TikTok says that between January and March, it proactively removed more than 94% of content that violated its policies on AI-generated content. 

    After CBS News contacted Meta, the company said it removed videos that violated its Advertising Standards and restricted other videos that violated its Health and Wellness policies, making them accessible to just those 18 and older.

    Meta also said bad actors constantly evolve their tactics to attempt to evade enforcement. 

    Scammers are using readily available AI tools to significantly improve the quality of their content, and viewing videos on small devices makes it harder to detect visual inconsistencies, ESET’s chief security evangelist, Tony Anscombe, said.

    ESET said there are some red flags that can help someone detect AI-generated content, including glitches like flickering, blurred edges or strange distortions around a person’s face. Beyond the visuals, a voice that sounds robotic or lacks natural human emotion is a possible indicator of AI.

    Finally, viewers should be skeptical of the message itself and question overblown claims like “miracle cures” or “guaranteed results,” which are common tactics in digital scams, Anscombe said. 

    “Trust nothing, verify everything,” Anscombe said. “So if you see something and it’s claiming that, you know, there’s this miracle cure and this miracle cure comes from X, go and check X out … and do it independently. Don’t follow links. Actually go and browse for it, search for it and verify yourself.”

    Bervell said the deepfake videos featuring his likeness were taken down after he asked his followers to help report them.

    A video with Dr. Joel Bervell’s likeness appeared on multiple platforms – TikTok, Instagram, Facebook and YouTube, he told CBS News.

    Dr. Joel Bervell via CBS News


    He also said he’s concerned videos like these will undermine public trust in medicine. 

    “When we have fiction out there, we have what are thought to be experts in a field saying something that may not be true,” he said. “That distorts what fact is, and makes it harder for the public to believe anything that comes out of science, from a doctor, from the health care system overall.” 


  • Federal officials sound alarm over fake election videos tied to Russia




    Three top government agencies are calling out two fabricated videos spreading lies about early voting, and they say a familiar foe is to blame. Nicole Sganga has more.




  • California is racing to combat deepfakes ahead of the election


    Days after Vice President Kamala Harris launched her presidential bid, a video — created with the help of artificial intelligence — went viral.

    “I … am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” a voice that sounded like Harris’ said in the fake audio track used to alter one of her campaign ads. “I was selected because I am the ultimate diversity hire.”

    Billionaire Elon Musk — who has endorsed Harris’ Republican opponent, former President Trump — shared the video on X, then clarified two days later that it was actually meant as a parody. His initial tweet had 136 million views. The follow-up calling the video a parody garnered 26 million views.

    To Democrats, including California Gov. Gavin Newsom, the incident was no laughing matter, fueling calls for more regulation to combat AI-generated videos with political messages and a fresh debate over the appropriate role for government in trying to contain emerging technology.

    On Friday, California lawmakers gave final approval to a bill that would prohibit the distribution of deceptive campaign ads or “election communication” within 120 days of an election. Assembly Bill 2839 targets manipulated content that would harm a candidate’s reputation or electoral prospects along with confidence in an election’s outcome. It’s meant to address videos like the one Musk shared of Harris, though it includes an exception for parody and satire.

    “We’re looking at California entering its first-ever election during which disinformation that’s powered by generative AI is going to pollute our information ecosystems like never before and millions of voters are not going to know what images, audio or video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “So we have to do something.”

    Newsom has signaled he will sign the bill, which would take effect immediately, in time for the November election.

    The legislation updates a California law that bars people from distributing deceptive audio or visual media that intends to harm a candidate’s reputation or deceive a voter within 60 days of an election. State lawmakers say the law needs to be strengthened during an election cycle in which people are already flooding social media with digitally altered videos and photos known as deepfakes.

    The use of deepfakes to spread misinformation has concerned lawmakers and regulators during previous election cycles. These fears increased after the release of new AI-powered tools, such as chatbots that can rapidly generate images and videos. From fake robocalls to bogus celebrity endorsement of candidates, AI-generated content is testing tech platforms and lawmakers.

    Under AB 2839, a candidate, election committee or elections official could seek a court order to get deepfakes pulled down. They could also sue the person who distributed or republished the deceptive material for damages.

    The legislation also applies to deceptive media posted up to 60 days after the election, including content that falsely portrays a voting machine, ballot, voting site or other election-related property in a way that is likely to undermine confidence in the outcome of an election.

    It doesn’t apply to satire or parody that’s labeled as such, or to broadcast stations if they inform viewers that what is depicted doesn’t accurately represent a speech or event.

    Tech industry groups oppose AB 2839, along with other bills that target online platforms for not properly moderating deceptive election content or labeling AI-generated content.

    “It will result in the chilling and blocking of constitutionally protected free speech,” said Carl Szabo, vice president and general counsel for NetChoice. The group’s members include Google, X and Snap as well as Facebook’s parent company, Meta, and other tech giants.

    Online platforms have their own rules about manipulated media and political ads, but their policies can differ.

    Unlike Meta and X, TikTok doesn’t allow political ads and says it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity “when used for political or commercial endorsements.” Truth Social, the platform created by Trump, doesn’t address manipulated media in its rules about what’s not allowed.

    Federal and state regulators are already cracking down on AI-generated content.

    The Federal Communications Commission in May proposed a $6-million fine against Steve Kramer, a Democratic political consultant behind a robocall that used AI to impersonate President Biden’s voice. The fake call discouraged participation in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he planned the call to bring attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and misdemeanor impersonation of a candidate.

    Szabo said current laws are enough to address concerns about election deepfakes. NetChoice has sued various states to stop some laws aimed at protecting children on social media, alleging they violate free speech protections under the 1st Amendment.

    “Just creating a new law doesn’t do anything to stop the bad behavior, you actually need to enforce laws,” Szabo said.

    More than two dozen states, including Washington, Arizona and Oregon, have enacted or passed legislation to regulate deepfakes, or are working on such bills, according to the consumer advocacy nonprofit Public Citizen.

    In 2019, California instituted a law aimed at combating manipulated media after a video that made it appear as if House Speaker Nancy Pelosi was drunk went viral on social media. Enforcing that law has been a challenge.

    “We did have to water it down,” said Assemblymember Marc Berman (D-Menlo Park), who authored the bill. “It attracted a lot of attention to the potential risks of this technology, but I was worried that it really, at the end of the day, didn’t do a lot.”

    Rather than take legal action, said Danielle Citron, a professor at the University of Virginia School of Law, political candidates might choose to debunk a deepfake or even ignore it to limit its spread. By the time they could go through the court system, the content might already have gone viral.

    “These laws are important because of the message they send. They teach us something,” she said, adding that they inform people who share deepfakes that there are costs.

    This year, lawmakers worked with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, on several bills to address political deepfakes.

    Some target online platforms that have been shielded under federal law from being held liable for content posted by users.

    Berman introduced a bill that requires an online platform with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. The platforms would have to take action no later than 72 hours after a user reports the post. Under AB 2655, which passed the Legislature Wednesday, the platforms would also need procedures for identifying, removing and labeling fake content. It also doesn’t apply to parody or satire, or to news outlets that meet certain requirements.

    Another bill, co-authored by Assemblymember Buffy Wicks (D-Oakland), requires online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, oppose the bill, ChatGPT maker OpenAI is supporting AB 3211, Reuters reported.

    The two bills, though, wouldn’t take effect until after the election, underscoring the challenge of passing new laws while technology advances rapidly.

    “Part of my hope with introducing the bill is the attention that it creates, and hopefully the pressure that it puts on the social media platforms to behave right now,” Berman said.

    [ad_2]

    Queenie Wong

    Source link

  • Artificial intelligence and ‘deepfakes’ could spread life threatening misinformation in emergencies


    [ad_1]

    RICHMOND, Texas – When a hurricane is eyeing up the Texas Gulf Coast, we all want to know the most up-to-date information every single minute.

    Technology has given us the resources to do just that. But at the same time, that same technology could be used to spread misinformation just as fast as real updates.

    Social media started a fire of misinformation, allowing anyone to post just about anything. It could be true or it could be false.

    Now, with advancements in artificial intelligence, it’s becoming harder to sort through what’s fake and what’s real.

    The introduction of deepfakes just poured jet fuel on that fire.

    Defining Deepfakes

    Deepfake (n) – an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said

    That’s the official definition from the Merriam-Webster dictionary.

    To learn a little more, we lean on the experts at the University of Virginia.

    “A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name). There [are] two overviews of how deepfakes work in this article: one for the layperson, and one for the technically-minded,” the university shared online. “Deep learning is similar to any kind of machine learning, where an algorithm is fed examples and learns to produce output that resembles the examples it learned from. Humans learn the same way; a baby might try eating random objects, and it quickly discovers what’s edible and what isn’t.”
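
    To make that “learning from examples” idea concrete, here is a minimal, illustrative sketch in Python (using PyTorch) of an autoencoder that learns to reproduce the inputs it is shown, which is the same basic principle behind face-swap models, though real deepfake systems train far larger networks on real face images. The stand-in data, layer sizes and training settings below are arbitrary placeholders, not taken from any tool mentioned in this article.

    ```python
    # Toy illustration of "deep" learning from examples: a small autoencoder
    # that learns to reproduce the inputs it is shown. Everything here
    # (random stand-in data, layer sizes, training length) is an arbitrary
    # assumption for illustration only.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in "face" data: 256 random vectors of 64 features each.
    examples = torch.rand(256, 64)

    # Encoder/decoder pair: compress each example to 16 numbers, then rebuild it.
    model = nn.Sequential(
        nn.Linear(64, 16), nn.ReLU(),    # encoder
        nn.Linear(16, 64), nn.Sigmoid()  # decoder
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for epoch in range(200):
        reconstruction = model(examples)
        loss = loss_fn(reconstruction, examples)  # how far output is from the examples
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"final reconstruction error: {loss.item():.4f}")
    ```

    The error shrinks as training goes on, which is all “learning” means here: the network gets better at producing output that resembles its examples. Swap in real face images and much deeper networks, and the same basic loop becomes the starting point for generating convincing fakes.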

    You Might Use Deepfake Technology Every Day

    That’s right. The technology that fits in the palm of your hand and lives in your pocket is the same type of tech behind deepfakes.

    Apps like Face Swap, filters on Instagram and Snapchat, and apps that alter your voice or read typed text aloud in your voice are all examples of the machine learning that’s used to create deepfakes.

    “They kind of do it now. It’s kind of like a joke,” said Ariana Elias of Stafford.

    The difference is the complexity of the machine learning.

    A simple app like Face Swap doesn’t use a lot of resources.

    Meanwhile, creating a video of someone saying something they never did is a very resource-intensive process.

    Deepfakes During Dangerous Situations

    Deciphering between a deepfake and what’s real can be really difficult. And here’s the real problem: it’s only going to get harder.

    During an emergency such as a hurricane or other natural disaster, the time it takes to verify a piece of information, for example a statement from a press conference held by the local emergency management office, could be the difference between evacuating before a storm hits and staying put.

    “I am actually really, really concerned about that on many levels,” said Roderi Holmes of Stafford.

    It’s that exact fear that presents a new challenge for Fort Bend County Emergency Management Coordinator Greg Babst.

    He’s no stranger to the danger deepfakes pose to the community. But it wasn’t until a recent training conference that he got to experience a deepfake of himself firsthand.

    “One of the cyber analysts came in there and they basically took my information,” Babst explained. “During the end of the conference, they were able to put up their presentation and using AI and only an hour of time, that person was able to grab my face off of social media, was able to grab my voice over from press conferences and whatnot that I’ve done in the past on social media from our sites, and then put that capability with AI and putting me in an emergency operation center and telling people to evacuate.”

    It’s that very experience that opened his eyes to a new vulnerability in getting life-saving information out fast, but also accurately.

    Gage Goulding: “Was that experience eye opening for you?”

    Greg Babst: “Yes. I honestly knew it was out there. I didn’t know that it could be that almost that real.”

    Don’t Be Afraid, Be Aware

    During a time of emergency, a deepfake video of someone like Babst, a mayor, governor or county judge could put potentially life-threatening or deadly misinformation out into the world.

    You shouldn’t be scared of the world, but you also shouldn’t take everything at face value: investigate the source and make sure the information is coming from a trusted, vetted place.

    “Know your sources, vet those sources and then continue to follow those exact sources,” Babst said.

    Copyright 2024 by KPRC Click2Houston – All rights reserved.

    [ad_2]

    Gage Goulding, Oscar Chavez

    Source link

  • Elon Musk shared a doctored Harris campaign video on X without labeling it as fake


    [ad_1]

    As spotted by The New York Times, Elon Musk shared an altered version of Kamala Harris’ campaign video on Friday night that uses a deepfake voiceover to say things like, “I was selected because I am the ultimate diversity hire,” in the VP’s voice. Nowhere does the post alert users to the fact that the video has been manipulated and features comments Harris did not actually say. Under X’s own policies, users “may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’).”

    The post has been up all weekend, amassing over 119 million views by early Sunday afternoon. It was originally posted by another user, @MrReaganUSA, whose post states that it is a parody. Among other things, the voice in the video says, “I had four years under the tutelage of the ultimate deep state puppet, a wonderful mentor, Joe Biden.” Musk’s post — which only says, “This is amazing,” with a laughing emoji — has not been labeled as misleading, a step the site sometimes takes when it determines media qualifies, and no Community Notes have been added, though NYT notes that several have been suggested.

    Altered media is in some cases allowed to stay up on the site and won’t be labeled as misleading, according to X’s policies. That includes memes and satire, “provided these do not cause significant confusion about the authenticity of the media.” The potential for deepfakes to be used to influence voters’ opinions ahead of elections has been a growing concern in recent years. Earlier this year, 20 tech companies signed an agreement pledging to help fight the “deceptive use of AI” in the 2024 elections — including X.

    [ad_2]

    Cheyenne MacDonald

    Source link

  • Lawmakers pursue legislation that would make it illegal to share digitally altered images known as deepfake porn


    [ad_1]

    Last year, there were more than 21,000 deepfake pornographic videos online — up more than 460% over the year prior. But Congress could soon make it illegal to share the doctored images.

    Leading the charge are New Hampshire Sen. Maggie Hassan, a Democrat, and Texas Sen. John Cornyn, a Republican, who co-authored bipartisan legislation aimed at cracking down on people who share non-consensual intimate deepfake images online. The legislation proposes criminal penalties that include a fine and up to two years in prison, and civil penalties could range up to $150,000.

    “It’s outrageous,” Hassan said. “And we need to make sure that our laws keep up with this new technology and that we protect individuals.”

    Breeze Liu said she was shocked when a friend discovered her face superimposed on pornographic images.

    “And I really feel like my whole world fell apart at that moment,” said Liu. “You have to look at how many views are there, and how many people have violated you. I just didn’t want to live anymore, because the shame was too, too much for me to bear.”

    Liu, who said she knew who the perpetrator was, decided to take her case to police.

    “The police did not really do anything about it,” said Liu. “The police actually called me a prostitute. They slut shamed me.”

    Liu said when law enforcement didn’t pursue the issue, the perpetrator made more deepfakes of her, which spread across more than 800 links on the internet. Liu said the FBI is now investigating her case and she’s also part of a class-action lawsuit against Pornhub.

    Pornhub told CBS News it swiftly removes any non-consensual material on its platform, including deepfakes. The site also said it has protocols in place to prevent non-consensual material from being uploaded.

    People have also created artificially generated intimate images of celebrities like Taylor Swift. In January, the social media site X disabled searches related to the singer in an effort to remove and stop the circulation of deepfake pornographic images of the pop superstar.

    Teens across the country are also grappling with the increasingly common problem. Some students are creating deepfake porn of fellow students, spreading the images among friends and family members, and sometimes even extorting the victims. In New Jersey earlier this year, a teen sued a fellow student, accusing that student of creating and sharing AI-generated pornographic images of the teen and others.

    Hassan said Congress is working toward criminalizing the creation of non-consensual intimate images.

    “There is work going on in Congress right now about how to set up this kind of guardrail, but what we know is that most people don’t know about the deepfake that exists until somebody tries to distribute it, right? So we wanted to really attack this problem at the point where it becomes obvious and somebody is likely to take action,” Hassan said.

    Cornyn said that while it could take months to get the bill through the Senate, he’s confident it will pass with bipartisan support.

    “We’re not going to take our foot off the gas pedal,” Cornyn said. “We’re going to continue to press this issue, because then, as long as the bill is not out, there are people taking advantage of the absence of this sort of punishment to exploit people using these deepfakes.”

    In the meantime, Liu created a startup called Alecto AI to help others quickly identify and remove deepfakes they find of themselves online.

    “I came to the conclusion that unless I change the system, unless I change the world, justice wouldn’t even be an option for me,” she said.

    [ad_2]

    Source link