ReportWire

Tag: AI generated

  • Book on Charlie Kirk death not proof shooting was staged

    Hours after conservative activist Charlie Kirk was shot and killed Sept. 10 at Utah Valley University, internet users found a book on Amazon that detailed the assassination — with a publication date preceding the shooting.

    “Can someone honestly explain to me how a book titled ‘The Shooting of Charlie Kirk: A Comprehensive Account of the Utah Valley University Attack, the Aftermath, and America’s Response’ was published on Amazon.com on SEPTEMBER 9TH, when the event took place on SEPTEMBER 10TH??” one user wrote Sept. 11 on X.

    “Who Staged Charlie Kirk’s Assassination?” another X user posted. “Author credited as Anastasia J. Casey. Amazon listing (Kindle edition) had a publication date showing September 9, 2025, which is one day before the reported date of the shooting. The listing has reportedly been removed. So who is Anastasia J. Casey?”

    A book with that title by an author listed as Anastasia J. Casey was briefly available on Amazon, and the site showed the book was published Sept. 9.

    But that was a technical error, Amazon said in an email to PolitiFact. The book, which was created using artificial intelligence, was published Sept. 10 after the fatal shooting. The erroneous publication date is not evidence that Kirk’s shooting was planned or staged. The e-book — initially sold for $6.99 — has since been removed from Amazon’s website.

    “Due to a technical issue, the date of publication that had been displayed for this title, while it was briefly listed, was incorrect, and we apologize for any confusion this may have caused,” Amazon said in its statement. “The title was published late in the afternoon on September 10th.”

    The author’s name also appears to be fabricated. We found no information about the purported author, Anastasia J. Casey; the book about Kirk’s shooting is the only one listed under that name.

    Amazon said the book was removed because it violated the company’s content rules.

    The AI-generated book draws from information that was already available online, such as news reports and public statements from law enforcement officials. And this isn’t the first of its kind. We previously fact-checked a similar instance of an AI-generated e-book that appeared online in the wake of the 2023 Maui wildfires.

    AI-generated books have become increasingly common on Amazon because tools such as ChatGPT let users create books in hours, including titles that exploit breaking news events.

    Users can then self-publish those books, without a literary agent or publishing house, with Amazon’s Kindle Direct Publishing service.

    In 2023, Amazon introduced a policy requiring Kindle Direct Publishing authors to disclose whether their content, including the text, cover art and product description, is AI-generated.

    “The Shooting of Charlie Kirk” did not appear to disclose this information.

    A book on Amazon titled “The Shooting of Charlie Kirk” that had an inaccurate publication date is not evidence the event was staged. We rate this claim False. 

    RELATED: ‘Rough road ahead’: Charlie Kirk’s assassination highlights the rise in US political violence 

    Sign up for PolitiFact texts


  • Breathlessness. Unformed facial features. Manipulative. Here’s how to spot a political deepfake


    You’ve probably seen the word “deepfakes” in the news lately, but are you confident you would be able to spot the difference between real and artificial intelligence-generated content?

    During the summer, a video of Vice President Kamala Harris saying that she was “the ultimate diversity hire” and “knew nothing about running the country” circulated on social media. Elon Musk, the owner of X, retweeted it. This was, in fact, a deepfake video.

    By posting it, Musk seemingly ignored X’s own misinformation policies and shared it with his 193 million followers.

    Although the Federal Communications Commission announced in February that AI-generated audio clips in robocalls are illegal, deepfakes on social media and in campaign advertisements are not yet subject to a federal ban.

    A growing number of state legislatures have begun introducing bills to regulate deepfakes as concerns about the spread of misinformation and explicit content heighten on both sides of the aisle.

    In September, with fewer than 50 days before the election, California Gov. Gavin Newsom signed three bills that target deepfakes directly — one of which takes effect immediately.

    AB 2839 bans individuals and groups “from knowingly distributing an advertisement or other election material containing deceptive AI-generated or manipulated content.”

    This ban applies 120 days before an election and 60 days after it, aiming to reduce content that may spread misinformation as votes are being counted and certified.

    “Signing AB 2839 into law is a significant step in continuing to protect the integrity of our democratic process. With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally altered content that can interfere with the election,” said Gail Pellerin, the chair of the Assembly Elections Committee.

    According to Public Citizen, 25 states have either enacted a law addressing political deepfakes or have a bill awaiting the governor’s signature.

    Do you know how to spot a deepfake?

    According to cyber news reporter and cybersecurity expert Kerry Tomlinson, “a deepfake is a computer-created image or voice or video of a person, either a person who doesn’t exist but seems real, or a person who does exist, making them do or say something they never actually did or said.”

    Tomlinson says there are several giveaways to identify a deepfake.

    • Objects and parts of the face, such as earrings, teeth or glasses, may not be fully formed.
    • Pay attention to the breathing: the speaker may take no breaths while speaking.
    • Ask yourself: Is the message potentially harmful or manipulative?
    • Can the information be verified?

    Ultimately, Tomlinson encourages people to “learn about how attackers are using deepfakes. Learn about how politicians and political parties are using deepfakes. Read about it. It’s as simple as that.”


  • How to Humanize AI Content: 3 Strategies for Authentic Engagement | Entrepreneur

    Want to know why human-generated content gets 5.4 times more traffic than AI-generated material? Learn the game-changing strategies that can make your AI content feel more authentic and engaging.

    Ben Angel