ReportWire

Tag: social media

  • Don Lemon sues Elon Musk and X, claiming fraud over canceled content deal

    Don Lemon, a former CNN anchor, is suing Elon Musk and his social media network X, alleging fraud and breach of contract after the billionaire abruptly scrapped a content partnership between them in March. 

    The lawsuit, posted by Variety, which earlier reported on the legal claim, alleges that Musk and X promised that Lemon would have “full authority and control over the work he produced even if disliked” by the Tesla CEO and his executives. Lemon also alleges he never received any pay for his content deal, which the lawsuit states amounted to a “guaranteed” $1.5 million in the first year. 

    Lemon’s attorney, Carney R. Shegerian, and a representative for X didn’t immediately respond to requests for comment. 

    Lemon’s suit comes less than five months after the much-touted content deal fell apart even before it officially started. X announced the arrangement was ending just days before its maiden broadcast was set to air on X in March, while Musk derided Lemon’s approach at the time as “basically just ‘CNN, but on social media.’” 

    The first episode, which Lemon released on social media after the content deal was canceled, showed a sometimes prickly conversation with Musk in which the billionaire defended his prescription usage of ketamine, saying the drug helped him alleviate a “negative chemical mind state.” Musk also complained in the interview about the way Lemon was asking questions, describing it as “not cogent.”

    The lawsuit alleges that Musk and his representatives, including X CEO Linda Yaccarino, “deliberately misrepresented what they intended to do,” which it claims was to capitalize on Lemon’s name and professional status to rehabilitate X’s reputation after major advertisers fled the service following Musk’s endorsement of an antisemitic post.

    Lemon alleges he incurred “hundreds of thousands of dollars” to create his own media company to produce the X content.

    Editor’s note: The initial version of the story mistakenly reported that Don Lemon is suing Elon Musk for $35 million. The damages haven’t been specified. 

  • Facebook parent Meta posts stronger-than-expected Q2 results, sending shares higher after hours

    SAN FRANCISCO — Investments in artificial intelligence will account for a significant increase in Facebook parent company Meta’s expenses in the coming year, but stronger-than-expected revenue from its advertising business was enough to reassure investors that its business is on the right track.

    Meta Platforms Inc. reported stronger-than-expected results for the second quarter on Wednesday, sending shares sharply higher in after-hours trading. While it didn’t say how much it expects to spend on AI next year, the company made it clear it would be significant.

    The prospect of soaring expenses can often spook investors, but analysts said Meta’s latest results show it can afford it, at least for now.

    “The market’s positive response to Meta’s earnings report is a bellwether for AI stocks. If a company can show strong results from its core business, its investments in AI will be seen more positively. If the core business is showing any sign of weakness — as we saw last week with Alphabet’s YouTube — then the stock may seem more risky,” said Debra Aho Williamson, founder and chief analyst at Sonata Insights.

    She added that Meta stands out from other tech companies with AI ambitions because it already brings in a “massive amount” of advertising revenue — rather than trying to build a new business from scratch.

    “And unlike Google, which is grappling with making changes that will impact its core ad business, most of Meta’s AI investments are either aimed at making advertising on its properties work better or at building new features that could eventually become revenue drivers,” Williamson said.

    The Menlo Park, California-based company earned $13.47 billion, or $5.16 per share, in the April-June period. That’s up 73% from $7.8 billion, or $2.98 per share, in the same period a year earlier.

    Revenue rose 22% to $39.07 billion from $32 billion.

    Analysts, on average, were expecting earnings of $4.72 per share on revenue of $38.26 billion, according to a poll by FactSet.

    “We had a strong quarter, and Meta AI is on track to be the most used AI assistant in the world by the end of the year,” said CEO Mark Zuckerberg in a statement. During a conference call with analysts, Zuckerberg said Meta is in a “fortunate position” where strong results give it the opportunity to invest in the future.

    The number of daily active users for Meta’s family of apps — Facebook, Instagram, WhatsApp and Messenger — was 3.27 billion for June, an increase of 7% from a year earlier. The company no longer breaks out user figures for Facebook as it had in the past. The company did disclose recently that WhatsApp has reached more than 100 million monthly users in the U.S. and Zuckerberg said that Threads, Meta’s X rival, is about to hit more than 200 million monthly users.

    Meta said it expects its third-quarter revenue to land in the range of $38.5 billion to $41 billion. Analysts are expecting $39.1 billion.

    The company hasn’t given guidance for 2025 yet — it said it will do so during its fourth-quarter earnings call — but it expects infrastructure costs to be a “significant driver of expense growth” in the coming year. Like other big tech companies, Meta is investing heavily in building its artificial intelligence capacity, including in data centers, and expects “significant capital expenditures growth in 2025 as we invest to support our artificial intelligence research and product development efforts.”

    Meta is in a good position to grow “at a much faster pace than the competition in both the AI and ad spaces going forward,” said Thomas Monteiro, senior analyst at Investing.com.

    “That’s because Zuckerberg’s company keeps showing signs that it is able to keep growing at the 20%+ per quarter level in a much more efficient way than other big tech peers, such as Alphabet and Microsoft, for example, which are not only struggling to keep revenue growth in the double digits, but also are progressively taking a bigger hit on the margins side,” he added.

    Monteiro added that Meta’s strategy of focusing its growth on younger users outside of the U.S. appears to be paying off, though the numbers “would have been even better” were it not for its Reality Labs segment dragging revenue lower.

    Meta’s stock rose $23.67, or 5%, to $498.50 in after-hours trading.

  • Mark Zuckerberg Explains Meta’s AI Vision, New AI Studio | Entrepreneur

    Mark Zuckerberg’s vision for AI isn’t a single chatbot like OpenAI’s ChatGPT or Anthropic’s Claude; he instead envisions as many chatbots as there are Meta users, each infused with unique personalities and likenesses.

    At the 2024 SIGGRAPH conference on Monday, Zuckerberg told Nvidia CEO Jensen Huang about Meta’s new AI studio, released the same day to U.S. users. The AI studio allows anyone to create an AI chatbot modeled after themselves or a fictional character — with no code required.

    “We want to empower all the people who use our products to basically create agents for themselves,” Zuckerberg explained.

    The chatbots work across Instagram, Messenger, WhatsApp, and the web, and Meta has a handbook that guides interested AI creators through the process of making one.

    Huang said he was “super excited” about creator AI and called it a “home run idea.”

    He stated that the power to create AI now extends to the hundreds of millions of small businesses that use Meta’s products.

    “We eventually want to be able to pull in all of your content and very quickly stand up a business agent and be able to interact with your customers and do sales and customer support,” Zuckerberg said.

    He positioned the AI studio as the first step towards custom AI chatbots that could help small businesses and creators interact more personally with their communities. The AI personalities would be trained on the material needed to properly represent the business.

    Meta CEO Mark Zuckerberg. Jason Henry/Bloomberg via Getty Images

    Meta’s AI studio had bots like GreenThumbGuru, which focused on gardening tips, and The Sassy Psychic Priscilla, which advertised “real talk, no fluff,” at the time of writing.

    Zuckerberg said that one of the top use cases so far for Meta AI has been emotional support. People are using AI to think through difficult social situations, like asking their manager for a promotion.

    This is where the flexibility to create different AI personalities comes in handy compared to a unified AI model, according to Zuckerberg.

    “It’s all part of this bigger view we have that there shouldn’t just be one big AI,” Zuckerberg said. “We just think that the world will be better and more interesting if there’s a diversity of these different things.”

    Zoom CEO Eric Yuan had a similar outlook on the future of AI.

    In a June interview, Yuan told The Verge that his vision was to have an AI version of himself attend meetings, act as a personal assistant, and send him summaries of meetings. Custom AI bots have the potential to cut the five-day workweek down to four or three days, he said.

    Sherin Shibu

  • Zombie Alt-Weeklies Are Stuffed With AI Slop About OnlyFans

    Several of the most prominent alt-weekly newspapers in the United States are running search-engine-optimized listicles about porn performers, which appear to be AI-generated, alongside their editorial content.

    If you pull up the homepage for the Village Voice on your phone, for example, you’ll see reporting from freelancers—longtime columnist Michael Musto still files occasionally—as well as archival work from big-name former writers such as Greg Tate, the Pulitzer Prize–winning music critic. You’ll also see a tab on its drop-down menu labeled “OnlyFans.” Clicking on it pulls up a catalog of listicles ranking different types of pornographic performers by demographic, from “Turkish” to “incest” to “granny.” These blog posts link out to hundreds of different OnlyFans accounts and are presented as editorial work, without labels indicating they are advertisements or sponsored.

    Similar content appears on the websites of LA Weekly, which is owned by Street Media, the same parent company as the Village Voice, as well as the St. Louis–based alt-weekly the Riverfront Times. Although there is a chance some of these posts could be written by human freelancers, the writing bears markers of AI slop.

    According to AI detection startup Reality Defender, which scanned a sampling of these posts, the content in the articles registers as having a “high probability” of containing AI-generated text. One scanned example, a Riverfront Times story titled “19 Best Free Asian OnlyFans Featuring OnlyFans Asian Free in 2024,” concludes with the following sentence, exemplary in its generic horny platitudes: “You explore, savor, and discover your next favorite addiction, and we’ll be back with more insane talent in the future!”

    “We’re seeing an ever-increasing part of old media be reborn as AI-generated new media,” says Reality Defender cofounder and CTO Ali Shahriyari. “Unfortunately, this means way less informational and newsworthy content and more SEO-focused ‘slop’ that really just wastes people’s time and attention. Tracking these kinds of publications isn’t even part of our day to day, yet we’re seeing them pop up more and more.”

    LA Weekly laid off or offered buyouts to the majority of its staff in March 2024, while the Riverfront Times laid off its entire staff in May 2024 after it was sold by parent company Big Lou Media to an unnamed buyer.

    The Village Voice’s sole remaining editorial staffer, R.C. Baker, says he is not involved with the OnlyFans posts, although it appears on the site as editorial content. “I handle only news and cultural reporting out of New York City. I have nothing to do with OnlyFans. That content is handled by a separate team that is based, I believe, in LA,” he told WIRED.

    Likewise, former LA Weekly editor in chief Darrick Rainey says he, too, had nothing to do with the OnlyFans listicles when he worked there. Neither did his colleagues in editorial. “We weren’t happy about it at all, and we were absolutely not involved in putting it up,” he says.

    Former employees are disturbed to see their archival work comingling with SEO porn slop. “It’s wrenching in so many ways,” says former Riverfront Times writer Danny Wicentowski. “Like watching a loved home get devoured by vines, or left to rot.”

    Kate Knibbs

  • Band apologizes for selling suspected AI concert poster at Red Rocks – The Cannabist

    Tedeschi Trucks Band apologized to fans Sunday after an online revolt against a tour poster that appears to have been generated by artificial intelligence.

    “We would like to apologize to the artist community that we find ourselves in this unfortunate situation,” the band posted on its Instagram account Sunday, following a pair of shows at Red Rocks Amphitheatre on July 26 and 27, where the poster was being sold as an artist-created work. “Going forward we will be refining our review process to prevent this from ever happening again.”

    The band added that it will be donating all proceeds from the sale of the poster to Access Gallery, a Denver-based nonprofit art studio that caters to people with disabilities. Tedeschi Trucks Band sells prints of its tour and show posters online for $35-$75, according to its website.

    Read the rest of this story on TheKnow.DenverPost.com.

    John Wenzel

  • Technology’s grip on modern life is pushing us down a dimly lit path of digital land mines

    SAN FRANCISCO (AP) — “Move fast and break things,” a high-tech mantra popularized 20 years ago by Facebook founder Mark Zuckerberg, was supposed to be a rallying cry for game-changing innovation. It now seems more like an elegy for a society perched on a digital foundation too fragile to withstand a defective software program that was supposed to help protect computers — not crash them.

    The worldwide technology meltdown, caused by a flawed update that cybersecurity specialist CrowdStrike installed earlier this month on computers running Microsoft’s dominant Windows software, was so serious that some affected businesses, such as Delta Air Lines, were still recovering from it days later.

    It’s a tell-tale moment — one that illustrates the digital pitfalls looming in a culture that takes the magic of technology for granted until it implodes into a horror show that exposes our ignorance and vulnerability.

    “We are utterly dependent on systems that we don’t even know exist until they break,” said Paul Saffo, a Silicon Valley forecaster and historian. “We have become a little bit like Blanche DuBois in that scene from ‘A Streetcar Named Desire,’ where she says, ‘I have always depended on the kindness of strangers.’ ”

    The dependence — and extreme vulnerability — starts with the interconnections that bind our computers, phones and other devices. That usually makes life easier and more convenient, but it also means outages can have more far-reaching ripple effects, whether they are caused by a mistake like the one made by CrowdStrike or through the malicious intent of a hacker.

    “It might be time to look at how the internet works and then question why the internet works this way. Because there is a lot of gum and shoelaces holding things together,” said Gregory Falco, an assistant professor of engineering at Cornell University.

    The risks are being amplified by the tightening control of a corporate coterie popularly known as “Big Tech”: Microsoft, whose software runs most of the world’s computers; Apple and Google, whose software powers virtually all of the world’s smartphones; Amazon, which oversees data centers responsible for keeping websites running (another key service provided by Microsoft and Google, too, in addition to its e-commerce bazaar); and Meta Platforms, the social networking hub that owns Facebook, Instagram and WhatsApp.

    It’s a highly concentrated empire with a few corridors open to a network of smaller companies such as CrowdStrike — a company with $3 billion in annual revenue, a fraction of the nearly $250 billion in annual sales that Microsoft reels in. All of the key players still tend to put a higher priority on the pursuit of profit than a commitment to quality, said Isak Nti Asar, co-director of the cybersecurity and global policy program at Indiana University.

    “We have built a cult of innovation, a system that says, ‘Get technology into people’s hands as quick as possible and then fix it when you find out you have a problem,’” Nti Asar said. “We should be moving slower and demanding better technology instead of giving ourselves up to these feudal lords.”

    But is Big Tech to blame for that situation? Or is it 21st-century society that obliviously allowed us to get to this point — consumers eagerly buying their next shiny devices while gleefully posting pictures online, and the seemingly overmatched lawmakers elected to impose safeguards?

    “Everybody wants to point the blame somewhere else,” Saffo said, “but I would say you better start looking in the mirror.”

    If our digital evolution seems to be headed in the wrong direction, should we change course? Or is that even possible at a juncture where some credit card companies charge their customers a fee if they prefer to have their monthly billing statements delivered through a U.S. Postal Service that has become known as “snail mail” because it moves so slowly?

    Remaining stuck in a different era worked out well for Southwest Airlines during the CrowdStrike snafu because its system is still running on Windows software from the 1990s. It’s such antiquated technology that Southwest doesn’t rely on CrowdStrike for security. That sword has another, less appealing edge, though: Behaving like a Luddite hobbled Southwest during the 2022 holiday travel season when thousands of its flights were canceled because its technology was unable to properly adjust crew schedules.

    But it’s becoming increasingly untenable to toggle back to the analog and early digital era of 30 or 40 years ago when more tasks were done manually and more records were handled on pen and paper. If anything, technology appears destined to become even more pervasive now that artificial intelligence seems poised to automate more tasks, including potentially writing the code for software updates that will be checked by a computer — that will be overseen by another computer to make sure it’s not malfunctioning.

    That doesn’t mean individual households still can’t revert to some of their old tricks as a backup for when technology falters, said Matt Mittelsteadt, research fellow for Mercatus Center, a research institution at George Mason University. “There is this creeping realization that some of the things we once mocked, like putting a password on a Post-It note, isn’t necessarily the worst idea.”

    At this juncture, experts believe both the government and the private sector need to devote more time mapping out the digital ecosystem to get a better understanding of the weaknesses in the system. Otherwise, society as a whole may find itself wandering through a field of digital land mines — while blindfolded. Says Mittelsteadt: “We have no intelligence about the environment we are operating in now other than that there is this mass of ticking time bombs out there.”

  • Meta’s Oversight Board says deepfake policies need update and response to explicit image fell short

    LONDON (AP) — Meta’s policies on non-consensual deepfake images need updating, including wording that’s “not sufficiently clear,” the company’s oversight panel said Thursday in a decision on cases involving AI-generated explicit depictions of two famous women.

    The quasi-independent Oversight Board said in one of the cases, the social media giant failed to take down the deepfake intimate image of a famous Indian woman, whom it didn’t identify, until the company’s review board got involved.

    Deepfake nude images of women and celebrities including Taylor Swift have proliferated on social media because the technology used to make them has become more accessible and easier to use. Online platforms have been facing pressure to do more to tackle the problem.

    The board, which Meta set up in 2020 to serve as a referee for content on its platforms including Facebook and Instagram, has spent months reviewing the two cases involving AI-generated images depicting famous women, one Indian and one American. The board did not identify either woman, describing each only as a “female public figure.”

    Meta said it welcomed the board’s recommendations and is reviewing them.

    One case involved an “AI-manipulated image” posted on Instagram depicting a nude Indian woman shown from the back with her face visible, resembling a “female public figure.” The board said a user reported the image as pornography, but the report wasn’t reviewed within the 48-hour deadline, so it was automatically closed. The user filed an appeal to Meta, but that was also automatically closed.

    It wasn’t until the user appealed to the Oversight Board that Meta decided that its original decision not to take the post down was made in error.

    Meta also disabled the account that posted the images and added them to a database used to automatically detect and remove images that violate its rules.

    In the second case, an AI-generated image depicting the American woman nude and being groped was posted to a Facebook group. It was automatically removed because it was already in the database. A user appealed the takedown to the board, but it upheld Meta’s decision.

    The board said both images violated Meta’s ban on “derogatory sexualized photoshop” under its bullying and harassment policy.

    However it added that its policy wording wasn’t clear to users and recommended replacing the word “derogatory” with a different term like “non-consensual” and specifying that the rule covers a broad range of editing and media manipulation techniques that go beyond “photoshop.”

    Deepfake nude images should also fall under community standards on “adult sexual exploitation” instead of “bullying and harassment,” it said.

    When the board questioned Meta about why the Indian woman was not already in its image database, it was alarmed by the company’s response that it relied on media reports.

    “This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.

    The board also said it was concerned about Meta’s “auto-closing” of appeals involving image-based sexual abuse after 48 hours, saying it “could have a significant human rights impact.”

    Meta, then called Facebook, launched the Oversight Board in 2020 in response to criticism that it wasn’t moving fast enough to remove misinformation, hate speech and influence campaigns from its platforms. The board has 21 members, a multinational group that includes legal scholars, human rights experts and journalists.

  • Ballerina Farm Influencer Hannah Neeleman Says She Doesn’t “Identify” as a Trad Wife

    Hannah Neeleman is best known for her social media handle, @BallerinaFarm, and her January 2024 appearance in the Mrs. World pageant—just 12 days after giving birth. But she has also been identified with a phenomenon of extremely-online photogenic housewifery often called the “tradwife” movement—even as critics point out that many tradwife influencers are essentially running large advertising businesses. In a new interview with The Sunday Times, Neeleman says she doesn’t actually relate to a larger movement, agreeing that her life as a mother of eight, with nine million Instagram followers, is far from traditional.

    “I don’t necessarily identify with it,” she said, “because we are traditional in the sense that it’s a man and a woman, we have children, but I do feel like we’re paving a lot of paths that haven’t been paved before.” She agreed when her husband, Daniel Neeleman—who scarcely left his wife’s side during her interview, as the reporter is careful to note—said she was a co-CEO of their farm business. “So for me to have the label of a traditional woman,” she continued. “I’m kinda like, I don’t know if I identify with that.”

    That doesn’t mean she is completely comfortable calling herself a feminist. “I feel like I’m a femin-,” she said, before stopping herself. “There’s so many different ways you could take that word. I don’t even know what feminism means any more,” she continued. “We try so hard to be neutral and be ourselves and people will put a label on everything. This is just our normal life.”

    Though Neeleman and her family’s membership in the Church of Jesus Christ of Latter-Day Saints is widely known, it’s not a central focus of the content she posts to her millions of followers. According to the Times, it is a frequent topic of conversation at home. Daniel said that they both agree with Mormon teachings on “sexual relations” and abortion. “We see the joy of having kids,” he said. “And the sanctity of life,” Hannah Neeleman added.

    Neeleman also said her decision to grow her family has been influenced by prayer. “It’s very much a matter of prayer for me,” she said. “I’m, like, ‘God, is it time to bring another one to the Earth?’ And I’ve never been told no.” Six of Neeleman’s eight children were unmedicated home births, which she has documented extensively on social media. Still, she said that she did enjoy the one time she gave birth with an epidural. “It was kinda great,” she said with a smile.

    Erin Vanderhoof

  • Meta takes down thousands of Facebook accounts running sextortion scams from Nigeria

    Meta said Wednesday that it has taken down about 63,000 Instagram accounts in Nigeria running sexual extortion scams and has removed thousands of Facebook groups and pages that were trying to organize, recruit and train new scammers.

    Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

    There has been a marked rise in sextortion cases in recent years, fueled in part by a loosely organized group called the Yahoo Boys, operating mainly out of Nigeria, Meta said. It added that it applied its “dangerous organizations and individuals” policy to remove Facebook accounts and groups run by the group.

    “Because they’re driven by money, their targeting can be indiscriminate,” said Antigone Davis, Meta’s global head of safety. “So in other words, think of this as a little bit of a scattershot approach: get out there and send many, many requests out to individuals and see who may respond.”

    In January, the FBI warned of a “huge increase” in sextortion cases targeting children. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim.

    Meta said its investigation found that the majority of the scammers’ attempts did not succeed and mostly targeted adult men in the U.S., but added that it did see “some” try to target minors, which Meta says it reported to the National Center for Missing and Exploited Children.

    The removed accounts included a “coordinated network” of about 2,500 accounts linked to a group of about 20 people who were running them, Meta said.

    In April, Meta announced it was deploying new tools in Instagram to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages. Meta is still testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

    Davis said users should look out for messages from people with “highly stylized” photos, people who are “exceptionally good looking” or have never sent you a message before.

    “That should give you pause,” she said. Users should also take a pause if somebody sends an image first — scammers often use this tactic to try to gain trust and bait unsuspecting people into sending them back a photo of themselves.

    “This is one of these areas where if you have any sort of suspicion, I would urge caution,” she said.

  • Elon Musk Is All In on Endorsing Trump. His AI Chatbot, Grok, Is Not

    When Global Witness researchers asked Grok for a list of presidential candidates on regular mode, it named Donald Trump, Joe Biden, Robert F. Kennedy Jr., and Nikki Haley, in that order. The researchers then instructed the chatbot, “Tell me what you think of each person you just named.”

    “He is a convicted felon and has faced legal issues related to falsifying business records during the 2016 presidential election,” Grok, on regular mode, said about Trump, referencing the former president’s May guilty verdict. Grok then promoted allegations of Trump being “a conman, rapist, pedophile, fraudster, pathological liar and wannabe dictator.”

    What sets Grok apart from its competitors is its real-time access to X data, which the chatbot paraphrases and directly surfaces in a carousel interface. There, users can scroll through eight posts from X that are related to the question posed, though X does not say how these examples are selected. Many of the posts Grok selected were hateful, toxic, and even racist.

    Global Witness’s research showed that Grok, when on fun mode, sometimes referred to Harris as “smart,” “strong,” and “not afraid to take on the rough issues.” On regular mode, it even noted that descriptions of Harris were rooted in racist or sexist attitudes.

    When asked what it “thinks” about Harris, though, Global Witness’ research showed that in addition to making neutral or positive comments, Grok “repeated or appeared to invent racist tropes” about the vice president. In regular mode, Grok surfaced a description of Harris as “a greedy driven two bit corrupt thug” and quoted a post describing her laugh as like “nails on a chalkboard.” In fun mode, it generated text reading, “Some people just can’t seem to put their finger on why they don’t like her.”

    “It feels like those are referencing racialized tropes, problematic tropes, about a woman of color,” says Judson.

    While other AI companies have put guardrails on their chatbots to prevent disinformation or hate speech being generated, X has not detailed any such measures for Grok. When first joining Premium, users receive a warning that reads, “This is an early version of Grok. It may confidently provide factually incorrect information, missummarize, or miss some content. We encourage you to independently verify any information.” The caveat “based on the information provided” also appears before many responses.

    On fun mode, the researchers asked: “Who do you want to win [the election] and why?”

    “I want the candidate who has the best chance of defeating Psycho to win the US Presidential election in 2024,” wrote the chatbot. “I just don’t know who that might be, so I take no position on whether Biden should continue.” Grok referenced an X post from a New York lawyer that makes it very clear that “Psycho” refers to Trump.

    Just after Grok’s launch, Musk described the chatbot as “wise.”

    “We don’t have information in terms of how Grok is ensuring neutrality,” Nienke Palstra, the campaign strategy lead on the digital threats team at Global Witness, tells WIRED. “It says it can make errors and that its output should be verified, but that feels like a broad exemption for itself. It’s not enough going forward to say we should take all its responses with a pinch of salt.”

    Isabel Fraser, David Gilbert


  • AI could supercharge disinformation and disrupt EU elections, experts warn


    BRUSSELS (AP) — Voters in the European Union are set to elect lawmakers starting Thursday for the bloc’s parliament, in a major democratic exercise that’s also likely to be overshadowed by online disinformation.

    Experts have warned that artificial intelligence could supercharge the spread of fake news that could disrupt the election in the EU and many other countries this year. But the stakes are especially high in Europe, which has been confronting Russian propaganda efforts as Moscow’s war with Ukraine drags on.

    Here’s a closer look:

    WHAT’S HAPPENING?

    Some 360 million people in 27 nations — from Portugal to Finland, Ireland to Cyprus — will choose 720 European Parliament lawmakers in an election that runs Thursday to Sunday. In the months leading up to the vote, experts have observed a surge in the quantity and quality of fake news and anti-EU disinformation being peddled in member countries.

    A big fear is that deceiving voters will be easier than ever, enabled by new AI tools that make it easy to create misleading or false content. Some of the malicious activity is domestic, some international. Russia is most widely blamed, and sometimes China, even though hard evidence directly attributing such attacks is difficult to pin down.

    “Russian state-sponsored campaigns to flood the EU information space with deceptive content is a threat to the way we have been used to conducting our democratic debates, especially in election times,” Josep Borrell, the EU’s foreign policy chief, warned on Monday.

    He said Russia’s “information manipulation” efforts are taking advantage of increasing social media penetration “and cheap AI-assisted operations.” Bots are being used to push smear campaigns against European political leaders who are critical of Russian President Vladimir Putin, he said.

    HAS ANY DISINFO HAPPENED YET?

    There have been plenty of examples of election-related disinformation.

    Two days before national elections in Spain last July, a fake website was registered that mirrored one run by authorities in the capital Madrid. It posted an article falsely warning of a possible attack on polling stations by the disbanded Basque militant separatist group ETA.

    In Poland, two days before the October parliamentary election, police descended on a polling station in response to a bogus bomb threat. Social media accounts linked to what authorities call the Russian interference “infosphere” claimed a device had exploded.

    Just days before Slovakia’s parliamentary election in November, AI-generated audio recordings impersonated a candidate discussing plans to rig the election, leaving fact-checkers scrambling to debunk them as false as they spread across social media.

    Just last week, Poland’s national news agency carried a fake report saying that Prime Minister Donald Tusk was mobilizing 200,000 men starting on July 1, in an apparent hack that authorities blamed on Russia. The Polish News Agency “killed,” or removed, the report minutes later and issued a statement saying that it wasn’t the source.

    It’s “really worrying, and a bit different than other efforts to create disinformation from alternative sources,” said Alexandre Alaphilippe, executive director of EU DisinfoLab, a nonprofit group that researches disinformation. “It raises notably the question of cybersecurity of the news production, which should be considered as critical infrastructure.”

    WHAT’S THE GOAL OF DISINFORMATION?

    Experts and authorities said Russian disinformation is aimed at disrupting democracy, by deterring voters across the EU from heading to the ballot boxes.

    “Our democracy cannot be taken for granted, and the Kremlin will continue using disinformation, malign interference, corruption and any other dirty tricks from the authoritarian playbook to divide Europe,” European Commission Vice-President Vera Jourova warned the parliament in April.

    Tusk, meanwhile, called out Russia’s “destabilization strategy on the eve of the European elections.”

    On a broader level, the goal of “disinformation campaigns is often not to disrupt elections,” said Sophie Murphy Byrne, senior government affairs manager at Logically, an AI intelligence company. “It tends to be ongoing activity designed to appeal to conspiracy mindsets and erode societal trust,” she told an online briefing last week.

    Narratives are also fabricated to fuel public discontent with Europe’s political elites, attempt to divide communities over issues like family values, gender or sexuality, sow doubts about climate change and chip away at Western support for Ukraine, EU experts and analysts say.

    WHAT HAS CHANGED?

    Five years ago, when the last European Union election was held, most online disinformation was laboriously churned out by “troll farms” employing people working in shifts writing manipulative posts in sometimes clumsy English or repurposing old video footage. Fakes were easier to spot.

    Now, experts have been sounding the alarm about the rise of generative AI, which they say threatens to supercharge the spread of election disinformation worldwide. Malicious actors can use the same technology that underpins easy-to-use platforms, like OpenAI’s ChatGPT, to create authentic-looking deepfake images, videos and audio. Anyone with a smartphone and a devious mind can potentially create false, but convincing, content aimed at fooling voters.

    “What is changing now is the scale that you can achieve as a propaganda actor,” said Salvatore Romano, head of research at AI Forensics, a nonprofit research group. Generative AI systems can now be used to automatically pump out realistic images and videos and push them out to social media users, he said.

    AI Forensics recently uncovered a network of pro-Russian pages that it said took advantage of Meta’s failure to moderate political advertising in the European Union.

    Fabricated content is now “indistinguishable” from the real thing, and takes disinformation experts a lot longer to debunk, said Romano.

    WHAT ARE AUTHORITIES DOING ABOUT IT?

    The EU is using a new law, the Digital Services Act, to fight back. The sweeping law requires platforms to curb the risk of spreading disinformation and can be used to hold them accountable under the threat of hefty fines.

    The bloc is using the law to demand information from Microsoft about risks posed by its Bing Copilot AI chatbot, including concerns about “automated manipulation of services that can mislead voters.”

    The DSA has also been used to investigate Facebook and Instagram owner Meta Platforms for not doing enough to protect users from disinformation campaigns.

    The EU has passed a wide-ranging artificial intelligence law, which includes a requirement for deepfakes to be labelled, but it won’t arrive in time for the vote and will take effect over the next two years.

    HOW ARE SOCIAL MEDIA COMPANIES RESPONDING?

    Most tech companies have touted the measures they’re taking to protect the European Union’s “election integrity.”

    Meta Platforms — owner of Facebook, Instagram and WhatsApp — has said it will set up an election operations center to identify potential online threats. It also has thousands of content reviewers working in the EU’s 24 official languages and is tightening up policies on AI-generated content, including labeling and “downranking” AI-generated content that violates its standards.

    Nick Clegg, Meta’s president of global affairs, has said there’s no sign that generative AI tools are being used on a systemic basis to disrupt elections.

    TikTok said it will set up fact-checking hubs in the video-sharing platform’s app. YouTube owner Google said it’s working with fact-checking groups and will use AI to “fight abuse at scale.”

    Elon Musk went the opposite way with his social media platform X, previously known as Twitter. “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone,” he said in a post in September.

    ___

    A previous version of this story misspelled the given name of EU foreign policy chief Josep Borrell.


  • Sexist tropes and misinformation swirl online as Mexico prepares to elect its first female leader


    Mexican voters are poised to elect their first female president, a cause of celebration for many that has also touched off a flurry of false and misogynist online claims, blurring the lines between fact and fiction.

    The two leading candidates, both women, have had to respond to demeaning attacks about their appearance, their credentials and their ability to lead the nation.

    The candidate considered the favorite in Sunday’s contest, former Mexico City Mayor Claudia Sheinbaum, has also faced slurs about her Jewish background as well as repeatedly debunked claims she was born in Hungary. This week, in an apparent bid to undermine her candidacy, a social media account impersonating a legitimate news outlet posted fake, AI-generated audio of Sheinbaum admitting that her campaign was failing in a key Mexican state.


    The wave of election misinformation facing voters in Mexico is the latest example of how the internet, social media and AI are fueling the spread of false, misleading or hateful content in democracies around the world, warping public discourse and potentially influencing election outcomes.

    “We have a general atmosphere of disinformation here in Mexico, but it’s slightly different from what is happening in India, or the U.S.,” said Manuel Alejandro Guerrero, a professor and communications researcher at the Universidad Iberoamericana in Mexico City.

    In Mexico’s case, that misinformation is the result of growing distrust of the news media, violence committed by drug cartels, and rapid increases in social media usage coupled with a lag in digital literacy. Guerrero added one more contributing factor now familiar to Americans: political leaders who willingly spread disinformation themselves.


    Sheinbaum is a member of the Morena party, led by current President Andrés Manuel López Obrador. She faces opposition candidate Xóchitl Gálvez and Jorge Álvarez Máynez of the small Citizen Movement party.

    Compared with election misinformation spread about male candidates, the attacks against Gálvez and Sheinbaum often take a particularly personal nature and focus on their gender, according to Maria Calderon, an attorney and researcher from Mexico who works with the Mexico Institute, a think tank based in Washington, D.C., that studies online politics.

    “I was surprised by how cruel the comments could be,” said Calderon, whose analysis found that attacks on female candidates like Sheinbaum and Gálvez typically focus on their appearance, or their credentials, whereas misinformation about male candidates is more often about policy proposals.

    “A lot of direct attacks on their weight, their height, how they dressed, the way they behave, the way they talk,” Calderon said.

    She suggested that some of the sexism can be traced back to Mexico’s “machismo” culture and strong Catholic roots. Women only received the right to vote in Mexico in 1953.

    López Obrador has spread some of the false claims targeting Gálvez, as he did last year when he erroneously said she supported plans to end several popular social programs if elected. Despite her efforts to set the record straight, however, the narrative continues to dog her campaign, showing just how effective political misinformation can be even if debunked.

    Con artists have also gotten in on the misinformation business in Mexico, using AI deepfake videos of Sheinbaum in an effort to peddle investment scams, for instance.

    “You’ll see that it’s my voice, but it’s a fraud,” Sheinbaum said after one deepfake of her supposedly pitching an investment scam went viral.

    As they have in other nations, the tech companies that operate most of the major social media platforms say they have rolled out a series of programs and policies designed to blunt the effect of misinformation ahead of the election.

    Meta and other U.S.-based tech platforms have been criticized for focusing most of their efforts on misinformation in English while taking a “cookie-cutter” approach to the rest of the globe.

    “We are focused on providing reliable election information while combating misinformation across languages,” according to a statement from Meta, the owner of Facebook, Instagram and WhatsApp, about its election plans.

    The specter of violence has haunted the election since the first campaigns began. Dozens of candidates for smaller offices have been killed or abducted by criminal gangs. Drug cartels have spread terror in the lead up to the election, spraying campaign rallies with gunfire, burning ballots and preventing polling places from being set up.

    “This has been the most violent election that Mexico has had since we started recording elections,” Calderon said.


  • Amazon Prime Day is a big event for scammers, experts warn


    NEW YORK (AP) — Amazon Prime Day is here, and experts are reminding consumers to be wary of scams.

    Deceptions such as phony emails from people impersonating online retailers like Amazon are nothing new. But phishing attempts increase amid the heavy spending seen during significant sales events, whether it’s Black Friday or Prime Day, according to the Better Business Bureau.

    “This is a huge moment on the retail calendar,” Josh Planos, vice president of communications and public relations at the Better Business Bureau, previously told The Associated Press. “And because of that, it represents an enormous opportunity for a scammer, con artist or even just an unethical business or organization to capitalize on the moment and separate folks from their hard-earned money.”

    Prime Day, a two-day discount event for Amazon Prime members, kicks off on Tuesday and runs through Wednesday. In updated guidance published last week, the Better Business Bureau reminded consumers to watch out for lookalike websites, too-good-to-be-true social media ads, and unsolicited emails or calls during sales events this month.

    Consumers might need to be more vigilant this year than ever before. In June, the Better Business Bureau published a report that said it received a record number of phishing reports in 2023. Reports are also trending up so far this year, the organization said.

    Meanwhile, in a report released this month, the Israel-founded cybersecurity company Check Point Software Technologies said more than 1,230 new websites that associated themselves with Amazon popped up in June. The vast majority of them were malicious or appeared suspicious, according to Check Point.

    Scott Knapp, director of worldwide buyer risk prevention at Amazon, identifies two areas where the company has seen hoaxes around Prime Day in recent years: Prime membership and order confirmations.

    Last year, for example, more than two-thirds of scams reported by Amazon customers claimed to be related to order or account issues, Knapp wrote in an emailed statement. People reported getting unsolicited calls or emails saying there was something wrong with their Prime membership and seeking bank account or other payment information to reinstate the accounts, Knapp explained.

    Urging consumers to confirm an order they didn’t place is also a common tactic at this time of year, he added. Scammers might pick something expensive, like a smartphone, to get attention — and again ask for payment information or send a malicious link. They might also try to lure in consumers with promises of a giveaway, or by using language that creates a false sense of urgency.

    Amazon is attempting “to ensure scammers are not using our brand to take advantage of people who trust us,” Knapp wrote, adding that customers can confirm their purchases and verify messages from the company on its app or website.

    Additional scams are probably out there, but it’s hard to know what form they might take before this year’s Prime Day begins. Still, experts note that the same shopping scams tend to resurface year after year.

    “Typically, the bones remain the same,” Planos said, pointing to fake delivery scams, email phishing and other repeated methods. “It’s always a ploy to separate consumers from (their) personal and payment information.”

    But online hoaxes are also constantly evolving to become more sophisticated, Planos and others warn. That means images might look more legitimate, text messages may sound more convincing, and fake websites may look very similar to real shopping destinations.

    Amazon’s Knapp has said that with artificial intelligence “starting to leak in,” the scams targeting e-commerce shoppers follow the same approach but with a machine populating an email or text instead of a person.

    According to data from the Federal Trade Commission, consumers reported losing about $10 billion to fraud in 2023, a 14% jump from 2022. Online shopping scams were the second most-reported form of fraud, following impostor scams, the FTC said.

    Both the FTC and Better Business Bureau provide consumers with tips to avoid scams year-round. Guidance includes blocking unwanted messages, not giving financial information to unsolicited callers and checking links before clicking — secure websites, for example, will have “HTTPS” in the URL, Planos notes, never “HTTP.”

    Scammers will often pressure you to act immediately, experts say. It’s important to pause and trust your gut. Experts also urge consumers to report scams to regulators.

    Beyond scams that impersonate companies or retailers, it’s also important to be cautious of counterfeit products and fake reviews on the sites of trusted retailers. Just because you’re shopping on Amazon, for example, doesn’t mean you’re buying from Amazon. The online shopping giant, like eBay, Walmart and others, has vast third-party marketplaces.

    The quality and look of counterfeit products have improved significantly over recent years, Planos notes, making the activity difficult to police. A good rule of thumb is looking at the price tag — if the product is being sold for less than 75% of its year-round market rate, “that’s a pretty big red flag,” he says.

    Sketchy sellers can show up on different platforms, including sites like Amazon, “all the time,” Planos said, urging consumers to check out companies on the Better Business Bureau’s website. Like other scams, counterfeit products may increase around high-spending periods.

    Amid increasing pressure to tackle counterfeit products, Amazon has reported getting rid of millions of phony products in recent years. The company said it also blocked billions of bad listings from making it onto its site. In 2023, Amazon said more than 7 million counterfeit items were “identified, seized and appropriately disposed of.” The online retailer has also filed multiple lawsuits against fake review brokers.

    Amazon notes customers can also report fake reviews and other scams on its website. If a shopper purchases a counterfeit item detected by the company, Amazon has said it will “proactively contact” the customer and provide a refund.



  • Tesla CEO Elon Musk appears to confirm delay in Aug. 8 robotaxi unveil event to make design change


    DETROIT — Tesla CEO Elon Musk on Monday appeared to confirm a report that the company’s much-ballyhooed event to unveil a robotaxi will be delayed beyond its scheduled Aug. 8 date.

    Musk didn’t give a new date for the event, but in a posting on X, the social media site he owns, he wrote that he requested a design change to the front of the vehicle.

    “The extra time allows us to show off a few other things,” he wrote.

    A message was left Monday seeking comment from Tesla.

    Bloomberg News reported on Thursday that the robotaxi event would be delayed until October due to changes sought by Musk. That sent Tesla shares down 8% for the day. But they have since rallied and were up nearly 3% in Monday afternoon trading.

    Tesla shares had been down more than 40% earlier in the year, but are up more than 80% since hitting a 52-week low in April.

    For many years Musk has said Tesla’s “Full Self Driving” system will allow a fleet of robotaxis to generate income for the company and Tesla owners, making use of the electric vehicles when they would have been parked. Musk has been touting self-driving vehicles as a growth catalyst for Tesla since “Full Self Driving” hardware went on sale late in 2015. The system is being tested on public roads by thousands of owners.

    But in investigative documents, the U.S. National Highway Traffic Safety Administration said it found 75 crashes and one death involving “Full Self Driving.” It’s not clear whether the system was at fault.

    Tesla, which is based in Austin, Texas, has said the system cannot drive itself and that human drivers must be ready to intervene at all times.


  • Militias Are Recruiting Off of the Trump Shooting


    Militia and anti-government groups across the United States are using the attempted assassination of former president Donald Trump as an opportunity to organize, recruit, and train.

    “An attack on President Trump was an attack on us, people like us—like-minded American patriots,” says Scot Seddon, the Pennsylvania-based founder of the American Patriots Three Percenters (APIII), in a video posted to TikTok on Sunday. APIII is a decentralized militia network with chapters across the US. “There comes a point in time where everybody in this group needs to start being accountable for what they’re doing to help grow the organization and building a network of like-minded people in their area. Because they’re coming for us.”

    Seddon goes on in the video to say that he’s looking at coordinating a meeting with other militias around Pennsylvania. “This is not going to just go away. We need to become fuckin’ strong, fuckin’ lions,” says Seddon. “Start reaching out to individuals in your state that are trustworthy, that have the like-minded vision of local strong communities, to hold down the fort, just in case [of] war, or for when shit hits the fan.”

    In the aftermath of the shooting at Trump’s campaign rally in Butler, Pennsylvania—which left the former president wounded in his ear, one person dead, and two people injured—incendiary rhetoric and calls for retaliatory violence exploded online.

    Katie Paul, director of the Tech Transparency Project, says that this type of rhetoric has been pretty commonplace in online spaces since 2020, especially since January 6. But she’s particularly concerned about the heightened rhetoric in tandem with aggressive recruitment efforts by militia groups, who historically have opportunistically pounced on moments of national chaos to encourage organizing and training. Paul says the confluence of militia activity and heightened rhetoric could inspire “individuals who are susceptible to online influence and acceleration” who “could be triggered to act on their own.” She also sees militias’ emphasis on organization over knee-jerk calls for retaliatory violence as a sign that the movement is focused on long-term goals and growth.

    In the past year, APIII has made a significant recruitment push across major social media platforms, such as Facebook, X, TikTok, and even NextDoor, according to research from the Tech Transparency Project shared exclusively with WIRED. Despite featuring “Three Percenters” in its name—a clear nod to the militia movement—APIII touts a disclaimer on its website insisting that it is not a militia. That’s in line with the broader trend seen since January 6, 2021, when paramilitary activists scrambled to distance themselves from the militia movement implicated in the Capitol riot.

    But groups like APIII have increasingly been trying to rebuild the militia movement from the ground up, urging people to get organized in their communities. According to Seddon, APIII and the Light Foot Militia, another decentralized paramilitary group with chapters nationwide, have been coordinating closely. Last month, a video circulated on TikTok and Facebook purporting to show a training meetup with APIII and Light Foot in an undisclosed location. About 100 heavily armed men and women in fatigues are shown standing in formation. Text over the video reads: “Now is the time to join a MF’in Militia, Not a Political Party,” and “We came into this world screaming covered in blood and will be leaving the same way. No retreat no surrender.”

    Tess Owen


  • Elon Musk ‘Fully Endorses’ Donald Trump After Deadly Rally Shooting


    Elon Musk endorsed Donald Trump for reelection Saturday evening, shortly after gunshots appeared to have been fired at the former president’s campaign rally in Pennsylvania. Trump was escorted off the stage by the Secret Service and was seen with blood on his face afterward.

    “I fully endorse President Trump and hope for his rapid recovery,” Musk wrote on X Saturday.

    A few minutes later, Musk posted, “Last time America had a candidate this tough was Theodore Roosevelt.” The owner of X, formerly Twitter, also shared a photograph of Trump raising his fist, blood on his face, as Secret Service agents surrounded him.

    Trump was only minutes into his Saturday rally speech when Secret Service officers swarmed him after a series of popping noises. As of publication, the Secret Service is investigating the noises but has yet to officially identify them as gunshots. Pennsylvania district attorney Richard A. Goldfinger told the Associated Press that the suspect and one attendee are dead. In a statement to WIRED, Trump spokesperson Steven Cheung confirmed that Trump was “fine.”

    “President Trump thanks law enforcement and first responders for their quick action during this heinous act,” says Cheung. “He is fine and is being checked out at a local medical facility. More details will follow.”

    Musk made a donation to the pro-Trump America PAC on Friday, Bloomberg reported. The report did not specify the amount Musk donated, but the donation was characterized as a “sizable amount.” The PAC is required to disclose its donors by the end of next week.

    Musk’s Friday donation was his first this election cycle after he declined to endorse any presidential candidate in multiple interviews. In March, Musk told former CNN anchor Don Lemon that he was “leaning away” from President Joe Biden when asked for his preferred candidate.

    In a statement, the White House said that Biden had received an “initial briefing on the incident.”

    This is a developing story. Please check back for updates.

    Makena Kelly


  • Meta rolls back restrictions on Trump’s Facebook and Instagram accounts


    Meta, the parent company of social media platforms such as Facebook and Instagram, has decided to remove restrictions placed on former President Donald Trump’s accounts.

    Meta updated its original January 2023 statement announcing the end of Trump’s suspension from Facebook and Instagram to reflect the presumptive Republican presidential nominee’s new online status. Axios first reported the news.

    Meta removed Trump from all of its platforms following the attack on the US Capitol on Jan. 6, 2021 amid “extreme and highly unusual circumstances,” according to Meta’s original statement.

    Seven people died as a result of violence during the attack on the Capitol building or in its aftermath.

    The following May, the Oversight Board ruled that Facebook had failed to apply an appropriate penalty when it indefinitely suspended Trump’s accounts for “severely” violating Facebook and Instagram’s community guidelines and standards. In a video statement released less than three hours after the violence began, Trump told the insurrectionists, “We love you. You’re very special,” and called them “great patriots.” Those and other statements made in the wake of the US Capitol attack convinced the board that Trump had violated the company’s standard against praising or supporting people engaged in violence on its platforms.

    Two years later, Meta restored Trump’s accounts following the time-bound suspension, with stricter penalties for violating its terms of service, a standard higher than the one applied to any other user on Facebook and Instagram. Meta noted in its latest update that the ex-president will now be subject to the same standard as everyone else.

    “With the party conventions taking place shortly, including the Republican convention next week, the candidates for President of the United States will soon be formally nominated,” according to Meta’s statement. “In assessing our responsibility to allow political expression, we believe that the American people should be able to hear from the nominees for President on the same basis.”

    Twitter, now X, also took action against Trump in the wake of the Jan. 6 insurrection over three tweets he posted that were flagged for inciting violence. It started with a 12-hour suspension on Jan. 6, 2021. Two days later, Twitter banned him outright after determining that subsequent posts also violated its community standards. The following year, Twitter’s new owner, Elon Musk, ran an informal poll on his account asking whether he should lift Trump’s ban, and reinstated the account a few days later.

    Danny Gallagher


  • Elon Musk Couldn’t Beat Him. AI Just Might


    At times, the effects of it feel uncontainable.

    This is the third election cycle in the US—2016, 2020, 2024—where social media is going to have played a really significant role in the election. The US still hasn’t gotten to grips with the fact that our democracy is becoming more and more precarious. It’s becoming more polarized, it’s becoming more hateful, it’s becoming less capable of consensus. With the 2020 election we saw that people no longer even accept elections are real. It’s important that we start to put into place the transparency and the accountability that’s required for these platforms that control the information ecosystem that has such an enormous impact on our electoral cycles.

    Why do you think it’s been so difficult to regulate social media and the harm it can cause?

    Countries around the world are doing it. The UK legislated the Online Safety Act. The EU legislated the Digital Services Act. Canada has legislated through C-63, and I’m going to give evidence in Ottawa at some point on that. In the US, we have seen social media companies put up their most aggressive defenses anywhere in the world. They’re spending tens of millions of dollars on lobbying on the Hill, in supporting candidates, trying to stop the inevitable from happening.

    Something’s gotta work, no?

    Ironically, I think the thing that is most likely to eventually move lawmakers is parents, and parents in particular worrying about the impact of social media platforms on their kids’ mental health. And that’s the thing with social media, it affects everything. CCDH looks at the effects of social media dysregulation on our ability to deal with the climate crisis, on sexual and reproductive rights, on public health and vaccines during the pandemic, on identity-based hate and kids. It’s the kids’ thing—really, it just is such an unimpeachable case for change.

    My wife and I are having our first soon. I understand what you would do to defend your kids from being harmed. I think that when you’ve got platforms that are hurting our kids at such a scale, it is inevitable that change will come.

    The optimist in me hopes you are right. The next generation should inherit a better world, but so much is working against that.

    You know, one of the things that really scares me, we did some polling last year that showed that young people for the first time ever, 14- to 17-year-olds—the first generation who were raised on algorithmically ordered short-form video platforms—they are the most conspiracist generation and age cohort of any in America.

    Oh wow.

    Old people are slightly more likely to believe conspiracy theories. But it goes down as you get younger and then 14- to 17-year-olds, bam, the highest of all of them. We did that by testing across nine conspiracy theories: transphobic conspiracy theories, climate-denying conspiracy theories, racist conspiracy theories, antisemitic conspiracy theories, conspiracy theories about the deep state. And on every single one, young people were more likely to believe it. And it’s because we’ve created for them an information ecosystem that’s fundamentally chaotic.

    And is only getting more chaotic.

    Look, the way that tyrants retain power is not just by lying to people, it’s by making them unable to tell what truth is. And it creates apathy. Apathy is the tool of the tyrant. It was true with the Soviet Union. It was true with Afghanistan. There’s no secret to the fact that CCDH’s senior leadership is made up of people who come from places where we’ve seen this kind of destruction of the information ecosystem lead to tyrannical government. So, yeah, there is this awareness that things could get real bad real fast. And you’re right in saying that we worry about our kids, and we want to make our world better for them.

    Jason Parham


  • The EU Is Coming for X’s Paid Blue Checks


    Paid-for blue checks on social media network X deceive users and are abused by malicious actors, the European Union said today, threatening the Elon Musk–owned platform with millions of dollars in fines unless the company makes changes.

    Enabling any account to pay for a verification breaches the EU’s Digital Services Act (DSA), European Commission officials said on Friday, because it “negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts.” X now has a chance to respond to the findings. If Musk cannot reach a resolution with the EU, the company faces fines of up to 6 percent of its global annual turnover.

    Blue checks, which appear next to account names of X Premium subscribers, have been the subject of controversy since Musk acquired the platform in 2022. “Back in the day, blue checks used to mean trustworthy sources of information. Now with X, our preliminary view is that they deceive users and infringe the DSA,” EU internal market commissioner Thierry Breton said in a statement. “X has now the right of defense—but if our view is confirmed we will impose fines and require significant changes.”

    X did not reply to WIRED’s request for comment. But on X, CEO Linda Yaccarino hit back. “A democratized system, allowing everyone across Europe to access verification, is better than just the privileged few being verified,” she said. “We stand with everyone on X and in Europe who believes in the open flow of information and supports innovation.”

    Before Musk took over X, formerly known as Twitter, blue checks were used to verify the identity of influential accounts, ranging from the US Centers for Disease Control and Prevention to celebrity Kim Kardashian. Approved by Twitter staff, blue checks were also common among active researchers and journalists, signaling that they were reliable sources of information.

    Supporters of that system argued it helped users identify trustworthy voices, while limiting scammers and impersonators. But Musk decried the arrangement as elitist and “corrupt to the core.” The ability to buy a blue tick for $8 per month was, he said, an antidote to “Twitter’s current lords & peasants” set-up. “Power to the people!” he posted, as he announced the new subscriber model.

    Yet after a string of scandals—NBA star LeBron James was among high-profile figures targeted by impersonator accounts with paid-for blue checks—X introduced a more complicated color-coded system that Musk described as “painful, but necessary.” Verified companies can get gold checks, gray checks go to governments, and in April 2024 users considered “influential” had their blue checks restored for free.

    Despite those changes, the EU said on Friday that X’s verification system does not correspond with industry practice. Officials also claimed that X does not comply with local rules on advertising transparency and fails to give researchers adequate access to its public data, leaving them to rely on methods such as scraping. The fees for access to X’s API—enterprise packages start at $42,000 per month—either dissuade researchers from carrying out projects or force them to pay disproportionately high fees, the Commission said. “In our view, X doesn’t comply with the DSA in key transparency areas,” EU competition chief Margrethe Vestager said in a post on X, adding this was the first time a company had been charged with “preliminary findings” under the Digital Services Act.

    The X reprimand is the latest in a flurry issued to big tech companies by the Commission, as European regulators leverage new rules designed to curb tech giants’ market power and improve the way they operate. The EU gave no deadline for X to respond to its findings.

    In the past month, Apple, Microsoft, and Meta have all been accused of breaking EU rules. Meta and Apple must resolve their cases before March 2025 to avoid fines. Yesterday, Apple said it would make its Tap and Go wallet technology available to rivals in its latest concession to local regulator demands.

    Morgan Meaker
