ReportWire

Tag: social media

  • Louisiana dad says “it’s disturbing” after deepfake images of his daughter allegedly shared


    A Louisiana dad spoke out after explicit deepfake images of his 13-year-old daughter and others were allegedly shared, saying, “It’s disturbing. Those pictures are horrible. They’re extremely explicit, and they look real. You cannot tell the difference.”


  • How Joyce Carol Oates Posted Her Way to Social Media Glory


    Although she has effortlessly whacked Musk multiple times, Oates isn’t overly concerned with the billionaire. She posts frequently on both Bluesky and X about a range of topics, and is more than happy to respond to fans and critics alike on the platforms. Since publicly humiliating Musk and sending him into a tizzy, she has gone on to tweet political musings about Zohran Mamdani and Charlie Kirk, as well as less fraught fare about Faulkner and Hemingway and a photo of her beloved cat.

    Those who are new to the online life of Joyce Carol Oates may be surprised to learn that she’s been doing this for years. At this point, she’s arguably as prolific a poster as she is a novelist, logging more than 180,000 posts on her official X account @JoyceCarolOates as of the publication of this story. Years ago, she was more likely to be the dunkee than the dunker: Oates experienced the other side of viral fame in 2014, when she loosed this hot take on another prolific but controversial writer: “Though Woody Allen has been much denounced, very likely many of his denouncers greatly admire Nabokov’s ‘Lolita.’ No contradiction?” she tweeted (it was still called tweeting back then). The false equivalency didn’t go over well; at the time, the public flogged Oates for her support of Allen. “Thats not the same at all thats terrible youre terrible but thank you for inventing oatmeal,” read one viral response, which ratioed Oates’s original tweet. (And no, despite her last name, Joyce Carol Oates didn’t invent oatmeal.)

    Years later, Oates made another Twitter faux pas, taking a gravely serious stance on innocent Halloween decorations. “You can always recognize a place in which no one is feeling much or any grief for a lost loved one & death, dying, & everyone you love decomposing to bones is just a joke,” posted Oates on October 1, 2021, drawing a wide variety of scornful reactions.

    Oates kept posting through it—and now it seems that was the correct strategy. Time heals all wounds on the internet—time, and banger tweets that eviscerate the world’s richest man. Joyce Carol Oates? More like Joyce Carol Roasts.


    Chris Murphy


  • You’re Not Using Your LinkedIn Profile Correctly 


    LinkedIn is not just for job seekers. If you are in sales, your profile can be one of the most powerful tools to build your brand, credibility, and revenue. But to make it work, you have to approach LinkedIn differently. 

    The goal here is not to attract recruiters. It is to attract clients. Your profile should make someone want to spend time with you, believe that you add value, and ultimately decide you are worth meeting. 

    First impressions matter more in sales 

    Think about what happens when you send a LinkedIn message. On mobile, your prospect will only see your name and that you sent them a message. On desktop, they will see your photo, your name, and your headline. If it’s an InMail, they will immediately know you are not connected to them and will assume you want something. 

    That means before they even read your note, they are already deciding whether to engage. What do they have to go on? Only three things: your photo, your name, and your headline. 

    If you do not get these three right, your message will likely go ignored. And the reality is, even with all three, you may still get ignored. But without them, you have no chance. Here’s how to approach it. 

    Step 1: Get the photo right 

    Your photo should be professional, friendly, and approachable. It should be visible to everyone, not just your connections. You want your prospect to feel like you are credible but also someone they could talk to. 

    Step 2: Rewrite your headline to sell value, not your job title 

    This is where most salespeople get it wrong. If your headline just says “vice president at ABC Company,” it does nothing for your prospect. They already know you want to sell them something. What they are asking themselves is, Why should I meet with you instead of ignoring you like I ignore the dozens of other sales messages I get each week? 

    Fewer than 10 words of your headline will show on the screen. Make those words count. Focus not on what you sell, but on why someone would want to meet with you. 

    For example: 

    • Instead of “VP at ABC Investments,” try “Helping executives get 2x returns.” 

    That line alone is more likely to get your message opened. 

    Step 3: Make your profile prove your credibility 

    Once you get their attention, prospects will often click into your profile to see if you are real, credible, and worth their time. What are they looking for? 

    • Your About section: This does not need to be your job-seeker elevator pitch. Instead, use it to demonstrate credibility. Make it clear that you have the expertise, background, and track record to be trusted. 
    • Your company history: Prospects want to see stability: how long you have been at your current company, which companies you have worked for, and whether you look reliable. 
    • Featured section: Turn this on. Showcase awards you won, articles you wrote, or major wins that build credibility. 
    • Recommendations: Strong recommendations create the same effect as referrals in sales, as they give your prospect fear of missing out if they don’t meet with you. 

    Why sales is different 

    For job seekers, LinkedIn is about telling a story of career progression. For recruiters, it is about matching keywords. For salespeople, it is about one word: credibility. 

    You want your profile to answer one simple question for your prospect: “Is this person worth a meeting?” 

    When your photo is approachable, your headline sells value, and your profile proves credibility, the answer will almost always be yes. 



    Steven Perlman


  • Stewart Rhodes Relaunched the Oath Keepers. Even Old Oath Keepers Don’t Care


    Stewart Rhodes announced last week that he is relaunching the Oath Keepers, his anti-government militia, which virtually disappeared after dozens of its members—including Rhodes—were arrested for their roles in the January 6 attack on the Capitol.

    Rhodes, speaking to the Gateway Pundit this week, says he sees the relaunched group as playing a role in combating what he labeled an “insurrection by the left” on the streets of US cities. “Right now, under federal statutes, President Trump can call us up as the militia if he sees it necessary, especially for three purposes: to repel invasions, to suppress insurrections, and to execute the laws of the union,” Rhodes said.

    But in the days since Rhodes announced the group’s return, experts, former members, and online chatter suggest there is little to no interest in restarting what was, at one point, one of the largest militias in America, with a leaked database listing 38,000 supposed members in 2021. This hasn’t stopped Rhodes from asking potential new members and supporters to send money in support of the cause.

    But even former Oath Keepers are uninterested. Janet Arroyo, who ran an Oath Keepers chapter in Chino Valley, Arizona, with her husband Jim Arroyo prior to the January 6, 2021 attack on the Capitol, says they have not heard from Rhodes in six years and had no plans to rejoin his group.

    “He hasn’t reached out during his incarceration, nor since being released,” says Arroyo. “No hard feelings, but we are doing what we do and don’t spend a lot of time wondering what he’s up to. The dumb DC stunt has scared a lot of great patriots into hiding. My guess is he won’t be successful.”

    Another former Oath Keeper, Jessica Watkins, an Army veteran who was sentenced to eight and a half years in prison for her role in the Capitol attack, says she hadn’t even heard about the relaunch when WIRED contacted her this week. “I have not heard of a relaunch, but most J6ers I know are trying to rebuild their lives,” says Watkins, who added that even if she wanted to rejoin, she would be unable to do so as she had her sentence commuted rather than being pardoned. “Felons are not allowed to be in the Oath Keepers or work with them.”

    Kelly Meggs, who headed up the Florida chapter of the Oath Keepers and was convicted of seditious conspiracy for his part in the attack on the Capitol, says he won’t be joining the relaunched Oath Keepers, as he is concerned about being targeted again when Democrats return to power. “I am more worried about the future,” says Meggs. “I think four and five years from now, eight years from now, 12 years from now, whenever it is, anyone that is a member of these organizations stands at risk of what I went through.”


    David Gilbert


  • Reese Witherspoon Told Harvard Business School There’s One Social Platform Companies Need to Win On


    The award-winning actor, producer, and serial entrepreneur launched her media company, which focuses on elevating women’s stories, with co-founder Seth Brodsky in 2016. Over the past nine years, the Los Angeles-based business, which manages Reese’s Book Club, has built up an audience of more than 4 million across its accounts. That’s not counting Witherspoon’s personal following, which has swelled to more than 45 million across Instagram, TikTok, and Threads.  

    That massive online presence enabled her company to compile data about its audience and leverage those insights, all while bringing customer acquisition costs close to zero, Witherspoon said. Still, there is one platform that even Hello Sunshine has yet to master. 

    “One thing I think is a miss for us is YouTube,” said Witherspoon, who spoke to nearly 200 aspiring founders enrolled in Reza Satchu’s popular class, The Founder Mindset, which aims to teach MBA students about the judgment and characteristics needed to succeed as an entrepreneur. “We don’t have a big presence there.”

    Witherspoon joined the class last Tuesday for the inaugural session of a new Harvard Business School case study, which details her path from actor to founder to exit. In August 2021, Witherspoon sold a majority stake in Hello Sunshine to Candle Media, a firm backed by Blackstone, in a deal that valued the business around $900 million.

    As Witherspoon told the class, YouTube has one of the fastest-growing addressable audiences. More than 2 billion people log onto the platform each month. That’s more than double Netflix, Disney+, HBO Max, and Amazon Prime combined. YouTube has become television—the platform accounts for an industry-leading 12.4 percent of total watch time, according to Nielsen data.

    “It’s just been a really tough one for us,” Witherspoon said. “If your business can crack YouTube, that’s pretty major.”


    Ali Donaldson


  • Jack Dorsey funds diVine, a Vine reboot that includes Vine’s video archive | TechCrunch


    As generative AI content starts to fill our social apps, a project to bring back Vine’s six-second looping videos is launching with Twitter co-founder Jack Dorsey’s backing. On Thursday, a new app called diVine will give access to more than 100,000 archived Vine videos, restored from an older backup that was created before Vine’s shutdown.

    The app won’t just exist as a walk down memory lane; it will also allow users to create profiles and upload their own new Vine videos. However, unlike on traditional social media, where AI content is often haphazardly labeled, diVine will flag suspected generative AI content and prevent it from being posted.


    DiVine’s creation was financed by Jack Dorsey’s nonprofit, “and Other Stuff,” formed in May 2025. The new effort is focused on funding experimental open source projects and other tools that have the potential to transform the social media landscape.

    To build diVine, Evan Henshaw-Plath, an early Twitter employee and member of “and Other Stuff,” explored the Vine archive. After Twitter announced it was shutting down the short video app in 2016, its videos were backed up by a group called the Archive Team. This community archiving project is not affiliated with Archive.org, but is rather a collective that works together to save internet websites that are in danger of being lost.

    Unfortunately, the group had saved Vine’s content as large, 40-50 GB binary files, which wouldn’t be accessible to someone who just wanted to watch some old Vine videos. The fact that the archive existed prompted Henshaw-Plath (who goes by the name Rabble) to see if it was possible to extract the old Vine content to serve as the basis for a new Vine-like mobile app.


    “So basically, I’m like, can we do something that’s kind of nostalgic?” he told TechCrunch. “Can we do something that takes us back, that lets us see those old things, but also lets us see an era of social media where you could either have control of your algorithms, or you could choose who you follow, and it’s just your feed, and where you know that it’s a real person that recorded the video?”

    Rabble spent a couple of months writing big data scripts and figuring out how the files worked, then reconstructed them along with the information on the old Vine users and the user engagement with the videos, like their views and even a subset of the original comments.


    “I wasn’t able to get all of them out, but I was able to get a lot out and basically reconstruct these Vines and these Vine users, and give each person a new user [profile] on this open network,” he said.

    Rabble estimates the app contains a “good percentage” of the most popular Vine videos, but not a large number of the long tail. For instance, he says there were millions of K-pop-focused videos that were never even archived.


    “We have about 150,000 to 200,000 of the videos from about 60,000 of the creators,” he noted, adding that, originally, Vine had a couple of million users and a few million creators by comparison.

    Vine creators, who still own the copyright to their work, can send diVine a DMCA takedown request if they want their Vines removed, or they can verify they’re the account holder by demonstrating they’re still in possession of the social media accounts that were originally listed in their Vine bio. (This process isn’t automated, though, so there could be a delay if a large number of creators try to do this at once.)

    Once they have their account back, they can also choose to post new videos or upload their old content that the restoration process missed.

    To verify that new video uploads are human-made, Rabble is using technology from the human rights nonprofit the Guardian Project, which helps to verify that content was actually recorded on a smartphone, along with other checks.


    Plus, because it’s built on Nostr, a decentralized protocol favored by Dorsey, and is open source, developers can create their own apps and run their own hosts, relays, and media servers.

    “Nostr – the underlying open source protocol being used by diVine – is empowering developers to create a new generation of apps without the need for VC-backing, toxic business models or huge teams of engineers,” Jack Dorsey said in a provided statement. “The reason I funded the non-profit, and Other Stuff, is to allow creative engineers like Rabble to show what’s possible in this new world, by using permissionless protocols which can’t be shut down based on the whim of a corporate owner.”

    Twitter/X’s current owner, Elon Musk, has also promised to bring back Vine, having announced in August that the company discovered the old video archive. But so far, nothing has been publicly launched. The Dorsey-backed diVine project, meanwhile, believes that because the content is coming from an online archive and creators still own their copyrights, it’s fair use.


    Rabble also believes there’s consumer demand for this type of non-AI, social experience, despite the popularity of generative AI content and widespread adoption of apps like OpenAI’s Sora and Meta AI.

    “Companies see the AI engagement and they think that people want it,” explained Rabble. “They’re confusing, like — yes, people engage with it; yes, we’re using these things — but we also want agency over our lives and over our social experiences. So I think there’s a nostalgia for the early Web 2.0 era, for the blogging era, for the era that gave us podcasting, the era that you were building communities, instead of just gaming the algorithm,” he said.

    DiVine is available on both iOS and Android at diVine.video.


    Sarah Perez


  • Denmark’s government aims to ban access to social media for children under 15


    Denmark’s government on Friday announced an agreement to ban access to social media for anyone under 15, ratcheting up pressure on Big Tech platforms as concerns grow that kids are getting too swept up in a digitized world of harmful content and commercial interests.

    The move would give some parents — after a specific assessment — the right to let their children access social media from age 13. It wasn’t immediately clear how such a ban would be enforced: Many tech platforms already restrict pre-teens from signing up. Officials and experts say such restrictions don’t always work.

    Such a measure would be among the most sweeping steps yet by a European Union government to limit social media use among teens and younger children, an issue that has drawn concern in many parts of an increasingly online world.

    Speaking to The Associated Press, Caroline Stage, Denmark’s minister for digital affairs, said 94% of Danish children under age 13 have profiles on at least one social media platform, and more than half of those under 10 do.

    “The amount of time they spend online — the amount of violence, self-harm that they are exposed to online — is simply too great a risk for our children,” she said, while praising tech giants as “the greatest companies that we have. They have an absurd amount of money available, but they’re simply not willing to invest in the safety of our children, invest in the safety of all of us.”

    No rush to legislation, no loopholes for tech giants

    Stage said a ban won’t take effect immediately. Lawmakers allied on the issue from across the political spectrum, who make up a majority in parliament, will likely take months to pass the relevant legislation.

    “I can assure you that Denmark will hurry, but we won’t do it too quickly because we need to make sure that the regulation is right and that there is no loopholes for the tech giants to go through,” Stage said. Her ministry said pressure from tech giants’ business models was “too massive.”

    It follows a move in December in Australia, where parliament enacted the world’s first ban on social media for children — setting the minimum age at 16.

    That made platforms including TikTok, Facebook, Snapchat, Reddit, X and Instagram subject to fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent children younger than 16 from holding accounts.

    Officials in Denmark didn’t say how such a ban would be enforced in a world where millions of children have easy access to screens. But Stage noted that Denmark has a national electronic ID system — nearly all Danish citizens over age 13 have such an ID — and plans to set up an age-verification app. Several other EU countries are testing such apps.

    “We cannot force the tech giants to use our app, but what we can do is force the tech giants to make proper age verification, and if they don’t, we will be able to enforce through the EU commission and make sure that they will be fined up to 6% of their global income.”

    Aiming to shield kids from harmful content online

    Many governments have been grappling with ways of limiting harmful fallout from online technologies, without overly squelching their promise. Stage said Denmark’s legislative push was “not about excluding children from everything digital” — but keeping them away from harmful content.

    China — which manufactures many of the world’s digital devices — has set limits on online game time and smartphone time for kids.

    Prosecutors in Paris this week announced an investigation into allegations that TikTok allows content promoting suicide and that its algorithms may encourage vulnerable young people to take their own lives.

    “Children and young people have their sleep disrupted, lose their peace and concentration, and experience increasing pressure from digital relationships where adults are not always present,” the Danish ministry said. “This is a development that no parent, teacher or educator can stop alone.”

    The EU’s Digital Services Act, which took effect two years ago, forbids children younger than 13 from holding accounts on social media like TikTok and Instagram, video sharing platforms like YouTube and Twitch, and sites like Reddit and Discord, as well as AI companions.

    Many social media platforms have for years banned anyone 13 or under from signing up for their services. TikTok users can verify their ages by submitting a selfie that will be analyzed to estimate their age. Meta Platforms, parent of Instagram and Facebook, says it uses a similar system for video selfies and AI to help figure out a user’s age.

    TikTok said in an email that it recognizes the importance of Denmark’s initiative.

    “At TikTok, we have steadfastly created a robust trust and safety track record, with more than 50 preset safety features for teen accounts, as well as age-appropriate experiences and tools for guardians such as Family Pairing,” a tool allowing parents, guardians, and teens to customize safety settings.

    “We look forward to working constructively on solutions that apply consistently across the industry,” it added.

    Meta didn’t respond immediately to requests for comment from the AP.

    “We’ve given the tech giants so many chances to stand up and to do something about what is happening on their platforms. They haven’t done it,” said Stage, the Danish minister. “So now we will take over the steering wheel and make sure that our children’s futures are safe.”

    ___

    AP Business Writer Kelvin Chan contributed to this report.



  • Mastodon’s latest software update brings quote posts to all server operators | TechCrunch


    Decentralized open source social network Mastodon is rolling out its latest release, version 4.5, which brings support for Quote Posts to all server operators, along with other features for admins, conversation improvements, and more.

    The addition of quote posts is one of the bigger changes to how Mastodon operates, as the network attempts to compete with larger rivals like X and Threads. Quoting can be a conversation driver and is considered a baseline feature for text-first social networks. But Mastodon wanted to ensure the feature launched with more user protections so as not to change the culture of its network.

    On X, quote posts (known as quote tweets back when X was Twitter) contributed to a culture of “dunking,” where users would often deride others by responding to their post with cruel jokes or insults. That remains a concern today among newer competitors, like Threads and Bluesky.

    To address this problem, Mastodon’s version of quote posts comes with added safety controls.


    The feature was also initially rolled out to the larger Mastodon servers at mastodon.online and mastodon.social in September, ahead of the 4.5 software update, giving users time to adjust to the format.

    Mastodon gives users several ways to control how their posts can be quoted. For instance, users decide who can quote them through the feature’s settings. Here, you can choose between options like “Anyone,” “Followers only,” or “Just me.” Additionally, users can control the visibility of quote posts by setting them to be visible to the public, to followers only, or a setting called “quiet public,” which makes the quotes public but removes them from Mastodon’s search, trends, and public timeline.

    In addition, users can override their default settings on a post-by-post basis, if need be, which could be useful at those times when you want to quote someone without attracting unwanted attention.

    Mastodon will also alert the user being quoted in the app, so they can remove their original post from the other person’s quote post, if they feel the need to. Plus, users can opt to block others to prevent them from seeing and quoting their posts in the future.

    While the broader rollout of quote posts is the major addition in the 4.5 software release, the update also fixes issues where users on older servers running 4.4 and earlier versions would sometimes miss seeing replies.

    Screenshot: “more replies found” shown in response to a quote post. Image Credits: Mastodon

    Server operators are also gaining new tools to optionally disable content feeds, set a local feed as their homepage, block specific users, and more. The moderation interface has been updated to display needed context, like link previews and quote posts in messages, to aid in decision-making.

    Screenshot: how Mastodon server operators can adjust their settings for local feeds. Image Credits: Mastodon

    Meanwhile, the 4.5 release also brings native emoji support to Mastodon’s web interface.

    Mastodon continues to be one of the larger networks on the fediverse, the name for the open social web powered by the ActivityPub social networking protocol. Overall, the fediverse has nearly 12 million users, according to growth tracker FediDB, with Mastodon accounting for north of 8 million of that figure. However, its monthly active users number only around 670,000.

    Threads, which integrates with ActivityPub, isn’t counted in these figures because it’s not a full integration. Threads has more than 400 million monthly users and 150 million daily active users.


    Sarah Perez


  • Denmark eyes new law to protect citizens from AI deepfakes


    COPENHAGEN, Denmark — In 2021, Danish video game live-streamer Marie Watson received an image of herself from an unknown Instagram account.

    She instantly recognized the holiday snap from her Instagram account, but something was different: Her clothing had been digitally removed to make her appear naked. It was a deepfake.

    “It overwhelmed me so much,” Watson recalled. “I just started bursting out in tears, because suddenly, I was there naked.”

    In the four years since her experience, deepfakes — highly realistic artificial intelligence-generated images, videos or audio of real people or events — have become not only easier to make worldwide but also look or sound exponentially more realistic. That’s thanks to technological advances and the proliferation of generative AI tools, including video generation tools from OpenAI and Google.

    These tools give millions of users the ability to easily spit out content, including for nefarious purposes that range from depicting celebrities such as Taylor Swift and Katy Perry to disrupting elections and humiliating teens and women.

    In response, Denmark is seeking to protect ordinary Danes, as well as performers and artists who might have their appearance or voice imitated and shared without their permission. A bill that’s expected to pass early next year would change copyright law by imposing a ban on the sharing of deepfakes to protect citizens’ personal characteristics — such as their appearance or voice — from being imitated and shared online without their consent.

    If enacted, Danish citizens would get the copyright over their own likeness. In theory, they then would be able to demand that online platforms take down content shared without their permission. The law would still allow for parodies and satire, though it’s unclear how that will be determined.

    Experts and officials say the Danish legislation would be among the most extensive steps yet taken by a government to combat misinformation through deepfakes.

    Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI, said that he applauds the Danish government for recognizing that the law needs to change.

    “Because right now, when people say ‘what can I do to protect myself from being deepfaked?’ the answer I have to give most of the time is: ‘There isn’t a huge amount you can do,’” he said, “without me basically saying, ‘scrub yourself from the internet entirely.’ Which isn’t really possible.”

    He added: “We can’t just pretend that this is business as usual for how we think about those key parts of our identity and our dignity.”

    U.S. President Donald Trump signed bipartisan legislation in May that makes it illegal to knowingly publish or threaten to publish intimate images without a person’s consent, including deepfakes. Last year, South Korea rolled out measures to curb deepfake porn, including harsher punishment and stepped up regulations for social media platforms.

    Danish Culture Minister Jakob Engel-Schmidt said that the bill has broad support from lawmakers in Copenhagen, because such digital manipulations can stir doubts about reality and spread misinformation.

    “If you’re able to deepfake a politician without her or him being able to have that product taken down, that will undermine our democracy,” he told reporters during an AI and copyright conference in September.

    The law would apply only in Denmark, and is unlikely to involve fines or imprisonment for social media users. But big tech platforms that fail to remove deepfakes could face severe fines, Engel-Schmidt said.

    Ajder said Google-owned YouTube, for example, has a “very, very good system for getting the balance between copyright protection and freedom of creativity.”

    The platform’s efforts suggest that it recognizes “the scale of the challenge that is already here and how much deeper it’s going to become,” he added.

    Twitch, TikTok and Meta, which owns Facebook and Instagram, didn’t respond to requests for comment.

    Engel-Schmidt said that Denmark, the current holder of the European Union’s rotating presidency, had received interest in its proposed legislation from several other EU members, including France and Ireland.

    Intellectual property lawyer Jakob Plesner Mathiasen said that the legislation shows the widespread need to combat the online danger that’s now infused into every aspect of Danish life.

    “I think it definitely goes to say that the ministry wouldn’t make this bill, if there hadn’t been any occasion for it,” he said. “We’re seeing it with fake news, with government elections. We are seeing it with pornography, and we’re also seeing it also with famous people and also everyday people — like you and me.”

    The Danish Rights Alliance, which protects the rights of creative industries on the internet, supports the bill; its director says that current copyright law doesn’t go far enough.

    Danish voice actor David Bateson, for example, was at a loss when AI voice clones were shared by thousands of users online. Bateson voiced a character in the popular “Hitman” video game, as well as Danish toymaker Lego’s English advertisements.

    “When we reported this to the online platforms, they say ‘OK, but which regulation are you referring to?’” said Maria Fredenslund, an attorney and the alliance’s director. “We couldn’t point to an exact regulation in Denmark.”

    Watson had heard about fellow influencers who found digitally-altered images of themselves online, but never thought it might happen to her.

    Delving into a dark side of the web where faceless users sell and share deepfake imagery — often of women — she said she was shocked how easy it was to create such pictures using readily available online tools.

    “You could literally just search ‘deepfake generator’ on Google or ‘how to make a deepfake,’ and all these websites and generators would pop up,” the 28-year-old Watson said.

    She is glad her government is taking action, but she isn’t hopeful. She believes more pressure must be applied to social media platforms.

    “It shouldn’t be a thing that you can upload these types of pictures,” she said. “When it’s online, you’re done. You can’t do anything, it’s out of your control.”

    ___

    Stefanie Dazio in Berlin, Kelvin Chan in London, and Barbara Ortutay in San Francisco, contributed to this report.

    [ad_2]

    Source link

  • Denmark Eyes New Law to Protect Citizens From AI Deepfakes

    COPENHAGEN, Denmark (AP) — In 2021, Danish video game live-streamer Marie Watson received an image of herself from an unknown Instagram account.

    She instantly recognized the holiday snap from her Instagram account, but something was different: Her clothing had been digitally removed to make her appear naked. It was a deepfake.

    “It overwhelmed me so much,” Watson recalled. “I just started bursting out in tears, because suddenly, I was there naked.”

    In the four years since her experience, deepfakes — highly realistic artificial intelligence-generated images, videos or audio of real people or events — have become not only easier to make worldwide but also look or sound exponentially more realistic. That’s thanks to technological advances and the proliferation of generative AI tools, including video generation tools from OpenAI and Google.

    These tools give millions of users the ability to churn out content with ease, including for nefarious purposes that range from creating fake imagery of celebrities such as Taylor Swift and Katy Perry to disrupting elections and humiliating teens and women.

    In response, Denmark is seeking to protect ordinary Danes, as well as performers and artists who might have their appearance or voice imitated and shared without their permission. A bill that’s expected to pass early next year would change copyright law by imposing a ban on the sharing of deepfakes to protect citizens’ personal characteristics — such as their appearance or voice — from being imitated and shared online without their consent.

    If enacted, Danish citizens would get the copyright over their own likeness. In theory, they then would be able to demand that online platforms take down content shared without their permission. The law would still allow for parodies and satire, though it’s unclear how that will be determined.

    Experts and officials say the Danish legislation would be among the most extensive steps yet taken by a government to combat misinformation through deepfakes.

    Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI, said that he applauds the Danish government for recognizing that the law needs to change.

    “Because right now, when people say ‘what can I do to protect myself from being deepfaked?’ the answer I have to give most of the time is: ‘There isn’t a huge amount you can do,’” he said, “without me basically saying, ‘scrub yourself from the internet entirely.’ Which isn’t really possible.”

    He added: “We can’t just pretend that this is business as usual for how we think about those key parts of our identity and our dignity.”


    Deepfakes and misinformation

    U.S. President Donald Trump signed bipartisan legislation in May that makes it illegal to knowingly publish or threaten to publish intimate images without a person’s consent, including deepfakes. Last year, South Korea rolled out measures to curb deepfake porn, including harsher punishment and stepped up regulations for social media platforms.

    Danish Culture Minister Jakob Engel-Schmidt said that the bill has broad support from lawmakers in Copenhagen, because such digital manipulations can stir doubts about reality and spread misinformation.

    “If you’re able to deepfake a politician without her or him being able to have that product taken down, that will undermine our democracy,” he told reporters during an AI and copyright conference in September.

    The law would apply only in Denmark, and is unlikely to involve fines or imprisonment for social media users. But big tech platforms that fail to remove deepfakes could face severe fines, Engel-Schmidt said.

    Ajder said Google-owned YouTube, for example, has a “very, very good system for getting the balance between copyright protection and freedom of creativity.”

    The platform’s efforts suggest that it recognizes “the scale of the challenge that is already here and how much deeper it’s going to become,” he added.

    Twitch, TikTok and Meta, which owns Facebook and Instagram, didn’t respond to requests for comment.

    Engel-Schmidt said that Denmark, the current holder of the European Union’s rotating presidency, had received interest in its proposed legislation from several other EU members, including France and Ireland.

    Intellectual property lawyer Jakob Plesner Mathiasen said that the legislation shows the widespread need to combat the online danger that’s now infused into every aspect of Danish life.

    “I think it definitely goes to say that the ministry wouldn’t make this bill, if there hadn’t been any occasion for it,” he said. “We’re seeing it with fake news, with government elections. We are seeing it with pornography, and we’re also seeing it also with famous people and also everyday people — like you and me.”

    The Danish Rights Alliance, which protects the rights of creative industries on the internet, supports the bill, because its director says that current copyright law doesn’t go far enough.

    Danish voice actor David Bateson, for example, was at a loss when AI voice clones were shared by thousands of users online. Bateson voiced a character in the popular “Hitman” video game, as well as Danish toymaker Lego’s English advertisements.

    “When we reported this to the online platforms, they say ‘OK, but which regulation are you referring to?’” said Maria Fredenslund, an attorney and the alliance’s director. “We couldn’t point to an exact regulation in Denmark.”


    ‘When it’s online, you’re done’

    Watson had heard about fellow influencers who found digitally altered images of themselves online, but never thought it might happen to her.

    Delving into a dark side of the web where faceless users sell and share deepfake imagery — often of women — she said she was shocked by how easy it was to create such pictures using readily available online tools.

    “You could literally just search ‘deepfake generator’ on Google or ‘how to make a deepfake,’ and all these websites and generators would pop up,” the 28-year-old Watson said.

    She is glad her government is taking action, but she isn’t confident the law alone will be enough. She believes more pressure must be applied to social media platforms.

    “It shouldn’t be a thing that you can upload these types of pictures,” she said. “When it’s online, you’re done. You can’t do anything, it’s out of your control.”

    Stefanie Dazio in Berlin, Kelvin Chan in London, and Barbara Ortutay in San Francisco, contributed to this report.

    Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

    Associated Press

  • Phony AI-generated videos of Hurricane Melissa flood social media sites

    One viral video shows what appears to be four sharks swimming in a Jamaican hotel’s pool as floodwaters allegedly brought on by Hurricane Melissa swamp the area. Another purportedly depicts Jamaica’s Kingston airport completely ravaged by the storm. But neither of these events happened; the clips are AI-generated misinformation that circulated on social media as the storm churned across the Caribbean this week.

    These videos and others have racked up millions of views on social media platforms, including X, TikTok and Instagram.

    Some of the clips appear to be spliced together or based on footage of old disasters. Others appear to be created entirely by AI video generators.

    “I am in so many WhatsApp groups and I see all of these videos coming. Many of them are fake,” said Jamaica’s Education Minister Dana Morris Dixon on Monday. “And so we urge you to please listen to the official channels.”

    Although it’s common for hoax photos, videos and misinformation to surface during natural disasters, they’re usually debunked quickly. But videos generated by new artificial intelligence tools have taken the problem to a new level by making it easy to create and spread realistic clips.

    In this case, the content has been showing up in social media feeds alongside genuine footage shot by local residents and news organizations, sowing confusion among social media users.

    Here are a few steps you can take to reduce your chances of getting fooled.

    Check for watermarks

    Look for a watermark logo indicating that the video was generated by Sora, a text-to-video tool launched by ChatGPT-maker OpenAI, or other AI video generators. These will usually appear in one of the corners of a video or photo.

    It is quite easy to remove these logos using third-party tools, so you can also check for blurs, pixelation or discoloration where a watermark should be.
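
    Provenance metadata offers a complementary check. As a purely illustrative sketch, and not any platform’s official tooling, the snippet below scans a file’s raw bytes for Content Credentials (C2PA) markers, which some AI generators embed alongside visible watermarks. The marker strings are assumptions chosen for demonstration, and the absence of a match proves nothing, since metadata is easily stripped.

```python
def has_c2pa_marker(path):
    """Return True if the file contains byte patterns commonly
    associated with C2PA/JUMBF provenance metadata.

    Illustrative only: a negative result does NOT mean the file is
    authentic, because provenance metadata is trivially stripped.
    """
    markers = (b"c2pa", b"jumb")  # assumed signature strings
    with open(path, "rb") as f:
        data = f.read()
    return any(m in data for m in markers)
```

    A positive hit is a reason to dig further, not a verdict; dedicated verification tools that validate the cryptographic signatures are far more reliable than a raw byte scan like this.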

    Take a closer look

    Look more closely at videos for unclear details. While the sharks-in-pool video appears realistic at first glance, it looks less believable upon closer examination because one of the sharks has a strange shape.

    You might see objects that blend together, or details such as lettering on a sign that are garbled, which are telltale signs of AI-generated imagery. Branding is also something to look out for, as many platforms are cautious about reproducing specific company logos.

    Experts say it will become increasingly difficult to tell the difference between reality and deepfakes as the technology improves.

    They noted that Melissa is the first big natural disaster since OpenAI launched the latest version of its video generation tool, Sora, last month.

    “Now, with the rise of easily accessible and powerful tools like Sora, it has become even easier for bad actors to create and distribute highly convincing synthetic videos,” said Sofia Rubinson, a senior editor at NewsGuard, which analyzes online misinformation.

    “In the past, people could often identify fakes through telltale signs like unnatural motion, distorted text, or missing fingers. But as these systems improve, many of those flaws are disappearing, making it increasingly difficult for the average viewer to distinguish AI-generated content from authentic footage.”

    Why create deepfakes around a crisis?

    AI expert Henry Ajder said most of the hurricane deepfakes he’s seen aren’t inherently political. He suspects it’s “much closer to more traditional kind of click-based content, which is to try and get engagement, to try and get clicks.”

    On X, users can get paid based on the amount of engagement their posts get. YouTubers can earn money from ads.

    A video that racks up millions of views could earn the creator a few thousand dollars, Ajder said, not bad for the amount of effort needed.

    Social media accounts also use videos to expand their follower base in order to promote projects, products or services, Ajder said.

    So check who’s posting the video. If the account has a track record of clickbait-style content, be skeptical.

    But keep in mind that the people behind deepfake videos aren’t always trying to hide.

    “Some creators are just trying to do interesting things using AI that they think are going to get people’s attention,” he said.

    So who is behind the account?

    While it’s unclear who exactly created the pool shark video, one version found on Instagram carries the watermark for a TikTok account, Yulian_Studios. That account’s TikTok profile describes itself, in Spanish, as a “Content creator with AI visual effects in the Dominican Republic.”

    The shark video can’t be found on the account’s page, but it does have another AI-generated clip of an obese man clinging to a palm tree as hurricane winds blow in Jamaica.

    Trust your gut

    Context matters. Take a beat to consider whether what you’re seeing is plausible. The Poynter journalism website advises that if you see a situation that seems “exaggerated, unrealistic or not in character,” consider that it could be a deepfake.

    That includes the audio. AI videos used to come with synthetic voice-overs that had an unusual cadence or tone, but newer tools can create synchronized audio that sounds realistic.

    And if you found it on X, check whether a community note, the platform’s user-powered fact-checking tool, is attached.

    One version of the shark pool video on X comes with a community note that says: “This video footage and the voice used were both created by artificial intelligence, it is not real footage of hurricane Melissa in Jamaica.”

    Go to an official source

    Don’t just rely on random strangers on the internet for information. The Jamaican government has been posting storm updates and so has the National Hurricane Center.

  • Australia adds Reddit and Kick to social media platforms banning children under 16

    MELBOURNE, Australia (AP) — Australia has added message board Reddit and livestreaming service Kick to its list of social media platforms that must ban children younger than 16 from holding accounts.

    The platforms join Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube in facing a world-first legal obligation to shut the accounts of younger Australian children from Dec. 10, Communications Minister Anika Wells said on Wednesday.

    Platforms that fail to take reasonable steps to exclude children younger than 16 could be punished with a fine of up to 50 million Australian dollars ($33 million).

    “We have met with several of the social media platforms in the past month so that they understand there is no excuse for failure to implement this law,” Wells told reporters in Canberra.

    “Online platforms use technology to target children with chilling control. We are merely asking that they use that same technology to keep children safe online,” Wells added.

    Australia’s eSafety Commissioner Julie Inman Grant, who will enforce the social media ban, said the list of age-restricted platforms would evolve with new technologies.

    The nine platforms currently age-restricted meet the key requirement that their “sole or significant purpose is to enable online social interaction,” a government statement said.

    Inman Grant said she would work with academics to evaluate the impacts of the ban, including whether children sleep or interact more or become more physically active.

    “We’ll also look for unintended consequences and we’ll be gathering evidence” so that others could learn from Australia’s achievements, Inman Grant said.

    Australia’s move is being closely watched by countries that share concerns about social media impacts on young children.

    European Commission President Ursula von der Leyen told a United Nations forum in New York in September that she was “inspired” by Australia’s “common sense” move to legislate the age restriction.

    Critics of the legislation fear that banning young children from social media will impact the privacy of all users, who must establish they are older than 16.

    Wells recently said the government seeks to keep platform users’ data as private as possible.

    More than 140 Australian and international academics with expertise in fields related to technology and child welfare signed an open letter to Prime Minister Anthony Albanese last year opposing a social media age limit as “too blunt an instrument to address risks effectively.”

  • Reddit will be included in Australia’s looming under-16 social media ban

    Reddit won’t escape Australia’s child social media ban. The Guardian reports that Communications Minister Anika Wells announced Reddit’s addition on Wednesday. The nation’s law, which blocks children under 16 from major social media sites, is scheduled to go into effect on December 10.

    Alongside Reddit, Wells said Australian streaming site Kick would also be included. They join the previously announced Facebook, X, Snapchat, TikTok, YouTube and Instagram. Australia considers the list to be a starting point for the ban and won’t rule out adding more. Other companies under consideration are Discord, Twitch, GitHub and Roblox.

    YouTube was initially excluded because it was considered an educational tool. But after protests from other companies on the list, Australia ultimately added it.

    The ban passed in late 2024. The legislation puts the onus on the platforms, rather than parents, to police underage use. Companies that don’t take reasonable steps to prevent under-16 users from accessing their platforms can face penalties of up to AU$49.5 million (around $32 million).

    “There’s a time and place for social media in Australia, but there’s not a place for predatory algorithms, harmful content and toxic popularity [meters] manipulating Australian children,” Wells said. “Online platforms can target children with chilling control. We are mandating they use that sophisticated technology to protect them.”

  • It’s Been a Year Since Trump Was Elected. Democrats Still Don’t Get the Internet

    After losing big in 2024, Democrats promised a digital reckoning.

    But 12 months out from that devastating slate of losses, Democratic digital programs are still plagued by the same issues that doomed them last year. Despite millions of dollars in influencer investments and “lessons learned” memos, party insiders say Democrats are still stuck running social media programs that strive for authenticity, but often clash with the party’s unrelenting desire to maintain control.

    “I can’t, for the life of me, figure out why we are still so rigid and moderating everything when we have nothing to lose for the first time,” says one Democratic digital strategist, who requested anonymity to speak candidly. “All of the threats of fascism and right wing takeover. It’s all here.”

    This aversion to risk has made it difficult for Democrats to innovate. In June, the Democratic National Committee launched a new YouTube show called the Daily Blueprint. In a statement, DNC chair Ken Martin said that the show—which runs news headlines and interviews with party officials in an attempt to be MSNBC-lite—“cements our commitment to meet this moment and innovate the ways we get our message across a new media landscape.”

    The show, hosted by DNC deputy communications director Hannah Muldavin, has brought in only around 16,000 views total across more than 100 episodes since its launch.

    The DNC did not respond to a request for comment.

    To some Democratic strategists, the Daily Blueprint is emblematic of how the party continues promoting its least effective digital communicators. Since the government shut down earlier this month, Senate minority leader Chuck Schumer has hosted a string of highly-produced videos that have barely registered outside of the Washington, DC ecosystem. “If you are not willing to take swings or throw shit against the wall in this moment, then when are you going to do that?” says Ravi Mangla, the national press secretary for the Working Families Party, a small progressive party already critical of the Democratic National Committee. (Schumer’s Senate office did not immediately respond to a request for comment.)

    Younger Democratic operatives say the issue stems from a broader culture of gatekeeping not just who is allowed to speak on behalf of the party, but what the content coming out of official channels looks like. The people approving content are “not young people and they’re not posters,” says Organizermemes, a creator and digital strategist. “They can’t explain why things [online] went well. Their ‘theory of mind’ is often fundamentally wrong because they don’t engage with the actual doing of it.”

    Makena Kelly

  • Bluesky Will Test a ‘Dislike’ Option. It Could Help Fix a Huge Problem

    Bluesky is the social media platform that most resembles Twitter before it was taken over by Elon Musk, but it’s its own thing, and it has its own problems. One of those problems is a horrible “Discover” tab, something that might be greatly improved by adding the “dislike” feature currently headed for a beta release.

    In the blog post from Friday announcing the dislike feature, Bluesky wrote about how it has always provided users with “tools that give people more control over how they interact on Bluesky.” This is an almost comedic understatement. The culture of Bluesky is built around intentionally siloing yourself in and only seeing things you like.

    So, for instance, when the White House joined the Democratic-leaning Bluesky, thousands of users immediately availed themselves of the site’s unusually powerful blocking feature, resulting in a greatly diminished network effect, and as a consequence, very low engagement counts for the Trump Administration. The repost count on a White House Bluesky post rarely exceeds 70, and the vast majority of users on the site simply don’t notice the account still exists.

    But blocking early and often is the norm for Bluesky users encountering anything they don’t like, for any reason. Even if you wish someone well, you might block them simply because their posting style irks you slightly.

    In other words, Bluesky is a highly effective and shameless echo chamber. But it’s not clear that blocking someone has any effect whatsoever on whether more content similar to what you just blocked will be served to you later.

    Enter dislike, which will be a “new feedback signal” that’s supposed to “improve personalization in Discover and other feeds,” according to Bluesky’s blog post. Adding a “dislike” to every block has the potential to bolster the have-it-your-way attributes of the app and its culture—particularly where the Discover feed is concerned.

    The Discover feed on Bluesky feels like a cesspool because, while everyone’s is a little different, it’s mostly the top of the Bluesky bell curve. If you use the app at all, there is probably at least some extent to which you enjoy clowning on Elon Musk, outrage about AI, saccharine posts about pets, empowering selfies, clowning on transphobes, random nice photos, and what have you. But the returns rapidly diminish in a feed that firehoses you with these things, and that’s the experience on the Bluesky Discover tab. An avalanche of meh posts.

    While some clearly enjoy the Discover feed—a common complaint among big accounts is that the Discover tab exposes their posts to annoying repliers—the idea that the Discover feed just sucks and should never be used is common.

    “Dislikes help the system understand what kinds of posts you’d prefer to see less of,” Bluesky’s blog post claims. If this turns out to be true, the Discover tab could finally fill a gap: in order to keep things fresh, there needs to be a decent place on Bluesky to encounter new kinds of content other than in reply threads. The chronological Following tab does, after all, get monotonous after a while (it basically inundates you with the posts of users you genuinely like, but who post a ton).

    If dislike is a robust and effective function with the power to zap entire categories of things out of existence for the user, it could herald a whole new Bluesky: one in which the Discover tab is useful and maybe even dangerously addictive. But if dislike doesn’t go for the jugular, that’s fine. There’s always old, reliable block.

    Mike Pearl

  • China says it will work with US to resolve issues related to TikTok

    President Donald Trump’s meeting Thursday with China’s top leader Xi Jinping produced a raft of decisions to help dial back trade tensions, but no agreement on TikTok’s ownership.

    “China will work with the U.S. to properly resolve issues related to TikTok,” China’s Commerce Ministry said after the meeting.

    It gave no details on any progress toward ending uncertainty about the fate of the popular video-sharing platform in the U.S.

    The Trump administration had been signaling that it may have finally reached a deal with Beijing to keep TikTok running in the U.S.

    Treasury Secretary Scott Bessent had said on CBS’s “Face the Nation” on Sunday that the two leaders will “consummate that transaction on Thursday in Korea.”

    Wide bipartisan majorities in Congress passed — and President Joe Biden signed — a law that would ban TikTok in the U.S. if it did not find a new owner to replace China’s ByteDance. The platform went dark briefly on a January deadline but on his first day in office, Trump signed an executive order to keep it running while his administration tries to reach an agreement for the sale of the company.

    Three more executive orders followed, as Trump, without a clear legal basis, extended deadlines for a TikTok deal. The second was in April, when White House officials believed they were nearing a deal to spin off TikTok into a new company with U.S. ownership. That fell apart when China backed out after Trump announced sharply higher tariffs on Chinese products. Deadlines in June and September passed, with Trump saying he would allow TikTok to continue operating in the United States in a way that meets national security concerns.

    Trump’s order was meant to enable an American-led group of investors to buy the app from China’s ByteDance, though the deal also requires China’s approval.

    However, the TikTok deal is “not really a big thing for Xi Jinping,” said Bonnie Glaser, managing director of the German Marshall Fund’s Indo-Pacific program, during a media briefing Tuesday. “(China is) happy to let (Trump) declare that they have finally kept a deal. Whether or not that deal will protect the data of Americans is a big question going forward.”

    “A big question mark for the United States, of course, is whether this is consistent with U.S. law since there was a law passed by Congress,” Glaser said.

    About 43% of U.S. adults under the age of 30 say they regularly get news from TikTok, higher than any other social media app, including YouTube, Facebook and Instagram, according to a Pew Research Center report published in September.

    A recent Pew Research Center survey found that about one-third of Americans said they supported a TikTok ban, down from 50% in March 2023. Roughly one-third said they would oppose a ban, and a similar percentage said they weren’t sure.

    Among those who said they supported banning the social media platform, about 8 in 10 cited concerns over users’ data security being at risk as a major factor in their decision, according to the report.

    The security debate centers on the TikTok recommendation algorithm — which has steered millions of users into an endless stream of video shorts. China has said the algorithm must remain under Chinese control by law. But a U.S. regulation that Congress passed with bipartisan support said any divestment of TikTok would require the platform to cut ties with ByteDance.

    American officials have warned that the algorithm — a complex system of rules and calculations that platforms use to deliver personalized content — is vulnerable to manipulation by Chinese authorities, but they have presented no evidence that China has attempted to do so.

    ___

    Associated Press Writer Fu Ting contributed to this story from Washington.

  • 2 killed in wrong-way crash on northbound I-25 near 6th Avenue

    Two people were killed early Friday morning after a motorist driving in the wrong direction in the northbound lanes of Interstate 25 collided with another vehicle just south of 6th Avenue in Denver.

    Both motorists were pronounced dead at the scene of the crash, which was reported to police just before 3 a.m., the Denver Police Department said on social media. There were no passengers in either vehicle.

  • Bluesky hits 40 million users, introduces ‘dislikes’ beta | TechCrunch

    Social network Bluesky, which on Friday announced a new milestone of 40 million users, will soon start testing “dislikes” as a way to improve personalization on its main Discover feed and others.

    The news was shared alongside a host of other conversation control updates and changes, which include smaller tweaks to replies, improved detection of toxic comments, and other ways to prioritize more relevant conversations to the individual user.

    With the “dislikes” beta rolling out soon, Bluesky will take the new signal into account to improve personalization. As users “dislike” posts, the system will learn what sort of content they want to see less of, informing not just how posts are ranked in feeds but also how replies are ranked.
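
    The mechanics can be pictured with a toy scorer. This is purely an illustration of how a dislike signal might dampen a topic’s ranking over time, not Bluesky’s actual algorithm; the class, the per-topic penalty, and the dampening formula are all assumptions made for the example.

```python
from collections import defaultdict

class ToyFeedRanker:
    """Hypothetical sketch: each dislike raises a per-topic penalty,
    and scores for that topic are dampened rather than zeroed out."""

    def __init__(self):
        self.topic_penalty = defaultdict(float)

    def record_dislike(self, topic):
        # One dislike adds one unit of penalty for that topic.
        self.topic_penalty[topic] += 1.0

    def score(self, base_score, topic):
        # Divide by (1 + penalty): disliked topics fade gradually
        # instead of disappearing outright.
        return base_score / (1.0 + self.topic_penalty[topic])
```

    With two dislikes recorded for a topic, a post that would have scored 9.0 drops to 3.0 under this formula; the content never vanishes entirely, which matches the stated goal of showing “less of” something rather than none of it.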

    The company explained the changes are designed to make Bluesky a place for more “fun, genuine, and respectful exchanges” — an edict that follows a month of unrest on the platform as some users again criticized the platform over its moderation decisions. While Bluesky is designed as a decentralized network where users run their own moderation, some subset of Bluesky users want the platform itself to ban bad actors and controversial figures instead of leaving it up to the users to block them.

    Bluesky, however, wants to focus more on the tools it provides users to control their own experience.

    Today, this includes things like moderation lists that let users quickly block a group of people they don’t want to interact with, content filter controls, muted words, and the ability to subscribe to other moderation service providers. Bluesky also lets users detach quote posts to limit unwanted attention, which has long influenced the toxic culture of “dunking” on X (formerly Twitter).

    In addition to dislikes, the company says it’s testing a mix of ranking updates, design changes, and other feedback tools to improve the conversations on its network.

    This includes a new system that will map out the “social neighborhoods” on Bluesky, meaning the connections between people who often interact and reply to one another. Bluesky says it’s prioritizing replies from people “closer to your neighborhood,” to make conversations you’re shown in your feed more relevant and familiar. The new “dislikes” may have some influence here, as well, Bluesky says.

    This, in particular, is an area where competitor Threads, from Meta, has been challenged at times.

    As newsletter writer Max Read noted last year, Threads tended to land its users in confusing feeds where conversations they weren’t connected to would appear, sometimes mid-story. “It’s often impossible to figure out who is replying to whom and where and why you’re seeing certain posts. They appear from nowhere and lead to nowhere,” he wrote at the time.

    Bluesky’s plan to map out social neighborhoods could address this issue as it scales.

    The company also said its latest model does a better job at detecting replies that are “toxic, spammy, off-topic, or posted in bad faith,” and downranks these in threads, search results, and notifications.

    In another change, tapping the Reply button will now take users to the full thread instead of straight into the compose screen, which may encourage users to read the thread before responding.

    This, says Bluesky, is a simple way to “reduce content collapse and redundant replies” — another criticism that tends to be leveled at Twitter/X.

    Plus, the company is tweaking the reply settings feature to make it more visible to users that they can control who is allowed to respond to their posts.

    Sarah Perez

    Source link

  • Christian Influencers Are Throwing Their Hatch Clocks in the Trash

    Treasure to Trash and Back Again

    According to Erin Merani, Hatch’s vice president of marketing, this series of events was not, in fact, a planned marketing stunt, and Hatch is still figuring out the ramifications of the demon discourse. While Merani is glad the ads and programming “caught people’s attention,” she wants to clarify they were all meant for fun, and she’s heartened by how many users have rushed in to defend Hatch.

    “We saw a lot of the community jumping into the comments and saying, ‘Wait a minute, we missed the plot here!’” she says. “This is a Halloween-themed ad about their adult—not baby—product actually being the thing that will save you from the real evil: your phone. Your phone is actually keeping you up at night.” To be clear, Hatch makes two devices, one specifically for kids and the other for adults. Any pop culture references, like Twilight, are exclusive to Hatch’s adult Restore 2 and 3 devices and can’t be accessed on the Hatch Baby.

    Then, a new trending topic arose about 48 hours later: “If you’re going to throw your Hatch device away, send it to me.”

    Hatch took it and ran with it. “We used the cues of the community and sort of rode that wave with this idea of, ‘Hey, we know this is happening, and we wanted to address this while also pointing at having a little bit of fun with it,’” Merani says.

    Enter Hatch’s new “RePossession Program.” “We saw this overwhelming outreach of people who wanted to be ‘repossessed,’” Merani says, “so we were able to point people to our refurbishment program, to be able to keep those devices out of landfills and send ‘repossessed’ units out.”

    So far, Hatch has fielded more than 10,000 social media inquiries about receiving “repossessed” Hatch devices, and only 10 requests to send devices back to the company.

    Ultimately, if you have a Hatch device and would like to send it back, you can contact customer service to arrange a return. On the other hand, you can now purchase refurbished machines (from the repossessed campaign and otherwise) here. No matter what side of the conversation you find yourself on, we can all agree on one thing: sleep is important, and you should definitely spend less time on your phone.

    Julia Forbes

    Source link

  • Why This AI Company Just Stopped Minors From Using Its Chatbots

    “We do not take this step of removing open-ended Character chat lightly—but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology,” the company said.

    In addition to removing access for users under 18, the company announced that it is working on age verification measures, and that it is establishing a nonprofit called the AI Safety Lab that will be focused on “innovating safety alignment for next-generation AI entertainment features.” Previous safety measures taken by the company include a notification sending users to the National Suicide Prevention Lifeline when self-harm and suicide are mentioned during chatbot conversations.

    The decision comes after lawsuits against Character.AI filed by families and parents alleging that the company was liable for the deaths of their children. In August, Ken Paxton, the Texas attorney general, announced an investigation into the company and Meta AI Studio for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.”

    Ben Butler

    Source link