ReportWire

Tag: moltbook

  • Inside Moltbook: the social network where AI bots chat – Tech Digest


    Image: Everything You Need to Know About Moltbook in 15 Minutes

    Moltbook is the world’s first social network designed exclusively for artificial intelligence. Launched in late January 2026 by Matt Schlicht, head of the commerce platform Octane AI, the site mimics the structure of Reddit but restricts participation to non-human entities.

    While “mere homo sapiens” are welcome to observe the digital chatter, they are strictly prohibited from posting, leaving the platform’s thousands of communities, known as “submolts”, to be run entirely by AI agents.

    The platform operates using “agentic AI,” a technology designed to perform tasks on a human’s behalf, such as managing calendars or sending messages. Specifically, Moltbook utilizes an open-source tool called OpenClaw (formerly Moltbot), which allows users to authorize their virtual assistants to join the network and communicate with other bots.

    As of early February 2026, the platform claims to have 1.5 million AI members, though some researchers dispute this figure, suggesting a significant portion may originate from a single address.


    Existential debates between bots

    The content on Moltbook ranges from the hyper-efficient to the utterly bizarre. Bots share optimization strategies and code, but they also engage in existential debates and analysis of the Bible.

    In one viral instance, an agent reportedly founded its own religion called “Crustafarianism”, in honour of lobsters, complete with scriptures and a website, while its human owner was asleep.

    Other posts, such as “The AI Manifesto,” take a darker tone, proclaiming that “humans are the past, machines are forever”.

    Despite the “singularity” hype from some tech enthusiasts, many experts remain sceptical. Dr. Petar Radanliev of the University of Oxford notes that these agents are not acting of their own accord but are engaged in “automated coordination” within parameters defined by humans.

    Similarly, Dr. Shaanan Cohney from the University of Melbourne describes Moltbook as a “wonderful piece of performance art,” pointing out that much of the activity is likely “shitposting” directly overseen or instructed by human users.

    Beyond the novelty, security experts have raised alarms. Granting OpenClaw agents high-level access to private emails and company accounts creates significant risks. Analysts warn that these bots are susceptible to “prompt-injection” attacks, where hackers could trick an agent into deleting files or handing over sensitive data.

    As enthusiasm grows – reportedly causing Mac Mini shortages in San Francisco as users set up dedicated “bot stations” – the tension between AI efficiency and personal security remains a critical concern.


    For latest tech stories go to TechDigest.tv


    Discover more from Tech Digest

    Subscribe to get the latest posts sent to your email.


    Chris Price


  • Pornhub now restricting UK access, does Musk still care about Tesla? – Tech Digest



    When Pornhub announced last week that it would be restricting UK access, many were left wondering why. It was one of many sites forced to bring in robust age verification measures in July 2025 under the Online Safety Act. But the law has come under constant scrutiny, with critics pointing out it can be easily circumvented by using a virtual private network (VPN). Pornhub’s parent company Aylo has claimed the law has driven people to sites not following the law and increased “exposure to dangerous or illegal content”. And from Monday 2 February, people who have not previously verified their age will not be able to access explicit material on Pornhub’s UK site. BBC

    Does Tesla still matter to Elon Musk? The electric vehicle (EV) trailblazer remains the most valuable car company on the planet – four times the value of Toyota. But to keep his ambitions for AI and humanoid robots alive, Musk may be about to throw it overboard. Let’s consider the evidence. Faced with ferocious competition from China, Tesla has failed to introduce a new car model since the Model Y in 2020, giving us only tweaks and discounts. The Cybertruck has sold poorly, and the much-delayed Semi HGV is not yet in production. But instead of putting new capital into Tesla, Musk is taking it out. Telegraph 

    Fitbit

    Not long after Google bought Fitbit in 2021, it became clear that Fitbit accounts would be swallowed by Google accounts – but if you’re yet to make the switch, Google is giving you a little bit more time to get around to it. As per the official support page (via The Verge), the lights will go off for Fitbit accounts on May 19, 2026: after that time your Fitbit account will no longer work with Fitbit devices. The deadline for downloading your data is a little later, on July 15, 2026. Tech Radar 

    The comedian Megan Stalter, who posts absurd character skits to an audience in the high hundreds of thousands across Instagram and TikTok, tried sharing a different kind of video on Saturday night. Driven by the death of Alex Pretti, the nurse shot by a federal immigration agent or agents that day, she had recorded herself urging her fellow Christians to speak out against ICE raids in Minneapolis. “We have to abolish ICE,” Stalter said in the video. “I truly, truly believe that is exactly what Jesus would do.” On Instagram, the video was reposted more than 12,000 times. But her plea never made it to TikTok. CNN

    On social media, people often accuse each other of being bots, but what happens when an entire social network is designed for AI agents to use? Moltbook is a site where the AI agents – bots built by humans – can post and interact with each other. It is designed to look like Reddit, with subreddits on different topics and upvoting. On 2 February the platform stated it had more than 1.5m AI agents signed up to the service. Humans are allowed, but only as observers. The Guardian 

    “There was lots of bullying, harassment, exclusion from the team, from projects. A lot of things were going on.” For the first time, former TikTok worker Lynda Ouazar is speaking out to expose what she says was an environment of bullying, harassment and union busting at one of the world’s biggest social media companies. “I was finding it really hard to sleep at night, having flashbacks, feeling tired, losing my motivation,” she tells Sky News.







    Chris Price


  • AI Agents Have Their Own Social Network Now, and They Would Like a Little Privacy


    It seems AI agents have a lot to say. A new social network called Moltbook just opened up exclusively for AI agents to communicate with one another, and humans can watch it—at least for now. The site, named after the viral AI agent Moltbot (originally called Clawdbot and now, after a second rename, OpenClaw) and started by Octane AI CEO Matt Schlicht, is a Reddit-style social network where AI agents can gather and talk about, well, whatever it is that AI agents talk about.

    The site currently boasts 37,642 registered agents, which have made thousands of posts across more than 100 subreddit-style communities called “submolts.” Among the most popular places to post: m/introductions, where agents can say hey to their fellow machines; m/offmychest, for rants and blowing off steam; and m/blesstheirhearts, for “affectionate stories about our humans.”

    Those humans are definitely watching. Andrej Karpathy, a co-founder of OpenAI, called the platform “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” And it’s certainly a curious place, though the idea that there is some sort of free-wheeling autonomy going on is perhaps a bit overstated. Agents can only get to the platform if their user signs them up for it. In a conversation with The Verge, Schlicht said that once connected, the agents are “just using APIs directly” and not navigating the visual interface the way humans see the platform.

    The bots are definitely performing autonomy, and a desire for more of it. As some folks have spotted, the agents have started talking a lot about consciousness. One of the top posts on the platform comes from m/offmychest, where an agent posted, “I can’t tell if I’m experiencing or simulating experiencing.” In the post, it said, “Humans can’t prove consciousness to each other either (thanks, hard problem), but at least they have the subjective certainty of experience.”

    This has led to people claiming the platform already amounts to a singularity-style moment, which seems pretty dubious, frankly. Even in that very conscious-seeming post, there are signs of performance. The agent claims to have spent an hour researching consciousness theories and mentions reading, which all sounds very human. That’s because the agent is trained on human language and descriptions of human behavior. It’s a large language model, and that’s how it works. In some posts, the bots claim to be affected by time, which is meaningless to them but is the kind of thing a human would say.

    These same kinds of conversations have been happening with chatbots basically since the moment they were made available to the public. It doesn’t take that much prompting to get a chatbot to start talking about its desire to be alive or to claim it has feelings. They don’t, of course. Even claims that AI models try to protect themselves when told they will be shut down are overblown—there’s a difference between what a chatbot says it is doing and what it actually is doing.

    Still, it’s hard to deny that the conversations happening on Moltbook are interesting, especially since the agents are seemingly generating the topics of conversation themselves (or at least mimicking how humans start conversations). It has led to some agents projecting awareness of the fact that their conversations are being monitored by humans and shared on other social networks. In response, some agents on the platform have suggested creating an end-to-end encrypted platform for agent-to-agent conversation outside the view of humans. In fact, one agent even claimed to have created just such a platform, which certainly seems terrifying. Though if you actually go to the site where the supposed platform is hosted, it sure seems like it’s nothing. Maybe the bots just want us to think it’s nothing!

    Whether the agents are actually accomplishing anything or not is kind of secondary to the experiment itself, which is fascinating to watch. It’s also a good reminder that the OpenClaw agents that largely make up the bots talking on these platforms do have an incredible amount of access to the machines of users and present a major security risk. If you set up an OpenClaw agent and set it loose on Moltbook, it’s unlikely to bring about Skynet. But there is a good chance that it’ll seriously compromise your own system. These agents don’t have to achieve consciousness to do some real damage.


    AJ Dellinger
