Are Bluesky Social’s Good Vibes Doomed?
Bluesky, the hot new invite-only Twitter look-alike, was supposed to provide a much-needed reprieve from an otherwise toxic social media ecosystem. But by the time I joined Bluesky, in early May, I wondered if the party was over. For the uninitiated, Bluesky began in 2019 as a decentralized social media experiment at Twitter and spun off into its own company last year, with former Twitter CEO Jack Dorsey as a board member. (By “decentralized,” the company means it’s creating an open-source protocol for building social apps; Bluesky Social is one of them.) In recent months, particularly since Elon Musk’s Twitter takeover, the site has become a playground for those in media, politics, and tech deemed capable of ushering a new platform to the masses. Outlets like Wired and Rolling Stone highlighted the app as a pleasant possible alternative to Twitter. It has been a playful, punny, and free environment that looks a whole lot like Twitter, but where posts are “skeets” and range from quaint pictures of blue skies to nudes.
But one exchange between Bluesky CEO Jay Graber and Bluesky users has suggested that the app has yet to actually grapple with the difficult questions around content moderation that have roiled other social media platforms. It began when multiple users called for the removal of a user with the handle @commie.cafe, who had allegedly deadnamed trans women, harassed and doxxed them, and engaged in other harmful behavior. In response to users’ concerns, Graber wrote, “We’re watching, and will take action based on behavior. Blocks prevent interaction.” That reply prompted many to question why the company wouldn’t take its users’ concerns seriously and act proactively against users accused of engaging in harmful behavior on other platforms.
“How many people have to directly inform you of the presence of a dangerous, toxic person before you are willing to stop watching and take action?” one user wrote in response to Graber.
“A lot of folks are scared/worried here, especially after years of Twitter not really dealing with this stuff well. Don’t be Twitter, be better,” another user replied to Graber. The user accused of transphobic acts appears to no longer be on the platform. Bluesky did not respond to a request for comment on the incident, nor did it answer a question regarding whether the company took any actions against the user.
The exchange not only offered a glimpse into Bluesky’s content-moderation approach; it also called into question whether the company would take any steps to preserve its good vibes. Its nine-person team is building a platform with a wait list of 1.9 million email addresses, belonging to those seeking to join the more than 72,000 users on the invite-only beta version of the app. But as users flock to the app for its potential as a replacement for Twitter, some early users wonder whether the platform can continue being a welcome relief from harassment, hate speech, and graphic content. Or will it ultimately make mistakes similar to those of its predecessors?
The company remains mum on its plans for dealing with these issues going forward, aside from posting some details on its Frequently Asked Questions page. I reached out to the company’s spokesperson with a detailed list of questions regarding whether the company would prioritize users of marginalized backgrounds, the degree to which it would enforce its content-moderation policies, and what investments it would make in content moderation. But the spokesperson said the company is not granting interviews, because everyone is “heads down on work.”
On its site, Bluesky notes that it plans to use automated filtering, manual administrator actions, and community labeling to moderate the platform. On top of its basic filtering for objectionable content, the company wants to let users and developers layer additional filters and other moderation controls. In another post, Graber notes that developers running their own servers will be able to set their own content-moderation policies at the server and community levels, “but I need it to be calm enough for long enough that we can build out the rest of the system to give people more direct controls.” The company declined to say whether it plans to hire more human moderators and implement additional measures to protect users who are part of marginalized communities, especially as the user base grows.
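To make that layered model concrete, here is a minimal sketch of how stacked moderation filters might compose, assuming a hypothetical setup in which a platform baseline runs first and user- or developer-chosen filters sit on top. Every name here (`ModerationFilter`, `baseFilter`, the label strings) is illustrative, not Bluesky’s actual API.

```typescript
// Hypothetical sketch of stacked moderation: a platform baseline filter
// runs for everyone, and user- or developer-chosen filters layer on top.
// Names are illustrative, not Bluesky's actual API.

type Verdict = "show" | "warn" | "hide";

interface Post {
  author: string;
  text: string;
  labels: string[]; // community-applied labels, e.g. "graphic", "spam"
}

type ModerationFilter = (post: Post) => Verdict;

// Platform baseline: hide content the service filters for everyone.
const baseFilter: ModerationFilter = (post) =>
  post.labels.includes("illegal") ? "hide" : "show";

// A filter an individual user has opted into, layered on top.
const userFilter: ModerationFilter = (post) =>
  post.labels.includes("graphic") ? "warn" : "show";

// Compose the stack: the most restrictive verdict wins.
const severity: Record<Verdict, number> = { show: 0, warn: 1, hide: 2 };

function moderate(post: Post, filters: ModerationFilter[]): Verdict {
  let verdict: Verdict = "show";
  for (const filter of filters) {
    const v = filter(post);
    if (severity[v] > severity[verdict]) verdict = v;
  }
  return verdict;
}

// Example: the baseline allows the post, but the user's own filter warns.
const post: Post = { author: "alice.example", text: "...", labels: ["graphic"] };
console.log(moderate(post, [baseFilter, userFilter])); // "warn"
```

The point of the composition is that a stricter layer can only tighten, never loosen, what the baseline decides, which is roughly what Bluesky describes when it talks about filters added “on top.”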
Twitter, Instagram, Facebook, and other prominent social media platforms made the mistake of underestimating the extent to which dangerous online rhetoric could lead to offline harm, said Yoel Roth, Twitter’s former head of trust and safety and a tech policy fellow at the University of California, Berkeley. And while it’s not feasible for a platform to take a totally localized approach to content moderation as it expands abroad, Roth said he hopes the next generation of social platforms will take seriously what has and hasn’t worked for their veteran predecessors. “One of the promises of federated platforms like Bluesky is that it can give people more choices about what goes and what doesn’t,” Roth said, referencing Bluesky’s idea of giving creators independence from the platform itself. “But you still have to draw that line somewhere, of what doesn’t go anywhere, and that’s the battlefield of content moderation.”
As for AI handling some content-moderation functions, Miro Dittrich, a senior researcher at the Center for Monitoring, Analysis and Strategy, said the technology cannot be trusted to work at scale on its own, as has proved true on other social platforms. Roth agreed: if Bluesky does use AI as part of its content moderation, he said, it should test those tools before building its whole moderation strategy around them. Enabling developers to create their own interfaces for setting content boundaries could have unintended consequences too. If, for example, a user doxxes someone or posts nonconsensual sexual imagery, those posts could be de-indexed so that Bluesky users can’t view them, but the images could still be available on someone’s personal server and end up elsewhere on the internet; it’s not clear that de-indexing alone is a sufficient remedy, said Sol Messing, a research associate professor at New York University and former discovery data science lead at Twitter.
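Messing’s concern follows from the architecture itself. Here is a toy model, with invented names rather than the AT Protocol’s real interfaces, of why de-indexing deletes nothing in a federated network: the index stops surfacing a post, but the record still lives on whichever server published it.

```typescript
// Toy model of federation (invented names, not the AT Protocol's real
// interfaces): posts live on independent servers; an app-level index only
// references them, so de-indexing hides a post without deleting it.

class PersonalServer {
  private records = new Map<string, string>();
  publish(uri: string, text: string): void {
    this.records.set(uri, text);
  }
  fetch(uri: string): string | undefined {
    return this.records.get(uri); // the content is still served from here
  }
}

class AppIndex {
  private listed = new Set<string>();
  index(uri: string): void {
    this.listed.add(uri);
  }
  deindex(uri: string): void {
    this.listed.delete(uri); // hidden from app users; nothing is deleted
  }
  isVisible(uri: string): boolean {
    return this.listed.has(uri);
  }
}

const server = new PersonalServer();
const appView = new AppIndex();

server.publish("post/1", "doxxing post");
appView.index("post/1");

// Moderation de-indexes the post...
appView.deindex("post/1");
console.log(appView.isVisible("post/1")); // false: invisible in the app
console.log(server.fetch("post/1"));      // "doxxing post": still hosted
```

In this sketch the app can stop showing the post, but anyone who can reach the originating server can still retrieve it, which is the gap Messing is pointing at.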
Tatiana Walk-Morris