ReportWire

Tag: trust and safety

  • Safety by design | TechCrunch


    Welcome to the TechCrunch Exchange, a weekly startups-and-markets newsletter. It’s inspired by the daily TechCrunch+ column where it gets its name. Want it in your inbox every Saturday? Sign up here.

    Tech’s ability to reinvent the wheel has its downsides: It can mean ignoring blatant truths that others have already learned. But the good news is that new founders are sometimes figuring it out for themselves faster than their predecessors. — Anna

    AI, trust and safety

    This year is an Olympic year, a leap year . . . and also the election year. But before you accuse me of U.S. defaultism, I’m not only thinking of the Biden vs. Trump sequel: More than 60 countries are holding national elections, not to mention the EU Parliament’s.

    Which way each of these votes swings could have an impact on tech companies; different parties tend to have different takes on AI regulation, for instance. But before elections even take place, tech will also have a role to play in safeguarding their integrity.

    Election integrity likely wasn’t on Mark Zuckerberg’s mind when he created Facebook, and perhaps not even when he bought WhatsApp. But 20 and 10 years later, respectively, trust and safety is now a responsibility that Meta and other tech giants can’t escape, whether they like it or not. This means working toward preventing disinformation, fraud, hate speech, CSAM (child sexual abuse material), self-harm and more.

    However, AI will likely make the task harder, and not just because of deepfakes or because it empowers larger numbers of bad actors. As Lotan Levkowitz, a general partner at Grove Ventures, puts it:

    All these trust and safety platforms have this hash-sharing database, so I can upload there what is a bad thing, share with all my communities, and everybody is going to stop it together; but today, I can train the model to try to avoid it. So even the more classic trust and safety work, because of Gen AI, is getting tougher and tougher because the algorithm can help bypass all these things.
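    To make the hash-sharing mechanism Levkowitz describes concrete, here is a minimal illustrative sketch. Note the assumptions: the class and function names are invented for this example, and real systems (PhotoDNA, Meta's PDQ) use perceptual hashes that tolerate small edits, whereas this sketch uses an exact cryptographic hash, which makes the brittleness he points to easy to see: any tiny modification to the content produces a completely different hash and slips past the shared blocklist.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute an exact-match fingerprint of a piece of content.
    (Real trust-and-safety systems use perceptual hashes instead.)"""
    return hashlib.sha256(content).hexdigest()

class HashSharingDB:
    """A shared database of known-bad content hashes, as used across
    trust and safety platforms: one member reports harmful content,
    and every participant can then block exact copies of it."""

    def __init__(self) -> None:
        self.known_bad: set[str] = set()

    def report(self, content: bytes) -> None:
        """Add this content's fingerprint to the shared blocklist."""
        self.known_bad.add(fingerprint(content))

    def is_known_bad(self, content: bytes) -> bool:
        """Check an upload against the shared blocklist."""
        return fingerprint(content) in self.known_bad

db = HashSharingDB()
db.report(b"harmful image bytes")

print(db.is_known_bad(b"harmful image bytes"))   # exact copy: caught
print(db.is_known_bad(b"harmful image bytes!"))  # one-byte change: evades the match
```

    The second lookup is the point of the quote: a model trained to perturb content just enough to change its fingerprint defeats exact-hash sharing entirely, which is why generative AI makes even this classic defense harder to rely on.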

    From afterthought to the forefront

    Although online forums had already learned a thing or two about content moderation, there was no social network playbook for Facebook to follow when it was born, so it is somewhat understandable that it would need a while to rise to the task. But it is disheartening to learn from internal Meta documents that as far back as 2017, there was still internal reluctance to adopt measures that could better protect children.

    Zuckerberg was one of the five social media CEOs who recently appeared at a U.S. Senate hearing on children’s online safety. Testifying was far from a first for Meta, but Discord’s inclusion is also worth noting: while the company has branched out beyond its gaming roots, its presence is a reminder that trust and safety threats can occur in many online places. A social gaming app, for instance, could also put its users at risk of phishing or grooming.

    Will newer companies own up faster than the FAANGs? That’s not guaranteed: Founders often operate from first principles, which is good and bad; the content moderation learning curve is real. But OpenAI is much younger than Meta, so it is encouraging to hear that it is forming a new team to study child safety — even if it may be a result of the scrutiny it’s subjected to.

    Some startups, however, are not waiting for signs of trouble to take action. ActiveFence, a Grove Ventures portfolio company that provides AI-enabled trust and safety solutions, is seeing more inbound requests, its CEO, Noam Schwartz, told me.

    “I’ve seen a lot of folks reaching out to our team from companies that were just founded or even pre-launched. They’re thinking about the safety of their products during the design phase [and] adopting a concept called safety by design. They are baking in safety measures inside their products, the same way that today you’re thinking about security and privacy when you’re building your features.”

    ActiveFence is not the only startup in this space, which Wired described as “trust and safety as a service.” But it is one of the largest, especially since it acquired Spectrum Labs in September, so it’s good to hear that its clients include not only big names afraid of PR crises and political scrutiny, but also smaller teams that are just getting started. Tech, too, has an opportunity to learn from past mistakes.




    Anna Heim


  • Twitter cuts workers addressing hate speech and trust and safety as Elon Musk’s chaotic revamp continues


    Twitter Inc., under new owner Elon Musk, has made deeper cuts to its already radically diminished trust and safety team, which handles global content moderation, as well as to the unit dealing with hate speech and harassment, according to people familiar with the matter.

    At least a dozen more cuts on Friday night affected workers in the company’s Dublin and Singapore offices, according to the people, who asked not to be identified discussing non-public changes. They included Nur Azhar Bin Ayob, the head of site integrity for Twitter’s Asia-Pacific region, a relatively recent hire; and Analuisa Dominguez, Twitter’s senior director of revenue policy.

    Workers on teams handling the social network’s misinformation policy, global appeals and state media on the platform were also eliminated. 

    Ella Irwin, Twitter’s head of trust and safety, confirmed that several members of the teams were cut but denied that the cuts targeted some of the areas mentioned by Bloomberg.

    “It made more sense to consolidate teams under one leader (instead of two) for example,” Irwin said in an emailed response to a request for comment. 

    She said Twitter did eliminate roles in areas of the company that didn’t get enough “volume” to justify continued support. But she said that Twitter had increased staffing in its appeals department, and that it would continue to have a head of revenue policy and a head for the platform’s Asia-Pacific region for trust and safety.

    Musk bought Twitter for $44 billion in October, partly financing the deal with almost $13 billion of debt that entails interest payments of around $1.5 billion a year. He has since embarked on a frantic mission to revamp the social media platform, which he has said is at risk of going bankrupt and was losing $4 million a day as of early November.

    Speaking on a Twitter Spaces event last month, the mercurial entrepreneur likened the company to a “plane that is headed towards the ground at high speed with the engines on fire and the controls don’t work.”

    Since taking over the company, Musk has overseen firings or departures of roughly 5,000 of Twitter’s 7,500 employees and instituted a “hardcore” work environment for those remaining.

    Twitter faces multiple suits over unpaid bills, including for private chartered plane flights, software services and rent at one of its San Francisco offices.



    Kurt Wagner, Bloomberg
