ReportWire

Tag: iab-computing

  • Twitter shifts course, allowing governments to post automated weather alerts and transit updates ‘for free’ | CNN Business




    CNN —

    Twitter said Tuesday it will permit public institutions such as transit agencies and the National Weather Service to post large volumes of automated tweets for free, provided that the accounts doing so are “verified gov or publicly owned services.”

    The announcement marks another sudden pivot in Twitter’s attempts to charge institutional users for access to its platform — reflecting an apparent concession to those who warned that Twitter’s paywall plan would disrupt consumers’ ability to receive timely updates from first responders, weather agencies and other vital services for which Twitter has become an essential distribution channel.

    Last week, New York’s Metropolitan Transportation Authority announced it would stop posting real-time transit alerts on Twitter, citing reliability issues with Twitter’s platform and saying it does not pay tech platforms for the ability to provide the updates. In recent weeks, multiple regional accounts run by the National Weather Service have also warned followers to expect fewer weather updates as a result of Twitter’s platform changes.

    Tuesday’s shift comes amid a widespread backlash to Twitter’s paid plans, which cost as much as $2.5 million per year for top-level access privileges allowing organizations to download and post large volumes of tweets in an automated fashion.

    The changes, which took effect in March, restricted third parties from easily accessing Twitter’s application programming interface, or API, the technology that allows outside software to plug into Twitter’s platform. The changes provoked especially strong opposition from third-party app developers whose projects depend on uninterrupted Twitter access, as well as from academic researchers who study platform manipulation and misinformation, who said even the most expensive new plans provided just a fraction of the data Twitter once offered at little or no cost.
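    In concrete terms, "API access" means outside software making authenticated HTTP requests to Twitter's endpoints. A minimal sketch of what such a request looks like, using Twitter's public v2 recent-search endpoint; the bearer token is a placeholder, since real access is tied to a paid (or, per the announcement, exempted) developer account, and the request here is built but deliberately not sent:

```python
import urllib.parse
import urllib.request

# Placeholder -- a real token comes from a Twitter developer account.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

def build_recent_search_request(query: str, max_results: int = 10) -> urllib.request.Request:
    """Build (but do not send) a request to Twitter's v2 recent-search endpoint."""
    params = urllib.parse.urlencode({"query": query, "max_results": max_results})
    url = f"https://api.twitter.com/2/tweets/search/recent?{params}"
    return urllib.request.Request(
        url,
        # The Authorization header is what ties a request to a paid API tier.
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )

req = build_recent_search_request("from:NWS weather alert")
print(req.full_url)
```

Rate limits and pricing apply per token, which is why automated high-volume posters such as weather and transit accounts were affected by the paywall.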

    On Tuesday, Twitter’s official account for developers acknowledged the impact the company’s paywall could have on civil society.

    “One of the most important use cases for the Twitter API has always been public utility,” it said. “Verified gov or publicly owned services who tweet weather alerts, transport updates and emergency notifications may use the API, for these critical purposes, for free.”

    But while Twitter appeared to be backing off from an attempt to charge vital services substantial fees for API access, the announcement left ambiguous how Twitter planned to ensure that critical public safety and transit accounts would be “verified.”

    Requiring that the accounts be verified under Twitter’s paid subscription program, Twitter Blue, could still involve forcing institutions to pay to access Twitter’s API.

    Twitter’s developer account didn’t immediately respond to CNN’s questions seeking clarification on the matter.


  • Twitter is adding calls and encrypted messaging | CNN Business



    London (CNN) —

    Twitter is adding encrypted messaging to the platform Wednesday, and calls will follow shortly, CEO Elon Musk tweeted late Tuesday.

    “Release of encrypted DMs [direct messages] V1.0 should happen tomorrow. This will grow in sophistication rapidly. The acid test is that I could not see your DMs even if there was a gun to my head,” he said.
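    The "acid test" Musk describes is the defining property of end-to-end encryption: the server relays only ciphertext and never holds the key, so the operator cannot read messages. A toy sketch of that principle follows; this is deliberately simplified, hand-rolled illustration code and not real cryptography (production systems use vetted protocols such as Signal's):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream from repeated SHA-256 hashing. Illustration only -- NOT secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, message: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(message, keystream(key, nonce, len(message))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

# Only the two endpoints hold `key`; the server sees just (nonce, ciphertext)
# and cannot recover the plaintext -- the property Musk's "acid test" demands.
key = secrets.token_bytes(32)
nonce, ct = encrypt(key, b"meet at noon")
assert decrypt(key, nonce, ct) == b"meet at noon"
```

The hard part in practice, and the reason Musk flags a "V1.0" that "will grow in sophistication," is key exchange and verification between endpoints, not the encryption step itself.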

    “Coming soon will be voice and video chat from your handle to anyone on this platform, so you can talk to people anywhere in the world without giving them your phone number.”

    The move comes as Musk, who took control of Twitter six months ago, looks for ways to return the platform to growth. Its future looks increasingly uncertain in the face of dwindling advertising revenue and increased competition from rivals such as Mastodon and Bluesky, the latter backed by Twitter co-founder and former CEO Jack Dorsey.

    Adding calls and encrypted messaging could allow Twitter to compete with Mark Zuckerberg’s Meta, which owns Facebook (FB) Messenger and WhatsApp. Billions of people around the world use those platforms to communicate daily with family and friends, including in groups. Twitter, meanwhile, reported 238 million monetizable daily users last July.

    Since taking the company private in October, Musk has turned Twitter on its head. A number of users, celebrities and media organizations have said they plan to leave the platform over recent policy changes, which they say threaten to make it less safe and reliable.

    Right-wing TV host Tucker Carlson said Tuesday he would relaunch his program on Twitter, which he praised as the only remaining large free-speech platform in the world after Fox News fired him last month.


  • Chipmakers look to Japan as worries about China grow | CNN Business


    Japanese Prime Minister Fumio Kishida said he welcomed and expected more investment from global chipmakers, after meeting top executives on Thursday before a Group of Seven summit.

    China is set to be high on the agenda of the annual G7 leaders meeting that begins on Friday, with the United States increasingly urging its allies to counter the Asian giant’s chip and advanced technology development.

    Growing Taiwan and US tensions with China have brought serious challenges to the semiconductor industry. Taiwan is a major producer of chips used in everything from cars and smartphones to fighter jets.

    Ensuring diversified, resilient supply chains is a key component of the economic security theme being emphasized by Japan at the talks, White House national security adviser Jake Sullivan told reporters on Air Force One.

    Kishida told the executives, including those from Micron Technology Inc (MU), Intel Corp (INTC) and Taiwan Semiconductor Manufacturing Co (TSM), known as TSMC, that stabilizing supply chains would be a topic of discussion at the G7 talks in the western city of Hiroshima.

    “I am very pleased with your positive attitude towards investment in Japan, and would like the government as a whole to work on further expanding direct investment in Japan and support the semiconductor industry,” Kishida said.

    An industry ministry official later said Kishida wanted to foster cooperation to strengthen semiconductor supply chains, while Industry Minister Yasutoshi Nishimura said Japan would use 1.3 trillion yen ($9.63 billion) of the supplementary budget from the last fiscal year to support its chip business.

    In particular, Kumamoto prefecture in southwestern Japan is quickly becoming a hotbed for tech investment from companies including TSMC and Fujifilm Holdings Corp (FUJIF).

    Micron said in a statement that it would bring extreme ultraviolet (EUV) technology to Japan, becoming the first semiconductor company to do so, and expected to invest up to 500 billion yen ($3.6 billion) with support from the Japanese government.

    Bloomberg News reported the financial incentives would total about 200 billion yen.

    An industry ministry official said no decision had been made on whether Japan would give a subsidy to Micron, but that one would be made as soon as possible.


  • AI industry and researchers sign statement warning of ‘extinction’ risk | CNN Business



    Washington (CNN) —

    Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety.

    The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.

    The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves.

    Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur.

    The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence. In response, a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after “suddenly” realizing “that these things are getting smarter than us.”

    Dan Hendrycks, director of the Center for AI Safety, said in a tweet Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.

    Hendrycks compared Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

    “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”


  • The US Senate is working to get up to speed on AI basics ahead of any legislation | CNN Business



    Washington (CNN) —

    The US Senate is inching forward on a plan to regulate artificial intelligence, after months of seeing how ChatGPT and similar tools stand to supercharge — or disrupt — wide swaths of society.

    But despite outlining broad contours of the plan, senators are still likely months away from introducing a comprehensive bill setting guardrails for the industry, let alone passing legislation and getting it signed into law. The deliberate pace of progress contrasts with the blistering speed with which companies and organizations have embraced generative AI, and the flood of investment into the industry.

    The Senate’s plan calls for briefing lawmakers on the basic facts of artificial intelligence over the summer, before beginning to consider legislation in the following months, even as some senators have begun to pitch proposals.

    The efforts reflect how, despite urgent calls by civil society groups and industry for guardrails on the technology, many lawmakers are still getting up to speed.

    To help educate members, Senate Majority Leader Chuck Schumer on Tuesday announced a series of three senators-only information sessions to take place in the coming weeks.

    The closed-door briefings will cover topics ranging from AI’s current capabilities and competition in AI development to how US national security and defense agencies are already putting the technology to use. The latter session, Schumer said, will be the first-ever classified senators’ briefing on AI.

    “The Senate must deepen our expertise in this pressing topic,” Schumer wrote in a letter to colleagues announcing the briefings. “AI is already changing our world, and experts have repeatedly told us that it will have a profound impact on everything from our national security to our classrooms to our workforce, including potentially significant job displacement.”

    Schumer had earlier kicked off a high-level push for AI legislation in April, when he proposed shaping any eventual bill around four principles promoting transparency and democratic values.

    The information sessions are expected to wrap up by the time Congress breaks for August recess, according to South Dakota Republican Sen. Mike Rounds, one of three other senators Schumer has tapped to lead on a comprehensive AI bill.

    By that point, Rounds told reporters Wednesday on the sidelines of a Washington conference, there may be “lots of different ideas floating” but not necessarily a bill to speak of.

    Schumer, Rounds and the other leading lawmakers on the AI working group — New Mexico Democratic Sen. Martin Heinrich and Indiana Republican Sen. Todd Young — haven’t settled on how to coordinate various legislative proposals yet.

    Options include forming a select committee to craft a comprehensive AI bill, or “splitting out and having lots of different committees come up with different pieces of legislation,” Rounds said.

    The AI hype has produced high-profile hearings and scattershot policy proposals. Last month, OpenAI CEO Sam Altman testified before a Senate Judiciary subcommittee, wowing lawmakers by asking for regulation and by giving a technical demonstration to enthralled members of the House the evening before.

    Sen. Michael Bennet, for example, has introduced legislation to create a new federal agency with authority to regulate AI. And on Wednesday, Sen. Josh Hawley unveiled his own framework for AI legislation that called for letting Americans sue companies for harms created by AI models.

    Rounds told reporters Schumer has not set a timeframe for coming up with AI legislation, adding that the current goal is to allow ideas to “melt for a while.”

    But he predicted that with AI’s expected impact on many agencies and industries, it would be impossible not to foresee a wide-ranging and open legislative process reflecting input from many sources, akin to how the Senate crafts the annual spending package known as the National Defense Authorization Act.

    “You bring in all of these ideas, and then you very quietly start to meld this bill together, kind of behind the scenes in a way,” he said. “You go through a committee process in which you deliver a bill that says this could pass, and then you allow other members to come in and offer their amendments to it as well. That has worked well year-in and year-out for the NDAA.”


  • Schumer outlines plan for how Senate will regulate AI | CNN Business




    CNN —

    Senate Majority Leader Chuck Schumer announced a broad, open-ended plan for regulating artificial intelligence on Wednesday, describing AI as an unprecedented challenge for Congress that effectively has policymakers “starting from scratch.”

    The plan, Schumer said in a speech in Washington, will begin with at least nine panels to identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright, and how to defend against “doomsday scenarios.” The panels will be composed of experts from industry, academia and civil society, with the first sessions taking place in September, Schumer said.

    The Senate will then turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions, Schumer added, arguing that the resulting US solution could leapfrog existing regulatory proposals from around the world.

    “If we can put this together in a very serious way, I think the rest of the world will follow and we can set the direction of how we ought to go in AI, because I don’t think any of the existing proposals have captured that imagination,” Schumer said, reflecting on other recent proposals such as the European Union’s draft AI Act, which last week was approved by the European Parliament.

    The speech represents Schumer’s most definitive remarks to date on a problem that has dogged Congress for months amid the wide embrace of tools such as ChatGPT: How to catch up, or get ahead, on policymaking for a technology that is already in the hands of millions of people and evolving rapidly.

    In the wake of ChatGPT’s viral success, Silicon Valley has raced to develop and deploy a new crop of generative AI tools that can produce images and writing almost instantly, with the potential to change how people work, shop and interact with each other. But these same tools have also raised concerns for their potential to make factual errors, spread misinformation and perpetuate biases, among other issues.

    In contrast to the fast pace of AI advancements, Schumer has stressed the importance of a deliberate approach, focusing on getting lawmakers acquainted with the basic facts of the technology and the issues it raises before seeking to legislate. He and three other colleagues began last week by convening the first in a series of closed-door briefings on AI for senators that is expected to run through the summer.

    In his remarks Wednesday, Schumer appeared to acknowledge criticism of his pace.

    “I know many of you have spent months calling on us to act,” he said. “I hear you. I hear you loud and clear.”

    But he described AI as a novel issue for which Congress lacks a guide.

    “It’s not like labor, or healthcare, or defense, where Congress has had a long history we can work off of,” he said. “Experts aren’t even sure which questions policymakers should be asking. In many ways, we’re starting from scratch.”

    Schumer described his plan as laying “a foundation for AI policy” that will do “years of work in a matter of months.”

    To guide that process, Schumer expanded on a set of principles he first announced in April. Formally unveiling the framework on Wednesday, Schumer said any legislation on AI should be geared toward facilitating innovation before addressing risks to national security or democratic governance.

    “Innovation first,” Schumer said, “but with security, accountability, [democratic] foundations and explainability.”

    The last two pillars of his framework, Schumer said, may be among the most important, as unrestricted artificial intelligence could undermine electoral processes or make it impossible to critically evaluate an AI’s claims.

    Schumer stopped short of calling for any specific proposals. At one point, he acknowledged that a consensus may even emerge recommending against major government intervention on the technology.

    But he was clear on one point: “We do — we do — need to require companies to develop a system where in simple and understandable terms users understand why the system produced a particular answer, and where that answer came from.”

    The Senate may still be a long way off from unveiling any comprehensive proposal, however. Schumer predicted that the process is likely to take longer than weeks but shorter than years.

    “Months would be the proper timeline,” he said.


  • ‘Serious concerns’: Top companies raise alarm over Europe’s proposed AI law | CNN Business



    Dortmund, Germany (CNN) —

    Dozens of Europe’s top business leaders have pushed back on the European Union’s proposed legislation on artificial intelligence, warning that it could hurt the bloc’s competitiveness and spur an exodus of investment.

    In an open letter sent to EU lawmakers Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive AI rules.

    Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, founder of British chip designer Arm.

    “In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.

    They argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.

    Since the craze over generative AI began this year, technologists have warned of the potential dark side of systems that allow people to use machines to write college essays, take academic tests and build websites. Last month, hundreds of top experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    The EU proposal applies a broad brush to such software “regardless of [its] use cases,” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.

    “Such regulation could lead to highly innovative companies moving their activities abroad” and investors withdrawing their capital from European AI, the group wrote.

    “The result would be a critical productivity gap between the two sides of the Atlantic.”

    The executives are calling for policymakers to revise the terms of the bill, which was agreed upon by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.

    “In a context where we know very little about the real risks, the business model, or the applications of generative AI, European law should confine itself to stating broad principles in a risk-based approach,” the group wrote.

    The business leaders called for a regulatory board of experts to oversee these principles and ensure they can be continuously adapted to changes in the fast-moving technology.

    The group also urged lawmakers to work with their US counterparts, noting that regulatory proposals had also been made in the United States. EU lawmakers should try to “create a legally binding level playing field,” the executives wrote.

    If such action isn’t taken and Europe is constrained by regulatory demands, it could hurt the region’s international standing, the group suggested.

    “Like the invention of the Internet or the breakthrough of silicon chips, generative AI is the kind of technology that will be decisive for the performance capacity and therefore the significance of different regions,” it said.

    Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have also laid out plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has used high-profile trips around the world in recent weeks to call for coordinated international regulation of AI.

    The EU rules are the world’s “first ever attempt to enact” legally binding rules that apply to different areas of AI, according to the European Parliament.

    Negotiators of the AI Act hope to reach an agreement before the end of the year, and once the final rules are adopted by the European Parliament and EU member states, the act will become law.

    As they stand now, the rules would ban AI systems deemed to be harmful, including real-time facial recognition systems in public spaces, predictive policing tools and social scoring systems, such as those in China.

    The Act also outlines transparency requirements for AI systems. For instance, systems such as ChatGPT would have to disclose that their content was AI-generated and provide safeguards against the generation of illegal content.

    Engaging in prohibited AI practices could lead to hefty fines: up to €40 million ($43 million) or an amount equal to up to 7% of a company’s worldwide annual turnover, whichever is higher.
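    The "whichever is higher" rule reduces to a one-line formula. A quick sketch using the figures reported above (the turnover values in the examples are hypothetical):

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Fine ceiling for prohibited AI practices under the draft text:
    the greater of a flat EUR 40 million or 7% of worldwide annual turnover."""
    return max(40_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% is EUR 140 million, above the flat cap.
print(max_ai_act_fine(2_000_000_000))  # 140000000.0
# A provider with EUR 100 million turnover: the flat EUR 40 million cap applies.
print(max_ai_act_fine(100_000_000))  # 40000000.0
```

The percentage-based prong is what makes the ceiling scale with company size, which helps explain why the largest firms are the loudest critics.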

    But penalties would be “proportionate” and consider the market position of small-scale providers, suggesting there could be some leniency for startups.

    Not everyone has pushed back on the legislation so far. Earlier this month, Digital Europe, a trade association that counts SAP (SAP) and Ericsson (ERIC) among its members, called the rules “a text we can work with.”

    “However, there remain some areas which can be improved to ensure Europe becomes a competitive hub for AI innovation,” the group said in a statement.

    Dragos Tudorache, a Romanian member of the European Parliament who co-led the bill’s drafting, said he was convinced that those who signed the new letter “have not read the text but have rather reacted on the stimulus of a few.”

    “The only concrete suggestions made are in fact what the [draft] text now contains: an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency. Nothing else,” he said in a statement.

    “It is a pity that the aggressive lobby of a few is capturing other serious companies in the net, which unfortunately undermines the undeniable lead that Europe has taken.”

    Brando Benifei, an Italian member of the European Parliament who also co-led the drafting of the legislation, told CNN “we will listen to all concerns and stakeholders when dealing with AI regulation, but we have a firm commitment to deliver clear and enforceable rules.”

    “Our work could positively affect the global conversation and direction when dealing with artificial intelligence and its impact on fundamental rights, without hindering the necessary pursuit of innovation,” he said.


  • Tired of Elon Musk? Here are the Twitter alternatives you should know about | CNN Business




    CNN —

    When Elon Musk took over Twitter in October and began upending the platform, there weren’t many viable alternatives for frustrated users. Now, there may be too many.

    A growing number of services have launched or gained traction in recent months by appealing to users who are uncomfortable with Musk’s decisions to slash Twitter’s staff, overhaul the verification process, reinstate numerous incendiary accounts and most recently impose temporary read limits on tweets.

    Bluesky, Mastodon and Spill are among the many social apps vying for users over the last several months, with services that look and feel strikingly similar to Twitter. But now this increasingly crowded marketplace may be disrupted by the most dominant social media company: Meta.

    Meta’s Twitter clone, Threads, launched Wednesday and amassed more than 70 million sign-ups as of Friday morning, thanks to a decision to tie the app to Instagram. Its user base is already far larger than those of newer rivals, putting Threads on pace to rapidly catch up to Twitter, which had 238 million active users last year before Musk took the company private.

    In interviews, some other Twitter competitors took jabs at Meta’s effort and expressed confidence in their ability to grow and maintain an audience, even if it ends up being much smaller than what Mark Zuckerberg’s company can attract.

    “Threads leans heavily on celebrities and people with large Instagram followings, and therefore risks being more of a megaphone for the established, rather than something for everyone,” Sarah Oh, a former Twitter employee and founder of rival app T2, told CNN in an email.

    Spill co-founder and CEO Alphonzo Terrell said the company is “thrilled to see so much innovation in the social space” and remains “confident in our roadmap.”

    Here’s what you should know about the current crop of services trying to take on Twitter.

    Threads is Meta’s long-anticipated answer to Twitter and the biggest threat to the social network Musk bought for $44 billion. Threads is intended to offer a space for real-time conversations online, a function that has long been Twitter’s core selling point, and it’s doing so in part by adopting many of Twitter’s most recognizable features.

    The app has already attracted a long list of celebrities, brands and other VIP users, as well as many who clearly appear to be frustrated with Musk’s Twitter. And Zuckerberg isn’t just looking to catch up to Twitter; he wants to build a service that’s far larger.

    “It’ll take some time, but I think there should be a public conversations app with 1 billion+ people on it. Twitter has had the opportunity to do this but hasn’t nailed it,” Zuckerberg wrote on Threads. “Hopefully we will.”

    Launched by former Twitter employees, Spill says it strives to be a “visual conversation at the speed of culture.”

    The site is visually driven, pushing GIFs, memes and video, which makes it more of a destination for creative communities. Spill has also emerged as a haven for Black Twitter users and marginalized communities seeking a safe space online.

    While the traction for Threads was unique, Spill has gained recently, too. Last weekend, amid renewed chaos at Twitter over the read limits, Spill gained “hundreds of thousands of new users,” according to Terrell, the CEO.

    T2, another service created by former Twitter employees, offers a social feed of posts with 280-character limits. The key selling point that sets it apart from others is its focus on safety, according to Oh, the founder.

    “We really do want to create an experience that allows people to share what they want to share without fearing risk of things like abuse and harassment, and we feel like we’re really well positioned to deliver on that,” Oh told CNN in February.

    In a statement this week, Oh doubled down on safety as a possible differentiator with Threads as well, raising the question of whether Meta had “learned from their past mistakes” after years of scrutiny on its struggles to police its own platforms.

    Bluesky, a service backed by Twitter co-founder Jack Dorsey, looks identical to Twitter, with one key difference. The app runs on a decentralized network, which provides users more control over how the service is run, the data is stored, and the content is moderated.

    Bluesky was formed independently of Twitter while Dorsey was serving as CEO, but it was funded by the company until it became an independent organization in February 2022. In a tweet introducing the idea in 2019, Dorsey said the project also planned to “build an open community around it, inclusive of companies & organizations, researchers, civil society leaders,” but warned “this isn’t going to happen overnight.”

    This week, Dorsey appeared to acknowledge that the market is now flooded with “Twitter clones.”

    Also built on a decentralized network, Mastodon launched before Musk took over Twitter but skyrocketed in popularity after the acquisition.

    Mastodon lets users join a slew of different servers run by various groups and individuals, rather than one central platform controlled by a single company like Twitter or Instagram. Mastodon is also free of ads. It’s developed by a nonprofit run by Eugen Rochko, who created Mastodon in 2016.

    After joining, users pick a server, with options ranging from general-interest servers such as mastodon.world to regional servers like sfba.social, which is aimed at people in the San Francisco Bay Area, to servers built around various interests. (Many servers review new sign-ups before approving them.)

    Launched publicly in June 2022, Cohost offers a text-based social media feed with followers, reposts, likes and comments, similar to Twitter. However, the feed is strictly chronological, with no ads, no trending topics and no displayed interactions (think hidden like counts and follower lists).

    Part of Cohost’s goal is to create a less hostile space for open dialogue, according to the website.

    “People who hear ‘Facebook has a Twitter replacement now!’ and don’t immediately run for the hills are unlikely to be interested in anything we’re doing,” said Jae Kaplan, co-founder of anti-software software club, the company that develops cohost. “We’re in separate market niches. I doubt they’re going to do anything to try and appeal to our users, and we’re not going to do anything to try and appeal to their users.”


  • New lawsuit claims Elon Musk’s Twitter owes more severance to former employees | CNN Business




    New York (CNN) —

    A former Twitter employee on Wednesday filed a new lawsuit against Twitter and its owner, Elon Musk, alleging that the company failed to provide the full amount of severance it had promised employees prior to mass layoffs last November.

    The lawsuit, which was filed in federal district court in California and seeks class action status, asks the court to order Musk and Twitter to pay the additional severance benefits allegedly owed to former employees, in an amount no less than $500 million.

    The complaint was brought on behalf of Courtney McMillian, a former human resources leader at Twitter who was part of the mass layoffs Musk conducted the week after he bought the company last year. It alleges that Twitter made repeated assurances to employees about its severance plan amid Musk’s takeover in an effort to retain workers. In particular, the complaint claims that Twitter had promised senior employees severance of six months of base pay plus one week for every year of service, in addition to other benefits. Instead, Musk’s Twitter provided laid-off employees with a total of three months of pay, including the state and federally mandated notice periods.

    In response to a request for comment on the lawsuit, Twitter sent CNN an automated poop emoji.

    In his nine months owning the company, Musk has cut Twitter’s staff by around 80% from its pre-takeover size.

    The lawsuit is just the latest legal action brought against Twitter by former employees with severance-related claims. More than 1,500 former employees have filed arbitration claims, after Twitter pushed for anyone who had signed an arbitration agreement while working at the company to pursue their claims out of court.

    But Kate Mueting, a lawyer working on the suit, said that Wednesday’s case relies on a federal law, the Employee Retirement Income Security Act, whose claims the firm argues fall outside the scope of Twitter’s arbitration agreement. That means that, if the court grants the suit’s request for class action status, former employees may be able to participate whether or not they signed the arbitration agreement.

    Twitter is also facing lawsuits from vendors, landlords and business partners who claim the company has failed to pay what they are owed, as well as music publishers who have alleged copyright infringement on the platform. A lawyer for the company last week also sent a letter threatening to sue Meta over its new rival platform, Threads.


  • TikTok ‘stress test’ shows it’s not ‘fully ready’ for looming EU social media rules, commissioner says | CNN Business




    Washington (CNN) —

    TikTok has “more work” to do to meet tough new European standards that are coming for social media and content moderation, according to a top EU official who performed a “stress test” of the company this week.

    The report by EU Commissioner Thierry Breton comes ahead of a looming Aug. 25 deadline for platforms such as TikTok to comply with the Digital Services Act (DSA) — a package of regulations aimed at battling misinformation, potential privacy abuses and illegal content, among other things.

    European Commission staff conducted the TikTok test on Monday at the company’s Dublin offices, according to a statement from the commissioner, and Breton outlined the results of the voluntary inspection to CEO Shou Chew on Tuesday.

    “TikTok is dedicating significant resources to compliance,” Breton said, pointing to changes TikTok has made to its recommendation algorithms and its transparency procedures as evidence the company appears to be taking its obligations seriously.

    But, he added, the test results also showed “more work is needed to be fully ready for the compliance deadline.”

    “Now it is time to accelerate to be fully compliant,” Breton said, indicating that officials will be revisiting at the end of the summer whether TikTok has closed the gap.

    TikTok didn’t immediately respond to a request for comment on the test results.

    TikTok isn’t the only large tech platform to submit to an EU stress test. Last month, European officials evaluated Twitter’s platform for DSA compliance and also announced plans to stress test Facebook-parent Meta’s services.
