ReportWire

Tag: iab-technology & computing

  • TikTok ‘stress test’ shows it’s not ‘fully ready’ for looming EU social media rules, commissioner says | CNN Business



    Washington (CNN) —

    TikTok has “more work” to do to meet tough new European standards that are coming for social media and content moderation, according to a top EU official who performed a “stress test” of the company this week.

    The report by EU Commissioner Thierry Breton comes ahead of a looming Aug. 25 deadline for platforms such as TikTok to comply with the Digital Services Act (DSA) — a package of regulations aimed at battling misinformation, potential privacy abuses and illegal content, among other things.

    European Commission staff conducted the TikTok test on Monday at the company’s Dublin offices, according to a statement from the commissioner, and Breton outlined the results of the voluntary inspection to CEO Shou Chew on Tuesday.

    “TikTok is dedicating significant resources to compliance,” Breton said, pointing to changes TikTok has made to its recommendation algorithms and its transparency procedures as evidence the company appears to be taking its obligations seriously.

    But, he added, the test results also showed “more work is needed to be fully ready for the compliance deadline.”

    “Now it is time to accelerate to be fully compliant,” Breton said, indicating that officials will be revisiting at the end of the summer whether TikTok has closed the gap.

    TikTok didn’t immediately respond to a request for comment on the test results.

    TikTok isn’t the only large tech platform to submit to an EU stress test. Last month, European officials evaluated Twitter’s platform for DSA compliance and also announced plans to stress test Facebook-parent Meta’s services.


  • ChatGPT creator pulls AI detection tool due to ‘low rate of accuracy’ | CNN Business




    (CNN) —

    Less than six months after ChatGPT-creator OpenAI unveiled an AI detection tool with the potential to help teachers and other professionals detect AI-generated work, the company has pulled the feature.

    OpenAI quietly shut down the tool last week, citing a “low rate of accuracy,” according to an update to the original company blog post announcing the feature.

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company wrote in the update. OpenAI said it is also committed to helping “users to understand if audio or visual content is AI-generated.”

    The news may renew concerns about whether the companies behind a new crop of generative AI tools are equipped to build safeguards. It also comes as educators prepare for the first full school year with tools like ChatGPT publicly available.

    The sudden rise of ChatGPT quickly raised alarms among some educators late last year over the possibility that it could make it easier than ever for students to cheat on written work. Public schools in New York City and Seattle banned students and teachers from using ChatGPT on the district’s networks and devices. Some educators moved with remarkable speed to rethink their assignments in response to ChatGPT, even as it remained unclear how widespread use of the tool was among students and how harmful it could really be to learning.

    Against that backdrop, OpenAI announced the AI detection tool in February to let users check whether an essay was written by a human or by AI. The feature, which worked on English-language AI-generated text, was powered by a machine learning system that takes a piece of text as input and assigns it to one of several categories. After a user pasted a body of text, such as a school essay, into the tool, it returned one of five possible verdicts, ranging from “likely generated by AI” to “very unlikely.”
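    The five-outcome scheme described above amounts to bucketing a classifier's confidence score. A minimal sketch of that idea follows; the thresholds and the three middle labels are illustrative assumptions, not OpenAI's published cutoffs.

```python
def label_ai_likelihood(score: float) -> str:
    """Bucket a classifier confidence score (0.0 = surely human-written,
    1.0 = surely AI-written) into one of five verdicts.

    The endpoint labels mirror the ones reported for OpenAI's tool;
    the middle labels and all thresholds here are hypothetical.
    """
    if score >= 0.90:
        return "likely generated by AI"
    if score >= 0.65:
        return "possibly generated by AI"
    if score >= 0.40:
        return "unclear if generated by AI"
    if score >= 0.15:
        return "unlikely generated by AI"
    return "very unlikely generated by AI"
```

    Coarse buckets like these are one reason the tool was easy to misread: a mid-range score says very little either way, which is consistent with OpenAI's own caveats below.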

    But even on its launch day, OpenAI admitted the tool was “imperfect” and results should be “taken with a grain of salt.”

    “We really don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes,” Lama Ahmad, policy research director at OpenAI, told CNN at the time.

    While the tool might provide another reference point, such as comparing past examples of a student’s work and writing style, Ahmad said “teachers need to be really careful in how they include it in academic dishonesty decisions.”

    Although OpenAI may be shelving its tool for now, there are some alternatives on the market.

    Other companies such as Turnitin have also rolled out AI plagiarism detection tools that could help teachers identify when assignments are written by AI. Meanwhile, Princeton student Edward Tian introduced a similar AI detection tool, GPTZero.


  • Pay $84 a year for Twitter Blue or lose your checkmark beginning April 1, Twitter says | CNN Business



    New York (CNN) —

    Twitter’s free blue “verified” checkmarks for notable users may finally be coming to an end.

    Ever since Elon Musk took control of the company in October, he’s been threatening to remove the “legacy” checkmarks that confirmed the identities of users like government officials, corporations, journalists, celebrities and other high-profile tweeters.

    Now Musk may follow through: “On April 1st, we will begin winding down our legacy verified program and removing legacy verified checkmarks,” the company wrote in a tweet Thursday.

    A caveat, however: Twitter says this policy will go into effect starting on April Fool’s Day. Musk in particular has been known for April 1 trolling, including in 2018 when he falsely tweeted that his electric vehicle company Tesla (TSLA) had gone bankrupt.

    “To keep your blue checkmark on Twitter, individuals can sign up for Twitter Blue,” the company’s tweet continued.

    Twitter Blue is a subscription service, relaunched by Musk late last year, that costs individuals $8 a month or $84 a year. Charging fees provides a needed revenue stream for Twitter, as the company currently collects virtually all of its revenue from advertisers, who have been fleeing the platform since Musk took over.

    Charging for Twitter verification provides both additional revenue to Twitter and a way for Musk to show his disdain for government agencies, journalists and others. Yet building a replacement for the legacy verification program has proved to be fraught.

    Twitter Blue first launched in the pre-Musk days of 2021, as a subscription service offering “power features” like undoing a tweet and saving bookmarks to folders. Musk relaunched the program in November 2022, including a blue checkmark in the features for paying users.

    Immediately the program was flooded with users who paid for counterfeit accounts pretending to be users such as former President Donald Trump, Rudy Giuliani, LeBron James and Nintendo.

    Before being suspended, the impostor Nintendo account tweeted an image of video game character Mario giving the viewer the middle finger. The LeBron James account falsely claimed the athlete had requested a trade. The fake Trump account tweeted, “This is why Elon Musk’s plan doesn’t work.”

    Musk pulled the Twitter Blue program for a few weeks and relaunched it yet again in December, with additional steps for reviewing and approving subscribers. Beyond the checkmark, Blue also lets paying users edit a tweet up to 5 times within 30 minutes, create tweets up to 4,000 characters long and post HD videos.

    The company also says Twitter Blue users will see 50% fewer ads in their home timelines, and that their tweets will be prioritized among replies, mentions and searches.

    For companies and other organizations, Twitter Blue costs $1,000 a month for the main account and $50 a month for each additional related account.

    – CNN’s Brian Fung contributed to this report.


  • Welcome to the era of viral AI generated ‘news’ images | CNN Business



    New York (CNN) —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    Midjourney has emerged as a popular tool for users to create AI-generated images.

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI-generated technologies grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • Apple’s Weather app briefly went down and rained on everyone’s morning | CNN Business



    New York (CNN) —

    Anyone using their iPhone to check the weather on Tuesday may have had better luck just looking out the window.

    Apple’s default Weather app briefly went down for many users on Tuesday morning, showing blank screens with no data. The result: many users felt clueless about what was happening outside.

    “The Apple Weather app has been down all morning and I never imagined how much disruption that would cause,” wrote one Twitter user. Another tweeted an apparent “Top Gun” reference: “Biggest storm of the season is about to hit Fargo and the Apple weather app is down. I’m flying blind, Goose.”

    There are numerous other sources one could use to determine the weather, including various apps, websites, local news reports and, of course, one’s own eyes. But the apparent disruption from the outage highlights how reliant some have grown on certain popular applications.

    Apple confirmed the outage in a Twitter reply to a frustrated user, noting that some app users may be experiencing a “temporary outage.” The company’s System Status page also flagged the Weather app as facing an ongoing issue.

    Apple did not immediately respond to CNN’s request for comment.

    One CNN reporter saw only a handful of cities on the Weather app home screen load with full data, while most cities remained completely blank. The app usually displays information including hourly forecast, 10-day forecast, air quality index, precipitation, UV index and more.

    The app was revamped as part of the iOS 16 release in September after Apple bought popular weather service Dark Sky in 2020 and fully integrated its features into the newest operating system.


  • How Elon Musk upended Twitter and his own reputation in 6 months as CEO | CNN Business



    New York (CNN) —

    When Elon Musk first agreed to buy Twitter, he promised to make the company “better than ever,” with greater transparency, fewer bots, a stronger business and more of what he called “free speech.”

    But six months after Musk took control of Twitter, the future of the company and its platform has never been less certain.

    After acquiring the social media platform for $44 billion in late October, Musk reportedly now values Twitter at around $20 billion — and some who track the company believe even that estimate is likely high. Musk repeatedly warned that Twitter could be at risk of filing for bankruptcy, only to later claim he had brought it back from the brink by slashing costs: laying off 80% of Twitter’s staff and, according to multiple lawsuits, failing to pay some of its bills. But it’s not clear just how and when Musk might return Twitter to growth.

    He has antagonized journalists and news outlets that have long been central to the platform’s success, overseen policy changes that threaten to make Twitter less safe or reliable, made the platform less transparent to researchers and scared away many top advertisers. Musk’s primary plan to grow Twitter’s business through an overhauled subscription strategy has resulted in much chaos but only a limited number of actual subscriptions.

    In the process, Musk has also upended his own reputation. Once known by much of the public primarily for his innovative efforts to launch rockets and build electric cars, Musk has instead spent much of the past six months in the headlines for controversial policy and feature changes at Twitter, draconian cuts to staff resulting in frequent service disruptions, and briefly banning several prominent journalists. He’s also tweeted a long list of eccentric remarks from his personal Twitter account, including sharing conspiracy theories and publicly mocking a Twitter worker with a disability who was unsure whether he’d been laid off.

    “If he had done nothing except cut costs, then Twitter would have been okay,” said Leslie Miley, a former Twitter engineering manager who started its product safety and security team and left the company in 2015. He has since held roles at Google, Microsoft and the Obama Foundation. “If you had just let everyone go, treated them with respect, and just let the service run for two years, you probably would be okay.”

    Now, though, Miley said he expects Twitter will “eventually go down the road of MySpace.”

    “It’s going to take a little bit longer … [but] I think Twitter is on its way to irrelevance,” he said. “There is no strategy to acquire or retain users because you are offering them no value.”

    Twitter, which has slashed much of its public relations team under Musk, responded to CNN’s request for comment on this story with the auto-reply from its press email that it has used for weeks: a poop emoji.

    For years, what differentiated Twitter from other social platforms was that it served as a central hub for real-time news. It was a place for ordinary people to read and even engage in conversation with celebrities, business leaders and other newsmakers.

    Many of Musk’s recent moves at the platform threaten to undermine that purpose, not to mention the larger information ecosystem — and it’s not clear the efforts will improve the company’s business.

    “Twitter has never been perfect, it had a lot of problems but it was critical global infrastructure for information that Elon Musk is now systematically, frankly, vandalizing,” former Twitter chair of global news Vivian Schiller told CNN in a recent interview.

    Most recently, Musk removed the legacy blue check marks that verified the identities of prominent users, saying he would instead make the checks available only to those who pay $8 per month for Twitter Blue in the interest of “treating everyone equally.”

    “There shouldn’t be a different standard for celebrities,” Musk said in a tweet earlier this month.

    But the move may make it easier for bad actors to impersonate high-profile people and harder for users to trust the veracity and authenticity of information on the platform. What’s more, Musk then decided to sponsor the blue checks for certain celebrities, including Stephen King and LeBron James, in effect creating exactly the “different standard” for famous users he’d professed to want to avoid.

    Now, Musk says content from verified users will be promoted on the platform, potentially making it harder for users who can’t afford a subscription, or simply don’t want to pay Musk for one, to find an audience on the platform. And the new paid verification system won’t necessarily rid the platform of bots, an issue Musk spent months railing on while trying to get out of the acquisition deal last year, according to Filippo Menczer, a computer science professor at Indiana University and director of the Observatory on Social Media.

    “You can create fake accounts and pay $8 [for a blue check] … so if you are a well-funded bad actor, you can do more damage now than you could before,” Menczer said. “And if you are a reliable source and you’re not well-funded, your information will not be as visible as before.”

    Menczer added that the result could be “less free speech, because you’re drowning out the speech of regular people [with speech] by people who either have the technical skills or the money to manipulate the system.”

    Twitter’s move to charge users of its API will also make it harder for researchers to identify and warn the platform about inauthentic activity, Menczer said, and could disrupt other positive uses of the platform that contributed to its reputation as a news hub. Weather agencies, for example, have warned that the change could make it harder for them to release automated emergency weather alerts.

    Any social network lives or dies based on its ability to retain and attract users — and there’s real reason for Twitter to be worried.

    A number of users, celebrities and media organizations have said they plan to leave Twitter over Musk’s recent policy changes — which often appear to be made on a whim without any real principles.

    NPR, BBC and CBC left Twitter after opposing a controversial new “government-funded media” label that they say was misleading. CenterLink, a global nonprofit that represents hundreds of centers providing services to LGBTQ communities, said it would no longer use Twitter after the platform removed protections for transgender users from its hateful conduct policy. And some high-profile users, such as bullying activist Monica Lewinsky, have threatened to exit the platform over the blue check change, now that they may be at greater risk of impersonation on Twitter.

    There remain few alternatives that offer similar features and scale to Twitter, but a growing list of upstart competitors has emerged since Musk’s takeover. At least one large rival, Facebook-parent Meta, has also confirmed it’s working on a service that sounds a lot like Twitter.

    “Almost everything he said he was going to do, he has screwed up in any number of ways,” Miley said. “If it weren’t so damaging to people and organizations who have depended upon the platform, it would be funny. But it’s not actually funny because it has degraded people’s ability to communicate effectively.”

    All of the chaos has made it difficult to convince advertisers, which previously made up 90% of Twitter’s revenue, to rejoin the platform, after many halted spending in the wake of Musk’s takeover over concerns about increased hate speech, as well as confusion about layoffs and the platform’s future direction.

    Just 43% of Twitter’s top 1,000 advertisers as of September — the month before Musk’s takeover — were still advertising on the platform in April, according to data from market intelligence firm Sensor Tower.

    Musk, for his part, has said that Twitter’s usage has increased since his takeover and that advertisers are steadily returning to the platform. But because he took the company private, he is not obligated to make financial disclosures and followers of the company are left to take him at his word.

    Musk built his reputation by overhauling Tesla, helping to launch a widespread shift away from gas cars to electric vehicles and growing SpaceX into a space transport juggernaut. Now, he appears to be attempting a similar overhaul at Twitter — upending the tried-and-true digital advertising business in favor of a subscription model that no other social media platform has yet been able to find large scale success with.

    “I give him some credit for trying a different business model, I think the business model based on user data is quite abusive,” said Luigi Zingales, professor at the University of Chicago Booth School of Business, although Musk has also attempted to improve Twitter’s targeted advertising business.

    Other tech companies have followed Musk’s lead in certain respects. Facebook-parent Meta copied Twitter by launching a paid verification option, and Meta, along with a number of other tech companies, has undergone multiple rounds of cost-cutting since last fall. Twitter appears to have given cover for some of these moves, while other firms’ somewhat more principled approaches have made them look better by comparison.

    For Twitter and Musk, the stakes for success are high: Musk’s relationships with banks and investors for future endeavors could hinge in part on his performance at the social media firm, which he took on billions of dollars in debt to purchase. Banks “will sit down and say, what kind of cred does this guy have? Will we find him making these shoot-from-the-lip sort of dictates that, in fact, throw our money down a hole?” said Columbia Business School management professor William Klepper.

    Any change to Musk’s reputation from his time leading Twitter could also ultimately have ripple effects for his broader business empire, causing potential investors, recruits and customers to think twice about betting on one of his companies. Tesla (TSLA) shareholders recently complained to the company’s board that Musk appears “overcommitted.”

    “His reputation has been diminished significantly with Twitter … and once you lose it, it’s very difficult to recover,” Klepper said. “It would be a good opportunity for [Musk] to rethink whether or not … he’s really leadership material.”

    Musk in December pledged to step down as Twitter CEO after millions of users voted in favor of his exit in a poll he posted to the platform. But for now, he remains “Chief Twit.”


  • New York MTA resumes transit alerts on Twitter | CNN Business




    (CNN) —

    New York’s Metropolitan Transportation Authority said it would resume posting automated transit alerts to Twitter on Thursday after the social media company backtracked on a plan to charge public service accounts for access to the platform.

    In a statement Thursday, MTA Acting Chief Customer Officer Shanifah Rieara said Twitter had tried to charge the MTA more than $500,000 a year for access to its platform, but that the MTA refused.

    “We’re glad that Twitter has committed to offering free API access for public service providers,” the MTA tweeted, referring to the software interface that enables third parties to create automated posts on Twitter.

    In another tweet, it added: “We know that customers missed us, so starting today, we’ll resume posting service alerts on @NYCTSubway, @NYCTBus, @LIRR, and @MetroNorth.”
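    For a sense of what an automated alert pipeline like the MTA's has to handle, here is a minimal sketch of composing a post under Twitter's 280-character limit. The function name and the message layout are illustrative assumptions, not the MTA's actual format.

```python
TWEET_LIMIT = 280  # Twitter's per-post character limit


def format_service_alert(line: str, status: str, detail: str) -> str:
    """Compose the text of an automated transit alert and truncate it
    to fit a single tweet.

    The "<line> <status>: <detail>" layout is hypothetical; real
    systems would then send the text to the posting API.
    """
    text = f"{line} {status}: {detail}"
    if len(text) > TWEET_LIMIT:
        # Reserve one character for the ellipsis marker.
        text = text[: TWEET_LIMIT - 1] + "…"
    return text
```

    A short alert passes through unchanged, while an over-long one is cut to exactly 280 characters; the actual delivery step would be a call to Twitter's posting API, which is what the fees described below applied to.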

    In recent weeks, Twitter has sought to charge businesses for the ability to access its platform. Its paid plans cost as much as $2.5 million a year for top-tier access. The paywall’s introduction in March prompted widespread warnings by public services of possible disruptions to weather and transit alerts.

    Amid the outcry, Twitter changed course on Tuesday and said that verified government accounts would once again be able to post automated tweets for free.


  • EU approves Microsoft’s deal to buy Activision Blizzard | CNN Business




    (CNN) —

    European regulators have approved Microsoft’s $69 billion acquisition of Activision Blizzard, handing the technology giant a victory at a time when the deal is being challenged in other countries.

    While the merger could harm competition in some respects, particularly in the fast-growing market for cloud gaming services, concessions by Microsoft were enough to mitigate antitrust concerns stemming from the deal, the European Commission said in a statement.

    Among Microsoft’s concessions was a 10-year commitment to let European consumers play Activision titles on any cloud gaming service. Microsoft also committed that it would not downgrade the quality or content of its games made available on rival streaming platforms.

    “These commitments fully address the competition concerns identified by the Commission and represent a significant improvement for cloud game streaming compared to the current situation,” the Commission said.

    The deal, which would make Microsoft the third-largest game publisher in the world after Tencent and Sony, is still being challenged in the United States and the UK.

    In a statement, Microsoft said its commitment on game streaming would go beyond the European Union.

    “The European Commission has required Microsoft to license popular Activision Blizzard games automatically to competing cloud gaming services,” said Microsoft President Brad Smith. “This will apply globally and will empower millions of consumers worldwide to play these games on any device they choose.”

    Activision CEO Bobby Kotick called the requirements “stringent” and pledged to expand investments in EU workers.

    “Our talented teams in Sweden, Spain, Germany, Romania, Poland and many other European countries have the skills, ambition, and government support needed to compete effectively on a global scale,” Kotick said in a statement. “We expect these teams to grow and prosper given their governments’ firm but pragmatic approach to gaming.”


  • TikTok sues Montana over new law banning the app | CNN Business



    New York
    CNN
     — 

    TikTok on Monday filed a suit against Montana over a bill that would ban the popular short-form video app in the state starting early next year.

    TikTok alleges that the ban violates the US Constitution, including the First Amendment, as well as other federal laws, according to a complaint filed in Montana District Court. The company also claims concerns that the Chinese government could access the data of US TikTok users – which are a key motivation behind the ban – are “unfounded.”

    The bill, signed by Montana Gov. Greg Gianforte last week, would impose a fine of $10,000 per day on TikTok or app stores for making the app available for download on personal devices in the state starting on January 1, 2024.

    “We are challenging Montana’s unconstitutional TikTok ban to protect our business and the hundreds of thousands of TikTok users in Montana,” TikTok spokesperson Brooke Oberwetter said in a statement. “We believe our legal challenge will prevail based on an exceedingly strong set of precedents and facts.”

    Emily Flower, a spokesperson for Montana’s Attorney General, told CNN: “We expected a legal challenge and are fully prepared to defend the law.”

    The Montana law stems from growing criticism of TikTok over its ties to China through its parent company, ByteDance. Many US officials have expressed fears that the Chinese government could potentially access US data via TikTok for spying purposes, though there is no evidence that the Chinese government has ever done so. Some federal lawmakers have also called for a ban.

    Montana’s ban went a step beyond other states that have restricted TikTok from government devices. But legal and technology experts say there are challenges for Montana, or any state, to enforce such a ban. Even if the law is allowed to stand, the practicalities of the internet may make it impossible to keep TikTok out of the hands of users.

    TikTok said in the complaint that the app is used by “hundreds of thousands” of people in Montana to “communicate with each other and others around the world on an endless variety of topics, from business to politics to the arts.”

    “This unprecedented and extreme step of banning a major platform for First Amendment speech, based on unfounded speculation about potential foreign government access to user data and the content of the speech, is flatly inconsistent with the constitution,” TikTok said in the complaint.

    TikTok is asking the court to invalidate the ban and to permanently enjoin Montana from enforcing it.

    The legal challenge by TikTok is an indicator of the hurdles that Montana and other lawmakers could face in attempting to restrict the platform in the United States. A group of TikTok creators also sued Montana last week over the state’s ban, saying it violates their First Amendment rights.

    CNN’s Brian Fung contributed to this report.


  • AI industry and researchers sign statement warning of ‘extinction’ risk | CNN Business



    Washington
    CNN
     — 

    Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing in a brief statement that the threat of an AI extinction event should be a top global priority.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the statement published by the Center for AI Safety.

    The statement was signed by leading industry officials including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others.

    The statement highlights wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves.

    Still, the flood of hype and investment into the AI industry has led to calls for regulation at the outset of the AI age, before any major mishaps occur.

    The statement follows the viral success of OpenAI’s ChatGPT, which has helped heighten an arms race in the tech industry over artificial intelligence. In response, a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

    Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after “suddenly” realizing “that these things are getting smarter than us.”

    Dan Hendrycks, director of the Center for AI Safety, said in a tweet Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.

    Hendrycks compared Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

    “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”


  • I tried Apple’s new headset. Here’s what it’s like to use | CNN Business




    CNN
     — 

    It’s rare to find a new technology that feels groundbreaking. But last night, while sitting on a couch in a private demo room at Apple’s campus wearing its newly announced Vision Pro mixed reality headset, it felt like I’d seen the future — or at least an early and very pricey prototype of it.

    In the demo, which lasted 30 minutes, a virtual butterfly landed on my finger; a dinosaur with detailed scales tried to bite me; and I stood inches away from Alicia Keys’ piano as she serenaded me in a recording studio. When a small bear cub swam by me on a quiet lake during another immersive video, it felt so real that it reminded me of an experience with a loved one who recently passed away. I couldn’t wipe the tears inside my headset.

    Apple unveiled the headset, its most ambitious and riskiest new hardware offering in years, at a developer event earlier in the day. The headset blends both virtual reality and augmented reality, a technology that overlays virtual images on live video of the real world. At the event, Apple CEO Tim Cook touted the Vision Pro as a “revolutionary product,” with the potential to change how users interact with technology, each other and the world around them. He called it “the first product you look through, not at.”

    But it’s clearly a work in progress. The apps and experiences remain limited; users must stay tethered to a battery pack the size of an iPhone with just two hours of battery life; and the first minutes using the device can be off-putting. Apple also plans to charge $3,499 for the device when it goes on sale early next year – more than had been rumored and far more than other headsets on the market that have previously struggled to gain wide adoption.

    With its loyal following and impressive track record on hardware, Apple may be able to convince developers, early adopters and some enterprise customers to pay up for the device. But if it wants to attract a more mainstream audience, it will need a “killer app,” as the industry often refers to it – or several.

    Based on my demo, Apple still has a long way to go, but it’s off to a compelling start.

    Hours after the keynote event, I arrived at a building on Apple’s sprawling Cupertino, California, campus specifically constructed to stage demos and briefings for the new headset.

    I was met by an Apple employee who scanned my face to help customize the fit of the headset. Then I entered a small room where an optometrist asked if I wore glasses or corrective lenses. I had gotten Lasik surgery years ago, but others around me had their glasses scanned so the headset could present their specific prescription. It’s an incredible feat that differentiates Apple from competitors and ensures no frames need to be squeezed into the headset. But it’s unclear how the company plans to handle this process at scale if millions buy the device.

    The initial setup process was somewhat unpleasant: I felt a little nauseous and claustrophobic as I adjusted to the device. It tracked my eyes, scanned my hands and mapped the room to better tailor the augmented reality experience.

    But Apple has also taken steps to reduce the motion sickness problem that has plagued other headsets. The headset uses an R1 processor, a custom chip that cuts down on the latency issue found in similar products that can result in nausea.

    As many viewers were quick to point out on Monday, the headset itself looks like a pair of designer ski goggles. It features a soft adjustable strap on the top, a “digital crown” on the back – a bigger version than what you’d find on an Apple Watch – and another digital crown on the top that serves as a kind of home button. There’s also a wire connecting to an external battery pack.

    The headset itself felt light enough in the beginning, but even with Apple’s considerable design chops, I never shook the idea that there was a computer on my face. Fortunately, unlike other computing products, the headset did remain cool on my face throughout the experience, thanks largely to a quiet fan and airflow running through the system.

    Unlike other headsets, the new mixed reality headset also displays the eyes of its users on the outside, so “you’re never isolated from the people around you, you can see them and they can see you,” Alan Dye, vice president of human interface, said during the keynote.

    Sadly, I never got to see how my own eyes or anyone else’s looked through the headset during the demo.

    After putting on the device, I saw an iOS-like interface. I could easily hop in and out of apps, such as Messages, FaceTime, Safari and Photos, using just my eye movements and touching my thumb and pointer finger together to act as the “select” button. This was more intuitive than expected and worked even when my hands rested on my lap.

    Some app experiences were better than others, however. It was beautiful to see images in the Photos app presented before me in a larger-than-life manner, but it’s hard to imagine feeling the need to do this often on a couch back home. Vision Pro also offers a spatial photo option, which lets users view images and videos in 3D so you feel like you’re directly in the scene. Again, cool but unnecessary.

    During another demo, an Apple employee wearing a Vision Pro headset FaceTimed me from the other side of campus. Her “persona” – a digital representation which did not show her wearing the Vision Pro – appeared in front of me as we chatted about the event earlier in the day. She seemed real but it was clear she was not; she was a sort of pseudo-human. (Apple did not scan my face to create my own persona, which would otherwise be done through its Optic ID security feature during the setup phase.)

    The Apple employee then shared a virtual whiteboard – dragging, dropping and highlighting interior design images. Cook has focused on AR’s potential to foster collaboration, and it’s clear how this tool could be used in meetings to fulfill that promise. What’s less clear is why most employers would spend $3,499 per device per employee to make this happen rather than simply use Zoom.

    Like so much else about the product unveiling, this pitch felt mistimed. Earlier in the pandemic, more people might have jumped at the chance to create these virtual experiences while we worked and socialized almost entirely from home. Now, with more employees back in the office and companies looking to cut costs amid broader economic uncertainty, the justification for this pricey device seemed less clear.

    The real magic of the Vision Pro, however, is in the immersive videos. Watching an underwater scene from Avatar 2 in 3D, for example, was surreal, seemingly placing me right in the ocean with these fictional creatures. It’s easy to imagine buy-in from Hollywood filmmakers to create experiences just for the headset.

    Apple is also uniquely positioned here to supercharge the device with these experiences. It has close relationships in the entertainment industry, including with former Apple board member and Disney CEO Bob Iger, who announced in a pre-recorded video during the event that Disney+ will be available on the headset at launch. Apple teased new National Geographic, Marvel and ESPN experiences for the headset, too.

    Almost every new Apple product, from the iPhone to the Apple Watch, promises to use screens of varying sizes to change how we live, work and interact with the world. The Vision Pro has the potential to do all of that in an even more striking way. But unlike the first time I picked up an iPhone or a smartwatch, after 30 minutes of using Vision Pro, I was very content to put it down and return to the real world.


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business



    New York
    CNN
     — 

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify if the image is organic, or if it has been manipulated or generated by AI.

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

    “The hope,” Farid said, is to get to a point where “some teenager in his parents basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

    Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and then receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data.

    Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

    Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 for every 1,000 images as well as “annual contract deals” that offer a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when it is first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • Italy ties China’s hands at Pirelli over fears about chip technology | CNN Business



    London
    CNN
     — 

    Italy has imposed several curbs on Pirelli’s biggest shareholder, Sinochem, in a move aimed at blocking the Chinese government’s access to sensitive chip technology.

    The Italian government decided last week to make use of its so-called “Golden Power” regulations, designed to protect assets of strategic importance to the country, Pirelli said in a statement Sunday.

    The government order risks inflaming tensions between Europe and Beijing, and follows similar intervention by Germany and the United Kingdom to protect their semiconductor technology.

    Earlier this year, Europe joined a US-led effort to restrict China’s access to the most advanced chipmaking technology when the Netherlands — home to ASML Holding, a key supplier to the global semiconductor industry — said it would introduce export controls.

    Italy’s move comes as US Secretary of State Antony Blinken wraps up a high-stakes visit to China aimed at repairing strained relations between the world’s two biggest economies.

    Sinochem, owned by the Chinese government, is Pirelli’s biggest single shareholder, with a 37% stake, and has 60% of seats on the board of the Italian tire maker. CNN has contacted Sinochem for comment.

    In a statement Friday, the Italian government said Pirelli’s Cyber Tyre, which uses chip technology to collect vehicle data, is “configured as a critical technology of national strategic importance.”

    “Improper use of this technology can pose significant risks not only to the confidentiality of user data, but also to the possible transfer of information relevant to security,” the statement added.

    The order sets a host of limitations on Sinochem’s involvement in Pirelli, including a bar on it devising the company’s strategy and financial plans, or appointing a CEO.

    The government said these curbs would protect the “autonomy” of Pirelli and its management, as well as “information of strategic importance.”

    Europe is heavily reliant on China for trade and investment, but relations have come under strain from ideological differences, including over Russia’s war in Ukraine, and recent moves by European Union regulators and governments to limit China’s access to sensitive technology.

    The order takes a page out of this playbook. It requires that Pirelli refuse any requests from Sinochem’s owner — China’s State-owned Assets Supervision and Administration Commission of the State Council — for information sharing, including any information connected to the “know-how” of proprietary technologies.

    The government said “some” strategic decisions would require approval from at least 80% of board directors, a further limitation on Sinochem’s influence.

    Separately, Rome is also assessing whether to renew its partnership with Beijing on the Belt and Road Initiative — China’s global infrastructure and investment megaproject. Italy is the only Group of Seven nation to have joined the initiative.

    In a further sign of the steps multinational companies are beginning to consider to protect their operations from growing geopolitical friction, drugmaker AstraZeneca (AZN) has drawn up plans to spin off its China business and list it separately in Hong Kong, according to the Financial Times. AstraZeneca declined to comment.

    Earlier this month, Sequoia Capital, the Silicon Valley venture capital group, said it would separate its China investments into an independent unit.

    On Tuesday, the European Commission will unveil measures — possibly including screening of outbound investments and export controls — to keep prized EU technology from countries such as China, Reuters reported.

    — Laura He in Hong Kong contributed to this article.


  • OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data | CNN Business




    CNN
     — 

    OpenAI, the company behind the viral ChatGPT tool, has been hit with a lawsuit alleging the company stole and misappropriated vast swaths of peoples’ data from the internet to train its AI tools.

    The proposed class action lawsuit, filed Wednesday in a California federal court, claims that OpenAI secretly scraped “massive amounts of personal data from the internet,” according to the complaint. The nearly 160-page complaint alleges that this personal data, including “essentially every piece of data exchanged on the internet it could take,” was also seized by the company without notice, consent or “just compensation.”

    Moreover, this data scraping occurred at an “unprecedented scale,” the suit claims.

    OpenAI did not immediately respond to CNN’s request for comment Wednesday. Microsoft, a major investor in OpenAI, was also named as a defendant in the suit and did not immediately respond to a request for comment.

    “By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable – but unacceptable by any measure of responsible data protection and use,” Timothy K. Giordano, a partner at Clarkson, the law firm behind the suit, said in a statement to CNN Wednesday.

    The complaint also claims that OpenAI products “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

    The lawsuit seeks injunctive relief in the form of a temporary freeze on further commercial use of OpenAI’s products. It also seeks payments of “data dividends” as financial compensation to people whose information was used to develop and train OpenAI’s tools.

    OpenAI publicly launched ChatGPT late last year, and the tool immediately went viral for its ability to generate compelling, human-sounding responses to user prompts. The success of ChatGPT spurred an apparent AI arms race in the tech world, as companies big and small are now racing to develop and deploy AI tools into as many products as possible.


  • Twitter isn’t letting users view the site without logging in | CNN Business



    New York
    CNN
     — 

    Twitter appears to be restricting access to its platform for anyone not logged into an account.

    People without a Twitter account or who weren’t logged in used to be able to scroll the platform’s homepage and view public accounts and tweets. But as of this week, when such a user opens the platform they are met with a screen prompting them to sign up or sign in to Twitter.

    Internet users began noticing the change late this week, and on Friday, multiple CNN reporters were unable to access Twitter without logging in.

    It was not immediately clear whether the change was an intentional policy update or a glitch, both of which have been common at Twitter since Musk took over the platform. Twitter did not respond to a request for comment.

    The change comes as billionaire owner Elon Musk attempts to revamp Twitter’s business following months of challenges since his takeover late last year — now with the help of new CEO Linda Yaccarino.

    Twitter’s leadership is urging advertisers to return to the platform after many fled over concerns about increased hate speech, layoffs and general questions about the company’s direction. Musk has also sought to grow subscription revenue by offering a blue verification checkmark for users who sign up for its Twitter Blue service.

    The restriction on public access to Twitter could be an effort to grow the platform’s user base, which has always been significantly smaller than social media rivals like Facebook and Instagram.


  • Meta cut election teams months before Threads launch, raising concerns for 2024 | CNN Business





    CNN — 

    Meta has made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN, raising concerns ahead of the pivotal 2024 elections in the US and around the world.

    Several members of the team that countered mis- and disinformation in the 2022 US midterms were laid off last fall and this spring, a person familiar with the matter said. The staffers are part of a global team that works on Meta’s efforts to counter disinformation campaigns seeking to undermine confidence in or sow confusion around elections.

    The news comes as Meta, the parent company of Facebook and Instagram, is celebrating the unparalleled success of its new Threads platform, surpassing 100 million users just five days after launch and opening a potential new avenue for bad actors.

    A Meta spokesperson did not specify, when asked, how many staffers had been cut from its teams working on elections. In a statement to CNN on Monday night, the spokesperson said, “Protecting the US 2024 elections is one of our top priorities, and our integrity efforts continue to lead the industry.”

    The spokesperson did not answer CNN questions about what additional resources had been deployed to monitor and moderate its new platform. Instead, Meta said the social media giant had invested $16 billion in technology and teams since 2016 to protect its users.

    But the decision to lay off staffers ahead of 2024, when elections will not only take place in the United States but also in Taiwan, Ukraine, India and elsewhere, has raised concerns among those with direct knowledge of Meta’s election integrity work.

    The disparate nature of Meta’s work on elections makes it difficult for even people inside the company to say specifically how many people are part of the effort. One group of relevant employees hit harder by the layoffs were “content review” specialists who manually review election-related posts that may violate Meta’s terms of service, a person familiar with the cuts told CNN.

    Meta is trying to offset those cuts by more proactively detecting accounts that spread false election-related information, said the person, who spoke on the condition of anonymity because they were not authorized to speak to the press.

    For years, the social media giant has invested heavily in teams of personnel to root out sophisticated and coordinated networks of fake accounts. That “coordinated inauthentic behavior,” as Meta calls it, began in the lead-up to the 2016 election, when an infamous Russian government-linked troll operation ran amok on Facebook.

    The team tasked with combating the influence campaigns – which includes former US government and intelligence officials – has been generally seen as the most robust in the social media industry. The company has published quarterly reports in recent years that expose governments and other entities found to have been operating covert campaigns pushing disinformation on Meta’s platforms.

    Those teams investigating disinformation campaigns now must further prioritize which campaigns and countries to focus on, another person familiar with the situation said, a trade-off that could result in some deceptive efforts going unnoticed.

    The person emphasized that Meta still has a dedicated team of professionals working on these issues, many of whom are widely respected in the cyber and information security communities.

    But while artificial intelligence and other automated systems can help detect some of these efforts, unearthing sophisticated disinformation networks is still a “very manual process” that involves intense scrutiny from expert staff, another person with direct knowledge of Meta’s counter disinformation efforts told CNN.

    The person said they feared Meta was backsliding on progress it had made by learning from past mistakes. “Lessons that were learned at great costs,” they said, citing the company’s 2018 admission that its platforms were used to incite violence in Myanmar.

    In addition to its in-house team, Meta and other social media companies rely on tips from academics and other researchers who specialize in monitoring covert disinformation networks.

    Darren Linvill, a professor at Clemson University’s Media Forensics Hub, said he has sent the company valuable tips in recent months, but Meta’s response time has slowed significantly.

    Linvill, who has a long track record of successfully identifying covert online accounts, including helping to unearth a Russian election-meddling effort in Africa in 2020, said that Meta recently removed a network of Russian-language accounts that were posting both pro- and anti-Ukraine content on Facebook and Instagram.

    “They were trying to stoke anger on both sides of the debates,” he said.

    Launched last Thursday, Threads has become an instant success with celebrities, politicians, and journalists flocking to the platform.

    The new Twitter-style app is tied to users’ existing Instagram accounts, rather than being linked directly to Facebook. Currently, Threads shares the same community standards as Instagram, but the platforms differ on issues relating to Meta’s methods to combat disinformation.

    Meta also applies labels to state-controlled accounts on Facebook and Instagram, such as Russia’s Sputnik news agency and China’s CCTV. However, these labels do not appear on state-controlled accounts on Threads.

    The launch of Threads even as Meta trims its disinformation-focused personnel comes at a turbulent and transformative time for those tasked with writing and implementing rules on social media platforms.

    Elon Musk, the billionaire who bought Twitter last year, has all but torn up that platform’s rule book and gutted the team that worked on implementing policies designed to combat disinformation efforts.

    Last month, YouTube, which has also made job cuts, announced it would allow videos that feature the false claim the 2020 US presidential election was stolen, a reversal of its previous policy.

    The rule reversals come as the Republican-controlled House of Representatives investigates interactions between technology companies and the federal government.

    Last week, a federal judge in Louisiana ordered some Biden administration agencies and top officials not to communicate with social media companies about certain content, handing a win to GOP states in a lawsuit accusing the government of going too far in its effort to combat Covid-19 disinformation.

    The restrictions and the scrutiny could give cover to social media companies that may want to pull back on some of their platforms’ rules around election integrity, said Katie Harbath, a former Facebook official who helped lead the company’s global election efforts until 2021.

    “I can [almost] hear [Meta Global Affairs President] Nick Clegg saying that ‘we’re going to be cautious of what we do, because we wouldn’t want to run afoul of the law,’” Harbath said.


  • Elon Musk rebrands Twitter as X | CNN Business




    New York CNN — 

    In a radical rebranding, Twitter owner Elon Musk has replaced Twitter’s iconic bird logo with X.

    Musk made the shock announcement of his plans early Sunday. By Monday morning US time, he tweeted that X.com now points to Twitter.com.

    “Interim X logo goes live later today,” he wrote, shortly before sharing a photo of Twitter’s headquarters lit up by a giant new X.

    The Twitter website now features the same logo, while the familiar blue bird is gone.

    Previously, Musk said he was bidding “adieu to the twitter brand and, gradually, all the birds.”

    Twitter (TWTR), founded in 2006, has used its vivid, globally recognized blue bird emblem for more than a decade.

    The renaming could be seen as something of a brand overhaul “Hail Mary” for the company: Musk in recent months has repeatedly warned that Twitter, facing steep losses in ad revenue, was on the edge of bankruptcy.

    Increasing the pressure, earlier this month rival social media platform Threads launched from Facebook (FB) parent Meta. It surpassed 100 million user sign-ups in its first week.

    Twitter had 238 million active users prior to being taken private by Musk in October 2022.

    One of the world’s richest men, Musk was once best known for his innovative efforts through companies SpaceX and Tesla (TSLA) to launch rockets and build electric cars.

    Now, many of the headlines he makes are for his eccentric remarks on his personal Twitter account – often sharing conspiracy theories and getting into public spats on the social media platform.

    Musk overhauled the site after acquiring it for $44 billion in late October, then followed with mass layoffs, disputes over millions of dollars allegedly owed in severance and Musk’s note to employees that remaining at the company would mean “working long hours at high intensity.” He wrote: “Only exceptional performance will constitute a passing grade.”

    The upheaval prompted organizations, including the Anti-Defamation League, Free Press and GLAAD, to pressure brands to rethink advertising on Twitter.

    The groups pointed to the mass layoffs as a key factor in their thinking, citing fears that Musk’s cuts would make Twitter’s election-integrity policies effectively unenforceable, even if they technically remain active.

    Musk also began overseeing controversial policy changes which led to frequent service disruptions at Twitter and upended his own reputation in the process.

    In June, Musk named Linda Yaccarino, a former NBCUniversal marketing executive, CEO of the company.

    She commented on the name change on Twitter Sunday afternoon: “It’s an exceptionally rare thing – in life or in business – that you get a second chance to make another big impression. Twitter made one massive impression and changed the way we communicate. Now, X will go further, transforming the global town square.”

    As the new venture begins, it faces challenges. Musk recently disclosed that the platform still has a negative cash flow due to a 50% drop in advertising revenue and heavy debt loads.

    Criticizing the exit, or pause, of such Twitter advertisers as General Mills (GIS), Macy’s (M) and some car companies that compete with Tesla, Musk has called himself a “free speech absolutist” and said he wanted to buy Twitter to bolster users’ ability to speak freely on the platform.

    Musk explained his approach to free speech by saying: “Is someone you don’t like allowed to say something you don’t like? And if that is the case, then we have free speech.”

    He added that Twitter would “be very reluctant to delete things” and that the platform would aim to allow all legal speech. Many users have worried that could mean a rise in hate speech.

    Meanwhile, the initial frenzy around rival Threads appears to have come back down to earth, especially as it has been plagued with spam and lacks several user-friendly features that Twitter, now X, offers.

    Adam Mosseri, who is overseeing the Threads launch for Meta, has hinted at plans to add features such as a desktop version of the app, a feed of only accounts a user follows and an edit button.

    Its ability to draw advertising support is, as yet, unproven.


  • Twitter says portions of source code leaked online | CNN Business





    CNN — 

    Twitter said parts of its proprietary code were posted online and had been exposed until Friday, when the company had the material removed from the web and filed for a court order to hunt down the source of the leak.

    The leak saw excerpts of Twitter’s source code — the programming that powers the Twitter platform and its internal tools — posted to the online software repository GitHub, according to a court filing Friday by a Twitter attorney. The files were posted by a pseudonymous GitHub user, identified only by the handle FreeSpeechEnthusiast. The account was created on Jan. 3 and does not appear to have posted any other material besides the Twitter code.

    The code leak represents the latest mishap for Twitter as CEO Elon Musk has sought to reverse a sharp decline in revenues through substantial layoffs and other cost-cutting measures that some experts had already said risked making the platform less safe. Leaked source code can not only provide insight into how a company designs its product but can also give criminals the chance to find or exploit security flaws and vulnerabilities.

    Twitter has launched an effort to identify the person or group behind the FreeSpeechEnthusiast GitHub account, as well as anyone who may have interacted with the leaked code. On Friday, Twitter filed for a subpoena at the US District Court for the Northern District of California, which Twitter hopes will compel GitHub to hand over IP addresses, contact information, and access logs associated with the incident.

    “The purpose for which Twitter’s DMCA Subpoena is sought is to obtain the identity of an alleged infringer or infringers, and such information will only be used for the purpose of protecting Twitter’s rights,” Twitter wrote in its filing to the court.

    GitHub removed the content on Friday after Twitter submitted a copyright claim to the company. GitHub declined to comment on the matter but said it publicly posts all copyright takedown requests and referred CNN to Twitter’s request. Twitter, which has cut much of its public relations team under Musk, automatically responded to a request for comment with an email containing a poop emoji.

    The leak was first reported by The New York Times.

    The leak comes as Musk has sought to place more of his own imprint on the social media platform he purchased last year. The acquisition prompted a wave of advertisers to flee the platform over fears the deal would lead to a rise in hate speech and an increase in reputational risks for brands. Musk has blamed the advertiser revolt for steep losses at the company, and has aggressively pushed the company’s subscription service, Twitter Blue, as an alternative revenue stream. He has also said Twitter will charge fees for other software applications to access Twitter’s platform.

    On Saturday, reports on an internal memo by Musk outlining employee stock awards suggested that Twitter was valued at about $20 billion, or less than half of the $44 billion Musk paid for the company. (CNN has not independently confirmed the memo’s existence or its contents.) In the memo, Musk reportedly defended the changes he has made at the company and claimed that Twitter’s valuation could someday exceed $250 billion.

    The same day, Musk tweeted that prior to the changes he made, Twitter only had $1 billion in cash, which he said represented about four months’ worth of expenses and an “extremely dire situation.” But, he added, things are looking up.

    “Now that advertisers are returning, it looks like we will break even in Q2,” he said.


  • Arkansas sues TikTok, ByteDance and Meta over mental health claims | CNN Business




    Washington CNN — 

    The state of Arkansas has sued TikTok, its parent ByteDance, and Facebook-parent Meta over claims the companies’ products are harmful to users, in the latest effort by public officials to take social media companies to court over mental-health and privacy concerns.

    All three lawsuits claim the companies have violated the state’s Deceptive Trade Practices Act, and seek millions, if not billions, in potential fines. The suits were filed in Arkansas state court.

    The complaints come amid mounting pressure in Washington on TikTok for its ties to China and as states have grown more aggressive in suing tech companies broadly, particularly on mental health claims. Suits by school districts or county officials in California, Florida, New Jersey, Pennsylvania and Washington state have targeted multiple social media platforms over addiction allegations.

    The suit against Meta particularly zeroes in on the company’s impact on young users’ mental health, alleging that Meta’s implementation of like buttons, photo tagging, an unending news feed and other features is addictive and “intended to manipulate users’ brains by triggering the release of dopamine.”

    In a statement, Meta’s global head of safety, Antigone Davis, said the company has invested in “technology that finds and removes content related to suicide, self-injury or eating disorders before anyone reports it to us.”

    “We want to reassure every parent that we have their interests at heart in the work we’re doing to provide teens with safe, supportive experiences online,” Davis said in the statement. “These are complex issues, but we will continue working with parents, experts and regulators such as the state attorneys general to develop new tools, features and policies that meet the needs of teens and their families.”

    The remaining two suits, both naming ByteDance and TikTok as defendants, target TikTok’s alleged shortcomings in content moderation and also reiterate claims about TikTok’s alleged threat to US national security.

    The first suit alleges that TikTok has misled users by identifying its app as suitable for teens on app stores because of the “abundant” presence of content showing profanity, substance use and nudity. The suit further alleges that TikTok’s Chinese sister app, Douyin, does not make such content available within China.

    “TikTok poses known risks to young teens that TikTok’s parent company itself finds inappropriate for Chinese users who are the same age,” the complaint said. “Yet TikTok pushes salacious and other mature content to all young U.S. users age 13 and up.”

    The second suit against ByteDance and TikTok accuses the companies of making misleading statements about the reach of Chinese government officials and their purported inability to access TikTok user data. TikTok has migrated US user data to servers operated by the American tech giant Oracle and has established organizational controls intended to prevent unauthorized data access. But, the suit alleges, that does not mean the data is necessarily protected.

    “Neither TikTok’s data storage practices, nor its data security practices, negate the applicability of Chinese law to that data or to the individuals and entities who are subject to Chinese law and have access to that data, or the risk of access by the Chinese Government or Communist Party,” the complaint said.

    The suit also claims TikTok has misrepresented its approach to privacy and security by omitting the potential risks of Chinese government access from its privacy policies and in its statements to app store operators.

    TikTok and ByteDance didn’t immediately respond to a request for comment.

    In a statement announcing the lawsuits, Arkansas Gov. Sarah Huckabee Sanders said the suits reflect a “failed status quo.”

    “We have to hold Big Tech companies accountable for pushing addictive platforms on our kids and exposing them to a world of inappropriate, damaging content,” Sanders said. “These actions are a long time coming. We have watched over the past decade as one social media company after another has exploited our kids for profit and escaped government oversight.”


  • Pentagon investigating alleged classified documents circulating on social media of US and NATO intelligence on Ukraine | CNN Politics




    Washington CNN — 

    The Pentagon is investigating what appear to be screenshots of classified US and NATO military information about Ukraine circulating on social media, a Pentagon official told CNN.

    CNN has reviewed some of the images circulating on Twitter and Telegram but is unable to verify if they are authentic or have been doctored. US officials say the documents are real slides, part of a larger daily intelligence deck produced by the Pentagon about the war, but it appears the documents have been edited in some places.

    Pentagon deputy press secretary Sabrina Singh would not weigh in on the documents’ legitimacy but said in a statement that the Defense Department is “aware of the reports of social media posts, and the Department is reviewing the matter.”

    Mykhailo Podolyak, the adviser to the head of the Office of the President of Ukraine, said on his Telegram channel he believes the Russians are behind the purported leak. Podolyak said the documents that were disseminated are inauthentic, have “nothing to do with Ukraine’s real plans” and are based on “a large amount of fictitious information.”

    The emergence of the documents, whether genuine or not, has heightened focus on when the planned Ukrainian counteroffensive will begin and what, if anything, either side knows about the other’s preparations for it.

    One image that has been circulating on Russian Telegram channels and was reviewed by CNN is a photo of a hard copy of a document titled “US, Allied & Partner UAF Combat Power Build.” The document, which is from February and marked as secret, lists the amounts of certain Western weapons systems that Ukraine currently has on hand, estimated delivery of additional systems and the training Ukraine has or is expected to complete on the systems.

    Another is titled “Russia/Ukraine Joint Staff J3/4/5 Daily Update (D+370)” and is listed as secret. J3 refers to the operations directorate of the US military’s joint staff, J4 deals with logistics and engineering, and J5 proposes strategies, plans and policy recommendations. “D+370” refers to the date the document was produced: 370 days after the first day of the Russian invasion.

    A third document is a map, listed as top secret, that shows the status of the conflict as of March 1. The map shows Russian and Ukrainian battalion locations and sizes, as well as total assessed losses on both sides. The casualty numbers on this document are what officials believe was doctored – the Russian losses are actually far higher than the “16,000-17,500 killed in action” listed on the document, officials said.

    The document also says that 61,000-71,500 Ukrainians have been killed in action, a number that officials said also appeared edited to be higher than actual Pentagon estimates.

    A fourth document is a weather projection from February, listed as secret, that assesses where the ground may freeze in Ukraine in a way that would be favorable for vehicle maneuver.

    The New York Times, which first disclosed the Pentagon investigation, reported that some of the images circulating online describe intelligence that could be useful to Russia, such as how quickly the Ukrainians are expending munitions used in US-provided rocket-systems.

    Podolyak called the documents “a bluff, dust in your eyes” and said that “if Russia really did receive real scenario preparations, it would hardly make them public.”

    “Russia is looking for any way to seize the information initiative, to try to influence the scenario plans for Ukraine’s counteroffensive,” he said. “To raise doubts, compromise previous ideas and frighten with their ‘awareness.’ But these are just standard elements of the Russian intelligence’s operational game and nothing more. It has nothing to do with Ukraine’s real plans.”

    Podolyak added that Russian troops “will get acquainted” with Ukraine’s real counteroffensive plans “very soon.”

    Asked about the images circulating on Twitter and Telegram, Kremlin spokesperson Dmitry Peskov told CNN in a statement that “we don’t have the slightest doubt about direct or indirect involvement of the United States and NATO in the conflict between Russia and Ukraine.”

    “This level of involvement is rising, is rising gradually,” he said. “We keep our eye on this process. Well, of course, it makes the whole story more complicated, but it cannot influence the final outcome of the special operation.”

    This story has been updated with additional details.
