ReportWire

Tag: iab-social networking

  • New York MTA resumes transit alerts on Twitter | CNN Business




    CNN —

    New York’s Metropolitan Transportation Authority said it would resume posting automated transit alerts to Twitter on Thursday after the social media company backtracked on a plan to charge public service accounts for access to the platform.

    In a statement Thursday, MTA Acting Chief Customer Officer Shanifah Rieara said Twitter had tried to charge the MTA more than $500,000 a year for access to its platform, but that the MTA refused.

    “We’re glad that Twitter has committed to offering free API access for public service providers,” the MTA tweeted, referring to the software interface that enables third parties to create automated posts on Twitter.

    In another tweet, it added: “We know that customers missed us, so starting today, we’ll resume posting service alerts on @NYCTSubway, @NYCTBus, @LIRR, and @MetroNorth.”

    In recent weeks, Twitter has sought to charge businesses for the ability to access its platform. Its paid plans cost as much as $2.5 million a year for top-tier access. The paywall’s introduction in March prompted widespread warnings by public services of possible disruptions to weather and transit alerts.

    Amid the outcry, Twitter changed course on Tuesday and said that verified government accounts would once again be able to post automated tweets for free.


  • ‘Verified’ Twitter accounts share fake image of ‘explosion’ near Pentagon, causing confusion | CNN Business




    CNN —

    A fake image purporting to show an explosion near the Pentagon was shared by multiple verified Twitter accounts on Monday, causing confusion and leading to a brief dip in the stock market. Local officials later confirmed no such incident had occurred.

    The image, which bears all the hallmarks of being generated by artificial intelligence, was shared by numerous verified accounts with blue check marks, including one that falsely claimed it was associated with Bloomberg News.

    “Large explosion near the Pentagon complex in Washington DC. – initial report,” the account posted, along with an image purporting to show black smoke rising near a large building.

    The account has since been suspended by Twitter. It was unclear who was behind the account or where the image originated. A spokesperson for Bloomberg News said the account is not affiliated with the news organization.

    Under owner Elon Musk, Twitter has allowed anyone to obtain a verified account in exchange for a monthly payment. As a result, Twitter verification is no longer an indicator that an account represents who it claims to represent.

    Twitter did not respond to a request for comment.

    The false reports of the explosion also made their way to air on a major Indian television network. Republic TV reported that an explosion had taken place, showing the fake image on its air and citing reports from the Russian news outlet RT. It later retracted the report when it became clear the incident had not taken place.

    “Republic had aired news of a possible explosion near the Pentagon citing a post & picture tweeted by RT,” the outlet later posted on its Twitter account. “RT has deleted the post and Republic has pulled back the newsbreak.”

    In a statement Tuesday, the RT press office said, “As with fast-paced news verification, we made the public aware of reports circulating and once provenance and veracity were ascertained, we took appropriate steps to correct the reporting.”

    In a post on the Russian social media platform VKontakte Tuesday, RT tried to make light of its apparent error.

    “Is the Pentagon on fire? Look, there’s a picture and everything. It’s not real, it’s just an AI generated image. Still, this picture managed to fool several major news outlets full of clever and attractive people, allegedly,” a post from RT read.

    In the moments after the image began circulating on Twitter, the US stock market took a noticeable dip. The Dow Jones Industrial Average fell about 80 points between 10:06 a.m. and 10:10 a.m., fully recovering by 10:13 a.m. Similarly, the broader S&P 500 went from up 0.02% at 10:06 a.m. to down 0.15% at 10:09 a.m. By 10:11 a.m., the index was positive again.

    The building in the image does not closely resemble the Pentagon and, according to experts, shows signs it may have been created using AI.

    “This image shows typical signs of being AI-synthesized: there are structural mistakes on the building and fence that you would not see if, for example, someone added smoke to an existing photo,” Hany Farid, a professor at the University of California, Berkeley, and a digital forensics expert, told CNN.

    The fire department in Arlington, Virginia, later responded in a tweet, stating that it and the Pentagon Force Protection Agency were “aware of a social media report circulating online about an explosion near the Pentagon. There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”

    CNN’s David Goldman contributed reporting.


  • First on CNN: Pornhub asks users and Big Tech for help as states adopt age verification laws | CNN Business



    Washington CNN —

    In the two-minute video, adult performer Cherie Deville stares into the camera and intones soberly to viewers, for the second time in a month, that policymakers are coming for their porn.

    “Click the button below to contact your representatives before it is too late,” Deville pleads.

    The call-to-action video, launching Wednesday in multiple states, comes from Pornhub, which last month withdrew from Utah over a new law that requires adult sites to verify their users’ ages and holds them liable for serving their content to minors. Now, as similar legislation is set to take effect next month in Arkansas, Mississippi and Virginia, Pornhub is making a last-ditch effort to galvanize users there in opposition.

    It’s unclear how much Pornhub expects to achieve, as the laws have already been passed and signed. A company spokesperson told CNN it’s “certainly not our goal” to shut down the site in the three states as it did in Utah but hinted at the possibility, saying that “if necessary, we will share next steps in the coming weeks.”

    But the video campaign is only part of a broader unfolding strategy by one of the internet’s highest-profile distributors of adult material.

    The video’s release coincides with a previously unreported effort by Pornhub — and its private equity owners, Ethical Capital Partners (ECP) — to convince the world’s largest tech companies to intervene in the wider debate over age restrictions for digital porn and social media.

    In recent weeks, ECP has lobbied Apple, Google and Microsoft to jointly develop a technological standard that might turn a user’s electronic device into the proof of age necessary to access restricted online content, according to Solomon Friedman, a partner at ECP.

    One possible version of the idea, Friedman told CNN, would be for the tech companies to securely store a person’s age information on a device and for the operating system to provide websites requesting age verification with a yes-or-no answer on the owner’s behalf — allowing sites to block underage users without ever handling anyone’s personal information.

    “We are willing to commit whatever resources are required to work proactively with those companies, with other technical service providers and as well with government,” Friedman said.

    Pornhub’s simultaneous appeals to users and to Big Tech highlight the challenging position the company now finds itself in amid a wave of state legislation. Under many of these laws, adult sites are required to implement “reasonable age verification methods” that could include users submitting pictures of their photo ID, facial scans or other information, either to third-party companies or to the sites themselves.

    Minimum age requirements have emerged as a favored policy tool in statehouses across the country as lawmakers have increasingly become attuned to the potential mental health harms of unregulated social media use. But Pornhub, along with civil liberties and digital rights groups, has broadly warned of the potential pitfalls of age-verification rules.

    Those risks include the infringement of Americans’ First Amendment rights to access legal speech; the leakage of personal information belonging to underage and adult internet users alike; and the loss of online anonymity that safety experts say is crucial for shielding vulnerable individuals.

    Pornhub’s outreach to Big Tech is intended to convince the companies whose operating systems power the world’s smartphones, tablets and computers that their technology is central to the future of online identity management and to draw their political might into a global policy battle that could reshape the internet for millions.

    But it is far from clear the effort is succeeding. Friedman declined to say how, or even if, the companies have responded to Pornhub’s communications. Microsoft declined to comment for this story; Apple and Google didn’t respond to requests for comment.

    Friedman characterized the discussions as being in “early stages,” though his other remarks implied the talks may be largely one-sided.

    “We are willing and ready to work with them proactively to determine best solutions and to lend any technical expertise that we possibly can, whether it be implementation or pilot projects or assistance in any way,” Friedman told CNN. “We are hoping that this dialogue bears fruit and age verification will be addressed once and for all.”

    The adult industry has famously led the charge on technological innovation before. Porn played a decisive role in the battle between the VHS and Betamax videotape platforms, facilitated the rise of online credit card transactions and helped promote streaming video technology writ large.

    Now, Pornhub’s fight could prove to be a bellwether for the growing push to enforce age verification for social media. As with the battle over adult material, debates over how to keep children and teens away from social media have raised substantial questions about user privacy and how effective age restrictions may be when determined kids inevitably try to circumvent the rules.

    The tech industry, for its part, has been making its own strides in digital identity services. In 2021, for example, Apple announced support for adding driver’s licenses from eight states to Apple Wallet. In December, Google announced it was beta testing a similar feature for Android.

    Those services, however, are designed for in-person ID checks such as at travel checkpoints or liquor stores, technology experts said, and are not set up to perform age or identity verification remotely or virtually.

    Josh Golin, executive director of Fairplay, a consumer advocacy group focused on children’s technology use, described calls for device-based age verification as an “intriguing idea” that might ease burdens on websites and internet users. But, he argued, there are less invasive ways of determining a website visitor’s age.

    “It is our position that rather than requiring new, stringent forms of age verification, that we should start by having the platforms use the data they’re already collecting to do age estimation,” Golin said, pointing to how TikTok, for example, reportedly uses behavioral cues and activity algorithms to guess whether a user may be underage.

    Any device-based approach to age verification would immediately run into issues in most households with children, where no device is ever solely used by one person or exclusively by adults, said India McKinney, director of federal affairs for the Electronic Frontier Foundation, a digital rights organization.

    “You would have to assume that children and teens weren’t borrowing their parents’ phones,” McKinney said. “And that’s sharing on purpose. You don’t have to be too sophisticated to think about teens stealing their parent’s device to get around the age-gating.”

    Meanwhile, entrusting large tech companies to be the custodians of even more personal information, and enabling them to be the effective arbiters of what internet users can see online, brings its own challenges, said Udbhav Tiwari, head of global product policy at Mozilla, maker of the popular Firefox web browser.

    Device-based age verification, Tiwari said, could have “very serious privacy connotations, because you now have the largest tech companies in the world having your government ID and all the information present in them linked to individual devices. We’ve seen Twitter use phone numbers they collected for account security for targeting ads in the past, which led to them being subjected to FTC fines.”

    Last year, Twitter agreed to pay $150 million to resolve those Federal Trade Commission allegations.

    But Pornhub argues that the alternative is a world that’s even less safe, where users seeking age-restricted content may simply go to sites without age-gates or other checks.

    “Giving your ID card every time you want to visit an adult platform is not the most effective solution,” Deville says in Wednesday’s video. “In fact, it will put children and your privacy at risk.”


  • Snapchat+ gains 4 million paying subscribers in its first year | CNN Business



    New York CNN —

    Snap said Thursday that it has garnered more than 4 million paying customers for its subscription service Snapchat+.

    The news comes on the one-year anniversary of Snap launching the service on its flagship platform, Snapchat, and shows how it is finding some early success in getting users to shell out cash for access to premium features. The service costs $3.99 a month.

    The tally of paying subscribers disclosed by Snap on Thursday still represents a small fraction of the 750 million monthly active users that the platform boasted about hitting at its Investor Day event earlier this year.

    Snapchat+ offers access to “exclusive, experimental and pre-release features,” according to a blog post from Snap Thursday. As of Thursday, subscribers have access to more than 20 features, “including custom app themes, unique app icons, and the ability to pin your #1 BFF,” the company added.

    The announcement comes as a handful of other social media platforms are similarly trying to find new ways to get users to pay for services.

    Under the new ownership of Elon Musk, Twitter unveiled an $8-per-month subscription service that offered users the once-coveted blue check mark on the platform, as well as additional features such as seeing fewer ads and having their tweets prioritized in replies, mentions and search. The service, dubbed Twitter Blue, had an estimated 550,000 paying subscribers as of late April. Meta, the parent company of Facebook and Instagram, similarly began rolling out a paid service for users called Meta Verified earlier this year with a price tag of $11.99 per month.

    After taking a battering last year, shares of Snap have climbed roughly 30% in 2023. Still, the stock is down about 86% from its all-time high in late 2021.


  • Laid-off Twitter Africa team ‘ghosted’ without severance pay or benefits, former employees say | CNN Business



    Nairobi, Kenya CNN —

    Former employees of Twitter Africa who were laid off as part of a global cost-cutting measure after Elon Musk’s acquisition have not received any severance pay more than seven months after leaving the company, several sources told CNN.

    In late May, the former employees, who were based in the Ghanaian capital Accra, accepted Twitter’s offer to pay them three months’ worth of severance, the cost of repatriating foreign staff and legal expenses incurred during negotiations with the company, but they have not received the money or any further communication, the sources said.

    “They literally ghosted us,” one former Twitter Africa employee told CNN.

    “Although Twitter has eventually settled former staff in other locations, Africa staff have still been left in the lurch despite us eventually agreeing to specific negotiated terms.”

    The former employees say they reluctantly agreed to the severance package without benefits, even though it was less than what colleagues elsewhere received.

    “Twitter was non-responsive until we agreed to the three months because we were all so stressed and exhausted and tired of the uncertainty, reluctant to take on the extra burdens of a court case so we felt we had no choice but to settle,” another former employee told CNN.

    The former employees spoke to CNN on condition of anonymity because they said they were asked to sign non-disclosure agreements as part of their exit terms.

    According to Carla Olympio, an attorney representing the former employees, the last communication from Twitter or its lawyers was in May, shortly after the settlement was agreed.

    CNN reached out to Twitter for comment on the status of the severance package for the former employees in the Ghana office but received an automated response – a poop emoji. It’s unclear whether Twitter still has a media relations department.

    In March, Musk tweeted that Twitter would respond to all press inquiries with the poop emoji. He completed a deal to buy the social media platform in October.

    CNN also asked Ghana’s Ministry of Employment and Labor Relations for comment. A spokesperson said they are investigating the claims.

    Whether Ghanaian authorities can compel Twitter to comply with the settlement is uncertain. The former employees and their attorney say the offer was never finalized.

    The dozen or so team members were laid off just four days after the social network opened a physical office in Accra last November.

    Some of them said they had moved to Ghana from other African nations, and depended on their jobs at Twitter to support their legal status in the country.

    “Unfortunately, it appears that after having unethically implemented their terminations in violation of their own promises and Ghana’s laws, dragging the negotiation process out for over half a year, now that we have come to the point of almost settlement, there has been complete silence from them for several weeks,” Olympio said.

    Twitter and Musk face multiple lawsuits where plaintiffs are claiming the company has failed to pay former staffers what they are owed.

    Last week, a former US employee filed a proposed class action lawsuit claiming the company didn’t pay the full amount of severance benefits it promised last November prior to mass layoffs.

    The plaintiff said Twitter promised senior employees severance of six months of base pay plus one week for every year of service, in addition to other benefits. Instead, the plaintiff said they received a total of three months of pay, according to the lawsuit. In response to a request for comment on the lawsuit, Twitter sent CNN an automated poop emoji.

    In April, Musk told the BBC more than 6,000 people had been laid off since he completed his acquisition of the company in late October.

    “We’re exploring our options with respect to causes of action against Twitter in various jurisdictions including Ghana,” Olympio told CNN.

    Twitter did not open negotiations with the African team until after CNN reported in November that they had been offered separation terms that differed from those offered to departing staff in Europe and North America.


  • Meta, Microsoft, hundreds more own trademarks to new Twitter name | CNN Business




    Reuters —

    Billionaire Elon Musk’s decision to rebrand Twitter as X could be complicated legally: companies including Meta and Microsoft already have intellectual property rights to the same letter.

    X is so widely used and cited in trademarks that it is a candidate for legal challenges – and the company formerly known as Twitter could face its own issues defending its X brand in the future.

    “There’s a 100% chance that Twitter is going to get sued over this by somebody,” said trademark attorney Josh Gerben, who counted nearly 900 active U.S. trademark registrations that already cover the letter X in a wide range of industries.

    Musk renamed social media network Twitter as X on Monday and unveiled a new logo for the social media platform, a stylized black-and-white version of the letter.

    Owners of trademarks – which protect things like brand names, logos and slogans that identify sources of goods – can claim infringement if other branding would cause consumer confusion. Remedies range from monetary damages to blocking use.

    Microsoft since 2003 has owned an X trademark related to communications about its Xbox video-game system. Meta Platforms – whose Threads platform is a new Twitter rival – owns a federal trademark registered in 2019 covering a blue-and-white letter “X” for fields including software and social media.

    Meta and Microsoft likely would not sue unless they feel threatened that Twitter’s X encroaches on brand equity they built in the letter, Gerben said.

    The three companies did not respond to requests for comment.

    Meta itself drew intellectual property challenges when it changed its name from Facebook. It faces trademark lawsuits filed last year by investment firm Metacapital and virtual-reality company MetaX, and settled another over its new infinity-symbol logo.

    And if Musk succeeds in changing the name, others still could claim ‘X’ for themselves.

    “Given the difficulty in protecting a single letter, especially one as popular commercially as ‘X’, Twitter’s protection is likely to be confined to very similar graphics to their X logo,” said Douglas Masters, a trademark attorney at law firm Loeb & Loeb.

    “The logo does not have much distinctive about it, so the protection will be very narrow.”

    Insider reported earlier that Meta had an X trademark, and lawyer Ed Timberlake tweeted that Microsoft had one as well.


  • Pentagon leak spotlights surprising interplay between gaming and military secrets | CNN Politics




    CNN —

    The recent leak of classified US documents on social media platform Discord seemingly caught many at the Pentagon by surprise. But it wasn’t the first time that a forum popular with online gamers had hosted military secrets, underlining a major challenge for the US national security establishment and platforms alike.

    As recently as January 2023, someone on a forum for fans of the video game War Thunder reportedly published confidential information on an F-16 fighter jet. That followed reports of at least three other occasions since 2021 when War Thunder fans posted documents on British, French and Chinese tanks. These cases – which Axios also reported on in the context of the Discord leaks – typically involved users boasting of their inside knowledge of military equipment and claiming to want to make the game more realistic.

    Gaijin Entertainment, the company that produces War Thunder, took the posts down after forum moderators flagged them.

    The recent leaks on Discord exposed a shortcoming in how the US government alerts platforms that they are hosting sensitive or classified information, according to Discord’s top lawyer.

    There is currently “no structured process” for the government to communicate whether documents posted on social media are classified or even authentic, Clint Smith, Discord’s chief legal officer, said in an April 14 statement that described classified military documents as a “significant, complex challenge” for Discord and other platforms.

    The episodes point to vexing challenges for social media platforms like Discord – where 21-year-old Air National Guardsman Jack Teixeira allegedly began posting classified information in December – and the US military, which has used Discord for recruiting.

    Discord and other platforms face a difficult balancing act in giving young gamers the space to be themselves while also detecting when they post illegal content.

    “A lot of these guys find their social circles in these online gaming spaces, and that can be great,” said Jennifer Golbeck, a professor at the University of Maryland’s College of Information Studies. “But if the culture of the platform shifts to rewarding things that you shouldn’t be doing, it can be hard, if you’re really invested in that social group, to give that up.”

    Teixeira allegedly posted the documents – which included sensitive US intelligence on the war in Ukraine – to a private Discord chat in an attempt to look after his online friends and keep them informed, one member of the chatroom has claimed.

    The Pentagon is trying to tap into online youth culture without it backfiring spectacularly, as it allegedly did with Teixeira.

    An Air Force Gaming program that allows service members to compete in video game leagues to, according to a Pentagon press release, “build morale and mental health resiliency,” has more than 28,000 members. The top of the Air Force Gaming website includes a link to join the program’s Discord channel.

    There were signs that Pentagon officials were growing wary of information young service members might share on Discord even before news of Teixeira’s alleged leak broke.

    “Don’t post anything in Discord that you wouldn’t want seen by the general public,” reads a pamphlet published by US Army Special Operations Command in March.

    That the warning came as classified documents allegedly shared by Teixeira sat on Discord appears to be entirely a coincidence; many US officials appeared unaware of the leak until news of it broke on April 6.

    “Past incidents show how hard it is to stop these leaks,” said Casey Brooks, an Army veteran and video game fan.

    “This is about maturity and how certain people seek value from interpersonal relationships and approval from peers and the competitive nature that gaming group members bond over,” Brooks told CNN.

    Classified or sensitive documents are also a unique problem for content moderators on social media sites.

    “With porn, you can at least have some kind of AI that will give a rough flag at the beginning that this looks vaguely like porn,” said Golbeck, the University of Maryland professor. “But what looks like a classified document? They’re just documents.”

    As social media platforms like Discord grapple with the challenges of detecting sensitive intelligence leaks online, current and former US officials worry that US adversaries like Russia may see an intelligence gathering opportunity.

    “If it’s not already happening, my guess would be the Russians have assessed that digging around in some of these obscure online forums … could bear fruit,” Holden Triplett, a former FBI official who worked at the US embassy in Moscow, told CNN.

    Though there is no evidence that Teixeira was approached by foreign agents, Triplett said a young generation of online gamers might be a ripe target for recruitment.

    “Ego and excitement have always been strong motivations to spy,” said Triplett, who is founder of security consultancy Trenchcoat Advisors. But the group of Discord users that included Teixeira “seemed particularly indifferent to national security concerns,” which is a vulnerability for the US government, Triplett said.


  • UK citizen extradited to US pleads guilty to 2020 Twitter hack | CNN Business




    Reuters —

    A citizen of the United Kingdom who was extradited to New York from Spain last month has pleaded guilty to cyberstalking and computer hacking schemes, including the 2020 hack of the social media site Twitter, the U.S. Justice Department said on Tuesday.

    Joseph James O’Connor, 23, was charged in both North Dakota and New York. The North Dakota case was transferred to the U.S. District Court for the Southern District of New York.

    O’Connor pleaded guilty to charges including conspiring to commit computer intrusions, to commit wire fraud and to commit money laundering.

    O’Connor, who was extradited to the U.S. on April 26, will also forfeit more than $794,000 and pay restitution to victims, prosecutors said. He faces a maximum of 77 years in prison at sentencing on June 23.

    “O’Connor’s criminal activities were flagrant and malicious, and his conduct impacted multiple people’s lives. He harassed, threatened, and extorted his victims, causing substantial emotional harm,” Assistant Attorney General Kenneth Polite said in a statement.

    Prosecutors said the schemes included gaining unauthorized access to social media accounts on Twitter in July 2020 as well as a TikTok account in August 2020. Along with his co-conspirators, O’Connor stole at least $794,000 worth of cryptocurrency.

    The July 2020 Twitter attack hijacked a variety of verified accounts, including those of then-Democratic presidential candidate Joe Biden and Tesla CEO Elon Musk, who now owns Twitter.

    The accounts of former President Barack Obama, reality TV star Kim Kardashian, Bill Gates, Warren Buffett, Benjamin Netanyahu, Jeff Bezos, Michael Bloomberg and Kanye West were also hit.

    The alleged hacker used the accounts to solicit digital currency, prompting Twitter to prevent some verified accounts from publishing messages for several hours until security could be restored.


  • Twitter debuts a mid-tier data access plan, to almost immediate backlash | CNN Business



    Washington CNN —

    Twitter unveiled a new data access tier on Thursday aimed at attracting startups, after its decision to erect a paywall for developers and researchers prompted widespread backlash. But the new tier already has some describing it as “too little, too late.”

    The new paid tier, which the company calls “Pro,” costs $5,000 per month and allows subscribers to retrieve one million tweets a month, the company announced. The offering also allows for the monthly posting of up to 300,000 automated tweets.

    Under owner Elon Musk, Twitter has been racing to find new ways to boost revenue to offset declines from an exodus of advertisers and to help recoup the billions he spent buying the company.

    But the addition of an intermediate tier between Twitter’s Basic and Enterprise tiers reflects pushback from users who have said its plans severely restrict the amount of data that can be accessed or published through Twitter’s application programming interface (API). An enterprise plan starts at $42,000 a month and can cost as much as $210,000 a month.

    Public institutions such as New York’s Metropolitan Transportation Authority have made headlines for pulling their real-time service alerts from Twitter over the paywall. The MTA later returned after Twitter backtracked and said eligible government and public service accounts would continue to be able to post automated tweets for free.

    But hours after its release, even the new Pro tier is being criticized as still unaffordable for many startups and coming too late to save others that have already shut down because of Twitter’s paywall.

    The replies to Twitter’s announcement are filled with complaints that “the jump to 5k is too much,” as one user responded.

    “1.66 cents per tweet… I mean, it’s cheaper to send emails these days, and it costs 1.66 cents for 280 characters?” came another reply.
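    The “1.66 cents per tweet” figure in that reply can be checked directly against the numbers in the article – the Pro tier’s $5,000 monthly price divided by its 300,000-tweet posting cap. A minimal sketch (the constants are the article’s figures; the variable names are mine):

    ```python
    # Per-tweet cost of Twitter's "Pro" API tier, using the figures
    # reported in the article: $5,000 per month for up to 300,000
    # automated tweets per month.
    MONTHLY_PRICE_USD = 5_000
    MONTHLY_TWEET_CAP = 300_000

    cents_per_tweet = MONTHLY_PRICE_USD / MONTHLY_TWEET_CAP * 100
    print(f"{cents_per_tweet:.2f} cents per tweet")  # prints: 1.67 cents per tweet
    ```

    The exact value is 1.666… cents, so the quoted “1.66 cents” truncates rather than rounds.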

    “That’s cool, but you already killed most Twitter apps by now,” another user said. “5K is still too much for most of us. A 1K plan could make sense… but then again it’s too late.”

    Others suggested a $500 per month tier would be “more appropriate.”

    The new paid tier comes amid a report that Twitter has demanded researchers delete the data they had downloaded from the platform from before the paywall went into effect, unless they agree to pay for an enterprise plan.

    Twitter didn’t immediately respond to a request for comment.


  • Anita Dunn, Biden’s brawler-in-chief, looks to 2024 | CNN Politics



    CNN
     — 

    Anita Dunn saw an opportunity with “Dark Brandon.”

    The liberal meme, created by Joe Biden’s most online fans as a play on the right-wing “Let’s Go Brandon” code bashing the president, depicts a grinning Biden with red lasers shooting out of his eyes. After Dunn, Biden’s top messaging and communications adviser, became aware of the meme, she brought it to the president – and they jumped on an opportunity to go on the offensive in the never-ending social media meme wars.

    He’s nodded to Dark Brandon in official speeches, aides have shared the image on social media, and his 2024 campaign is selling $32 T-shirts emblazoned with his online alter ego. While it’s a minor part of his repertoire, it shows how Dunn – a longtime Democratic operative and Biden confidante – has used her influence to engage in the kind of street brawling needed to combat perceptions of the aging president and the challenge ahead as he seeks a second term.

    “It fits well with who she is, which is a f**k-sh*t-up-brawler. It’s not a coincidence that the stuff that came behind Dark Brandon was very much in line with Anita’s way of seeing the world,” Rob Flaherty, the White House’s director of digital strategy, told CNN.

    CNN spoke to more than a dozen current and former White House and administration officials, lawmakers and Democratic strategists – many of whom requested anonymity to speak freely – who paint a picture of Dunn as a deeply loyal aide with a big-picture view of Biden’s strategy and a hand in nearly all aspects of his political life.

    Her instincts for brawling are now directly intertwined with the president’s political fortunes as she adds steering 2024 messaging from the White House to an already expansive role as a presidential adviser. For a president who relies on a very tight circle of trusted advisers, sources say Dunn has emerged as a powerful chief political communicator, a key strategist and someone who will fight on his behalf. She takes on these responsibilities with a combination of deep experience and Biden’s trust.

    While sources both inside and outside the White House say no communicator is better prepared for the moment than Dunn, her messaging task ahead is massive: A high-profile candidate of Biden’s age has never run before, and the president is facing low approval ratings after two years in the White House, which could be a drag on his reelection campaign. With Biden widely expected to face a familiar, but powerful, foe in former President Donald Trump, the mission facing Biden’s advisers is to find a message that can resonate enough with voters to beat Trump again.

    Just a third of Americans (33%) say that Biden winning in 2024 would be a step forward or a triumph for the country, per a May CNN poll conducted by SSRS. Voters also have serious questions about Biden’s age – he’s 80 now, and would be 86 at the end of a potential second term. Recent Washington Post/ABC News polling indicates that nearly two-thirds of respondents feel that Biden lacks the mental sharpness (63%) or is not in good enough physical health (62%) to serve effectively as president.

    Those poll numbers mean that Dunn’s most important task ahead is to help the president communicate to an unconvinced American public why he deserves a second term. And that has given her a White House portfolio that is virtually unparalleled.

    “The circle is small and isn’t exactly expanding anytime soon. That makes voices like Anita’s carry a significant amount of weight,” a Biden adviser said.

    Ostensibly, Dunn’s White House role centers on messaging, political strategy, oversight and crisis communication on the key issues facing the White House.

    “Like the rest of the senior staff, Anita works to act on the strategies and agenda President Biden assigns for her portfolio,” White House spokesman Andrew Bates said.

    She’s in close touch with Cabinet secretaries, members of Congress, outside groups and prospective candidates. A former Hill aide, Dunn is keenly aware of congressional dynamics, but her work inside the Obama White House has shaped her views on how to approach the daily challenges confronting the president.

    “She has this perch where she spans the overall strategic plan for (Biden) and for the White House, and also communicates outward with the political apparatus of the (Democratic National Committee) and the campaign and tries to keep the entire Joe Biden enterprise swimming in the same direction,” a White House aide said.

    She’s also playing an active role in Biden’s reelection campaign strategy, with multiple sources familiar with the dynamic indicating she is spearheading 2024 political messaging from the White House and coordinating with senior campaign staff.

    A former White House senior adviser put it more bluntly: When it comes to 2024, “she’s running everything.”

    Pressed in an interview with CBS News on how she will balance those two roles, Dunn declined to comment, citing the Hatch Act.

    “I’m going to be here at the White House serving the people,” she said. “I’m a White House employee, and I have a government job, and I will continue to do my government job.”

    For an incumbent president running for reelection, there is no commodity more valuable than time, especially time spent in front of the cameras, with an audience of critics looking for signs of fatigue. And with Biden officially jumping into the 2024 race, it’s expected that his current pace of activity will continue for the foreseeable future – using the bully pulpit to highlight key policies at the White House, visiting battleground states to showcase his accomplishments and traveling abroad to meet world leaders. That makes everything he says – and how and where he says it – part of an implicit reelection campaign.

    Dunn – in concert with a tight circle of aides, including White House chief of staff Jeff Zients, deputy chiefs of staff Jen O’Malley Dillon and Bruce Reed, senior adviser Mike Donilon, and counselor to the president Steve Ricchetti – is key to making those decisions and how to communicate them.

    For instance, Dunn was central to a recent decision to frame the president’s age as a sign of wisdom and experience.

    “It’s a legitimate thing to raise the question of age,” Biden told donors at a recent campaign fundraiser. “I hope what I’ve been able to bring to this job, and will continue to bring, is a little bit of wisdom.”

    His explicit use of that framing had Dunn’s fingerprints all over it – part of Bidenworld’s effort to mitigate a weakness by tying it to legislative accomplishments that supporters believe have little precedent over the past several decades.

    Multiple sources told CNN that Dunn has been a decision-maker for campaign issues such as staffing, announcement timing, headquarters location and selecting campaign leadership.

    She’s also recently been involved in strategically elevating the profile of Vice President Kamala Harris, making it clear internally that the West Wing needs to do a better job at bolstering Harris heading into the campaign, a source familiar with the dynamic said.

    Dunn is expected to remain in her White House role but continue to advise Biden on 2024 matters, multiple sources said, with campaign manager and former White House staffer Julie Chavez Rodriguez leading the charge from the campaign side.

    It’s a similar model to how former President Barack Obama’s top advisers coordinated between the West Wing and the campaign, and not unusual.

    “You want a strategist like Anita at the White House. Reelects are about organizing, ground game, targeting digital and ads and messages to particular audiences. It is a game of execution,” said Jennifer Palmieri, a longtime Democratic strategist who served as communications director during the Obama administration after Dunn.

    “The most important messaging that people will judge the president on is the job he is doing at the White House,” Palmieri said.

    Dunn’s deep loyalty and instinct to fight has also raised eyebrows outside the White House. TJ Ducklo, a 2020 Biden campaign aide who resigned from his White House role after privately threatening a reporter weeks into the administration, is expected to play a role in the 2024 campaign, a decision that has been publicly defended in a rare on-the-record statement from Dunn. (That reporter later called for Ducklo’s redemption.)

    “TJ made a mistake, took responsibility for it, and paid a price,” Dunn said in her statement – in her personal capacity – to Politico’s “West Wing Playbook” last month.

    One former senior White House adviser, however, called that an “unforced error” by Dunn. The former adviser asked for anonymity to speak freely without retribution.

    Her allegiance to Ducklo, the adviser said, “leaves the president vulnerable and exposed to unnecessary criticism and charges of hypocrisy.”

    “It’s in direct contradiction to the president’s own values and integrity and the standards that he himself demanded of everybody in the White House,” the former adviser said.

    A current White House aide fired back.

    “The president has values. Taking responsibility when you have done wrong means a lot. So does forgiveness,” that aide said.

    Dunn, 65, is one-half of a Washington power couple at the epicenter of Bidenworld – husband Bob Bauer is the president’s personal attorney and the lead attorney handling the special counsel investigation into classified documents found at Biden’s private office and residence. Yet she has risen entirely on her own – from roles in the Jimmy Carter White House to the Senate, to building public affairs powerhouse SKDK, to the 2008 Obama campaign – propelled by her intense preparation.

    “She made preparation her friend,” said veteran Democratic operative Minyon Moore, who has known Dunn for decades.

    “Every time she walks in a room, she’s probably more prepared than most of her counterparts. And I think that’s how she was able to tackle the business very early, because they knew she had done her homework. They knew she could think through many layers. She was smart as heck. And so, you want a person like Anita in the room,” Moore said.

    Dunn operates largely behind the scenes – actively eschewing Washington’s social scene, social media and most television appearances.

    Her loyalty to Biden was cemented when she was among a small group of advisers working toward a potential 2016 presidential run, Biden wrote in his 2017 memoir, “Promise Me, Dad.” Though he ultimately decided against running, Dunn’s encouragement solidified a strong level of trust, multiple people close to her say.

    Dunn joined Biden’s 2020 campaign as a senior adviser and is widely credited for helping turn the tide of Biden’s political fortunes in that campaign’s Democratic primary after she was tapped to lead the operation following a fourth-place showing in the Iowa caucuses.

    She encouraged a sharper messaging posture from her desk in the center of the campaign “bullpen” workspace.

    When Biden arrived in the White House, Dunn briefly joined the administration as a senior adviser before returning to SKDK in August 2021, the temporary nature of her service allowing her to skirt disclosure of a raft of investments and high-profile clients.

    She continued to advise Biden informally and rejoined the White House in May 2022 in a permanent capacity, a move requiring multimillion-dollar financial disclosures.

    Dunn’s omnipresence in Biden’s orbit has been just as clear during her time outside the White House as within it: she remained a constant presence on conference calls and in visits to the White House.

    “The president trusts her counsel – and there’s good evidence as to why,” one adviser said, pointing to her central role in his path to the presidency, loyalty during Biden’s 2016 deliberations and her work inside the West Wing.

    Over her career, Dunn has developed a reputation as an aggressive messaging tactician with strict discipline.

    She’s led White House messaging efforts on legislative accomplishments, seeking to highlight the legislation’s tangible impacts on real Americans, though some Democrats argue the White House has not done enough to sell those measures to the public.

    “What she tries to do is find the connective tissue,” said a senior administration official, who talks to Dunn regularly. “There’s nobody in government that has a better big picture perspective of what’s going on.”

    Multiple colleagues suggested Dunn can inspire a certain level of anxiety in her subordinates – demanding a high level of results and keeping the receipts – while also being seen as a supportive mentor. At the White House, Dunn is known for leading a weekly meeting known as “Fridays at 5,” a 5 p.m. in-person convening that is met with both eye-rolling (given its timing) and appreciation. It includes the entire communications staff, from interns to press assistants to the highest levels. Dunn will lead shout-outs at the beginning of each meeting, identifying achievements and often spotlighting junior staff.

    “It’s really emblematic of Anita,” the White House aide said. “The fact that she took it upon herself to establish this very expansive view of who’s on the communications team across the entire White House and set a weekly meeting where those people get direct exposure to her, as the senior adviser to the president, is really neat.”

    Dunn’s counsel isn’t only valued inside the White House walls – Democratic Sen. Amy Klobuchar, who ran against Biden for president in 2020, regularly seeks her advice.

    “She’s someone that you feel like you can trust and she’s going to have your back. And I think that’s why she’s been such a trusted adviser to President Biden,” the Minnesota senator told CNN, saying that Dunn has been a key messaging coordinator for her Senate colleagues in advancing Biden’s policies.

    That intense loyalty to the president is ultimately why Dunn has been given such a powerful role inside Biden’s political operation.

    “The people that were there and believed in him when he was counted out hold a unique bond and trust with the president. That relationship, with her obvious expertise, means she’s empowered to do what she needs to do,” a former colleague said.

    “Few people have the experience and discipline to keep their eye on the ball like she does. She’s not distracted, and she knows what messages are going to land, even if the pundits disagree,” the former colleague added.

    This story has been updated with additional details.


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business


    New York
    CNN
     — 

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were interspersed with real images of the pair and shown with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offer the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some who had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled, in addition to its civic integrity policy which prohibits misleading information about electoral processes and its general misinformation policy which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • Meta’s Threads gets a highly requested ‘following feed’ | CNN Business


    New York
    CNN
     — 

    Meta on Tuesday launched a highly anticipated “following feed” option in its Threads app as part of its latest batch of updates that could help the new social platform further chip away at Twitter’s position in the market.

    The option to see a reverse chronological feed of posts from only accounts a user follows had been one of the most requested features since Threads launched earlier this month. On Tuesday, Meta CEO Mark Zuckerberg replied to a post requesting the feature, saying, “Ask and you shall receive.”

    The following feed, one of the central features of the Twitter experience, can be accessed on Threads by double tapping on the app’s home button.

    Meta has been steadily rolling out updates to Threads as it tries to keep users engaged in the new app. Threads had a hugely successful launch, topping 100 million sign-ups in its first week, but engagement has declined somewhat since then.

    Meta rolled out Threads as a barebones app — missing popular features such as direct messages and a robust search function — to take advantage of a weak moment at rival Twitter. Now, Meta executives have acknowledged that they must continue building out the app to keep the momentum going.

    “I’m very optimistic about how the Threads community is coming together,” Meta CEO Mark Zuckerberg said in a post on the platform last week. “Early growth was off the charts, but more importantly 10s of millions of people now come back daily … The focus for the rest of the year is improving the basics and retention.”

    Tuesday’s round of updates also includes automatic translation of posts into a user’s default language, the ability for users to see posts they’ve liked in their settings, the option for private users to batch “approve all” follow requests and buttons to filter the activity feed by various types of interactions, according to the company.

    The changes followed another batch of updates last week, which included a translation button and the option to subscribe and receive notifications from accounts a user doesn’t follow.

    Meta’s ongoing work on Threads comes as the chaos at Twitter continues. Earlier this week, owner Elon Musk began doing away with the platform’s iconic bird branding and replacing it with “X” in hopes of building an “everything” app similar to China’s WeChat.

    As Musk rebrands the app, he could face a different threat from Meta: Facebook’s parent company is one of many businesses that already have intellectual property rights to the letter “X.”


  • Pentagon investigating alleged classified documents circulating on social media of US and NATO intelligence on Ukraine | CNN Politics


    Washington
    CNN
     — 

    The Pentagon is investigating what appear to be screenshots of classified US and NATO military information about Ukraine circulating on social media, a Pentagon official told CNN.

    CNN has reviewed some of the images circulating on Twitter and Telegram but is unable to verify whether they are authentic or have been doctored. US officials say the documents are real slides, part of a larger daily intelligence deck produced by the Pentagon about the war, but that the documents appear to have been edited in some places.

    Pentagon deputy press secretary Sabrina Singh would not weigh in on the documents’ legitimacy but said in a statement that the Defense Department is “aware of the reports of social media posts, and the Department is reviewing the matter.”

    Mykhailo Podolyak, the adviser to the head of the Office of the President of Ukraine, said on his Telegram channel he believes the Russians are behind the purported leak. Podolyak said the documents that were disseminated are inauthentic, have “nothing to do with Ukraine’s real plans” and are based on “a large amount of fictitious information.”

    The emergence of the documents, whether genuine or not, has heightened focus on when the planned Ukrainian counteroffensive will begin and what, if anything, either side knows about the other’s preparations for it.

    One image that has been circulating on Russian Telegram channels and was reviewed by CNN is a photo of a hard copy of a document titled “US, Allied & Partner UAF Combat Power Build.” The document, which is from February and marked as secret, lists the amounts of certain Western weapons systems that Ukraine currently has on hand, estimated delivery of additional systems and the training Ukraine has or is expected to complete on the systems.

    Another is titled “Russia/Ukraine Joint Staff J3/4/5 Daily Update (D+370)” and is listed as secret. J3 refers to the operations directorate of the US military’s joint staff, J4 deals with logistics and engineering, and J5 proposes strategies, plans and policy recommendations. “D+370” refers to the date the document was produced: 370 days after the first day of the Russian invasion.

    A third document is a map, listed as top secret, that shows the status of the conflict as of March 1. The map shows Russian and Ukrainian battalion locations and sizes, as well as total assessed losses on both sides. The casualty numbers on this document are what officials believe were doctored – the Russian losses are actually far higher than the “16,000-17,500 killed in action” listed on the document, officials said.

    The document also says that 61,000-71,500 Ukrainians have been killed in action, a number that officials said also appeared edited to be higher than actual Pentagon estimates.

    A fourth document is a weather projection from February, listed as secret, that assesses where the ground may freeze in Ukraine in a way that would be favorable for vehicle maneuver.

    The New York Times, which first disclosed the Pentagon investigation, reported that some of the images circulating online describe intelligence that could be useful to Russia, such as how quickly the Ukrainians are expending munitions used in US-provided rocket systems.

    Podolyak called the documents “a bluff, dust in your eyes” and said that “if Russia really did receive real scenario preparations, it would hardly make them public.”

    “Russia is looking for any way to seize the information initiative, to try to influence the scenario plans for Ukraine’s counteroffensive,” he said. “To raise doubts, compromise previous ideas and frighten with their ‘awareness.’ But these are just standard elements of the Russian intelligence’s operational game and nothing more. It has nothing to do with Ukraine’s real plans.”

    Podolyak added that Russian troops “will get acquainted” with Ukraine’s real counteroffensive plans “very soon.”

    Asked about the images circulating on Twitter and Telegram, Kremlin spokesperson Dmitry Peskov told CNN in a statement that “we don’t have the slightest doubt about direct or indirect involvement of the United States and NATO in the conflict between Russia and Ukraine.”

    “This level of involvement is rising, is rising gradually,” he said. “We keep our eye on this process. Well, of course, it makes the whole story more complicated, but it cannot influence the final outcome of the special operation.”

    This story has been updated with additional details.


  • Elon Musk says he’s found a new CEO for Twitter | CNN Business



    New York (CNN) — 

    Elon Musk on Thursday said he’s found a new CEO to take over Twitter, months after he first promised to step back from the role.

    The new CEO will assume the role at Twitter Inc., which recently changed its name to X Corp., in the coming weeks, Musk said. He did not provide a name.

    “Excited to announce that I’ve hired a new CEO for X/Twitter. She will be starting in ~6 weeks!” Musk said in a tweet.

    Musk, who has had a chaotic reign as “Chief Twit” since buying the company in October, said he will become Twitter’s executive chair and chief technology officer, overseeing product, software and system operations.

    In December, Musk ran a poll on the platform asking users whether he should step back as Twitter’s CEO, which ended with the majority of users voting in the affirmative. Musk said he would abide by the results of the poll but later backtracked, saying he would hand over the role “as soon as I find someone foolish enough to take the job!” In February, he reiterated that he planned to find a replacement by the end of the year.

    Musk has faced criticism for a series of policy changes at Twitter, which often came without clear justification and raised concerns about the impact on Twitter’s users.

    He has also been attempting to convince advertisers to rejoin the platform, after many fled over concerns about hateful conduct on the platform, Twitter’s mass layoffs or questions about the company’s future. At the same time, he has been trying to sell users on a new paid subscription platform that includes the ability to pay for a blue verification check mark, but appears to have limited traction so far.

    Musk — who runs or is involved in numerous other companies, including Tesla (TSLA) — has also faced criticism from Tesla shareholders concerned that he is distracted by Twitter.

    Musk recently said that Twitter is now “trending to breakeven,” after previously saying it was at risk of bankruptcy. Now, the company’s new CEO will be tasked with trying to help turn around the struggling company and help Musk recoup some of the $44 billion spent acquiring the platform.

    Even as Musk prepares to step back from the CEO role, he will likely maintain significant control over the future direction of the company. After taking over the company in October, Musk cleared out the C-suite, dissolved the board and became both the CEO and sole director of the platform.


  • ‘We no longer know what reality is.’ How tech companies are working to help detect AI-generated images | CNN Business



    New York (CNN) — 

    For a brief moment last month, an image purporting to show an explosion near the Pentagon spread on social media, causing panic and a market sell-off. The image, which bore all the hallmarks of being generated by AI, was later debunked by authorities.

    But according to Jeffrey McGregor, the CEO of Truepic, it is “truly the tip of the iceberg of what’s to come.” As he put it, “We’re going to see a lot more AI generated content start to surface on social media, and we’re just not prepared for it.”

    McGregor’s company is working to address this problem. Truepic offers technology that claims to authenticate media at the point of creation through its Truepic Lens. The application captures data including date, time, location and the device used to make the image, and applies a digital signature to verify if the image is organic, or if it has been manipulated or generated by AI.

    Truepic, which is backed by Microsoft, was founded in 2015, years before the launch of AI-powered image generation tools like Dall-E and Midjourney. Now McGregor says the company is seeing interest from “anyone that is making a decision based off of a photo,” from NGOs to media companies to insurance firms looking to confirm a claim is legitimate.

    “When anything can be faked, everything can be fake,” McGregor said. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”

    Tech companies like Truepic have been working to combat online misinformation for years, but the rise of a new crop of AI tools that can quickly generate compelling images and written work in response to user prompts has added new urgency to these efforts. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral and AI-generated images of former President Donald Trump getting arrested were widely shared, shortly before he was indicted.

    Some lawmakers are now calling for tech companies to address the problem. Vera Jourova, vice president of the European Commission, on Monday called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”

    A growing number of startups and Big Tech companies, including some that are deploying generative AI technology in their products, are trying to implement standards and solutions to help people determine whether an image or video is made with AI. Some of these companies bear names like Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.

    But as AI technology develops faster than humans can keep up, it’s unclear whether these technical solutions will be able to fully address the problem. Even OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own effort to help detect AI-generated writing, rather than images, is “imperfect,” and warned it should be “taken with a grain of salt.”

    “This is about mitigation, not elimination,” Hany Farid, a digital forensic expert and professor at the University of California, Berkeley, told CNN. “I don’t think it’s a lost cause, but I do think that there’s a lot that has to get done.”

    “The hope,” Farid said, is to get to a point where “some teenager in his parents’ basement can’t create an image and swing an election or move the market half a trillion dollars.”

    Companies are broadly taking two approaches to address the issue.

    One tactic relies on developing programs to identify images as AI-generated after they have been produced and shared online; the other focuses on marking an image as real or AI-generated at its conception with a kind of digital signature.

    Reality Defender and Hive Moderation are working on the former. With their platforms, users can upload existing images to be scanned and then receive an instant breakdown with a percentage indicating the likelihood that the image is real or AI-generated, based on a large amount of data.

    Reality Defender, which launched before “generative AI” became a buzzword and was part of competitive Silicon Valley tech accelerator Y Combinator, says it uses “proprietary deepfake and generative content fingerprinting technology” to spot AI-generated video, audio and images.

    In an example provided by the company, Reality Defender highlights an image of a Tom Cruise deepfake as 53% “suspicious,” telling the user it has found evidence showing the face was warped, “a common artifact of image manipulation.”

    Defending reality could prove to be a lucrative business if the issue becomes a frequent concern for businesses and individuals. These services offer limited free demos as well as paid tiers. Hive Moderation said it charges $1.50 for every 1,000 images as well as “annual contract deals” that offer a discount. Reality Defender said its pricing may vary based on various factors, including whether the client needs “any bespoke factors requiring our team’s expertise and assistance.”

    “The risk is doubling every month,” Ben Colman, CEO of Reality Defender, told CNN. “Anybody can do this. You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware. Anybody can do this just by Googling ‘fake face generator.’”

    Kevin Guo, CEO of Hive Moderation, described it as “an arms race.”

    “We have to keep looking at all the new ways that people are creating this content, we have to understand it and add it to our dataset to then classify the future,” Guo told CNN. “Today it’s a small percent of content for sure that’s AI-generated, but I think that’s going to change over the next few years.”

    In a different, preventative approach, some larger tech companies are working to integrate a kind of watermark into images to certify media as real or AI-generated when they’re first created. The effort has so far largely been driven by the Coalition for Content Provenance and Authenticity, or C2PA.

    The C2PA was founded in 2021 to create a technical standard that certifies the source and history of digital media. It combines efforts by the Adobe-led Content Authenticity Initiative (CAI) and Project Origin, a Microsoft- and BBC-spearheaded initiative that focuses on combating disinformation in digital news. Other companies involved in C2PA include Truepic, Intel and Sony.

    Based on the C2PA’s guidelines, the CAI makes open source tools for companies to create content credentials, or the metadata that contains information about the image. This “allows creators to transparently share the details of how they created an image,” according to the CAI website. “This way, an end user can access context around who, what, and how the picture was changed — then judge for themselves how authentic that image is.”
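    The content-credential idea described above can be illustrated with a toy sketch. This is not the actual C2PA format — the field names, key handling and HMAC-based signing here are simplifying assumptions for illustration only — but it shows the core mechanism: metadata about an image is bundled with a hash of the image bytes and signed, so any later edit to the image or its metadata invalidates the signature.

    ```python
    import hashlib
    import hmac
    import json

    # Stand-in for a real signing key; real systems use asymmetric keys
    # and certificate chains, not a shared secret (assumption for the demo).
    SECRET_KEY = b"demo-signing-key"

    def make_credential(image_bytes: bytes, metadata: dict) -> dict:
        """Bundle metadata with a hash of the image and sign the bundle."""
        payload = {
            "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
            "metadata": metadata,
        }
        serialized = json.dumps(payload, sort_keys=True).encode()
        payload["signature"] = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
        return payload

    def verify_credential(image_bytes: bytes, credential: dict) -> bool:
        """Recompute the signature; any edit to image or metadata breaks it."""
        claimed = credential["signature"]
        payload = {k: v for k, v in credential.items() if k != "signature"}
        if payload["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
            return False  # image bytes were altered after signing
        serialized = json.dumps(payload, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
        return hmac.compare_digest(claimed, expected)

    image = b"\x89PNG...fake image bytes"
    cred = make_credential(image, {"tool": "example-generator", "created": "2023-06-01"})
    assert verify_credential(image, cred)             # untouched image verifies
    assert not verify_credential(image + b"x", cred)  # any edit is detected
    ```

    The same property is what lets an end user “judge for themselves how authentic that image is”: a valid signature proves the metadata and pixels are unchanged since signing, while a failed check flags tampering.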

    “Adobe doesn’t have a revenue center around this. We’re doing it because we think this has to exist,” Andy Parsons, Senior Director at CAI, told CNN. “We think it’s a very important foundational countermeasure against mis- and disinformation.”

    Many companies are already integrating the C2PA standard and CAI tools into their applications. Adobe’s Firefly, an AI image generation tool recently added to Photoshop, follows the standard through the Content Credentials feature. Microsoft also announced that AI art created by Bing Image Creator and Microsoft Designer will carry a cryptographic signature in the coming months.

    Other tech companies like Google appear to be pursuing a playbook that pulls a bit from both approaches.

    In May, Google announced a tool called About this image, offering users the ability to see when images found on its site were originally indexed by Google, where images might have first appeared and where else they can be found online. The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.

    While tech companies are trying to tackle concerns about AI-generated images and the integrity of digital media, experts in the field stress that these businesses will ultimately need to work with each other and the government to address the problem.

    “We’re going to need cooperation from the Twitters of the world and the Facebooks of the world so they start taking this stuff more seriously, and stop promoting the fake stuff and start promoting the real stuff,” said Farid. “There’s a regulatory part that we haven’t talked about. There’s an education part that we haven’t talked about.”

    Parsons agreed. “This is not a single company or a single government or a single individual in academia who can make this possible,” he said. “We need everybody to participate.”

    For now, however, tech companies continue to move forward with pushing more AI tools into the world.


  • New lawsuit claims Elon Musk’s Twitter owes more severance to former employees | CNN Business



    New York (CNN) — 

    A former Twitter employee on Wednesday filed a new lawsuit against Twitter and its owner, Elon Musk, alleging that the company failed to provide the full amount of severance it had promised employees prior to mass layoffs last November.

    The lawsuit, which was filed in federal district court in California and seeks class action status, asks the court to order Musk and Twitter to pay the additional severance benefits allegedly owed to former employees, in an amount no less than $500 million.

    The complaint was brought on behalf of Courtney McMillian, a former human resources leader at Twitter who was part of the mass layoffs Musk conducted the week after he bought the company last year. It alleges that Twitter made repeated assurances to employees about its severance plan amid Musk’s takeover in an effort to retain workers. In particular, the complaint claims that Twitter had promised senior employees severance of six months of base pay plus one week for every year of service, in addition to other benefits. Instead, Musk’s Twitter provided laid-off employees with a total of three months of pay, including the state and federally mandated notice periods.

    In response to a request for comment on the lawsuit, Twitter sent CNN an automated poop emoji.

    In his nine months owning the company, Musk has cut around 80% of Twitter’s staff relative to its pre-takeover headcount.

    The lawsuit is just the latest legal action brought against Twitter by former employees with severance-related claims. More than 1,500 former employees have filed arbitration claims, after Twitter pushed for anyone who had signed an arbitration agreement while working at the company to pursue their claims out of court.

    But Kate Mueting, a lawyer working on the suit, said that Wednesday’s case relies on a federal law, the Employee Retirement Income Security Act, that the firm argues was exempt from Twitter’s arbitration agreement. That means that, if the suit is granted its request for class action status, former employees may be able to participate whether or not they signed the arbitration agreement.

    Twitter is also facing lawsuits from vendors, landlords and business partners who claim the company has failed to pay what they are owed, as well as music publishers who have alleged copyright infringement on the platform. A lawyer for the company last week also sent a letter threatening to sue Meta over its new rival platform, Threads.


  • TikTok brings in text posts to rival Elon Musk’s X | CNN Business



    London (CNN) — 

    TikTok will now allow users to post text-only content for the first time in a challenge to Elon Musk’s beleaguered X, formerly known as Twitter.

    Announcing the new post format Monday, the video streaming platform said it would broaden “options for creators to share their ideas and express their creativity.”

    “With text posts, we’re expanding the boundaries of content creation for everyone on TikTok, giving the written creativity we’ve seen in comments, captions, and videos a dedicated space to shine,” the company said in a statement.

    Users are now able to share “stories, poems, recipes, and other written content,” which can be customized by adding sound, stickers and background colors, among other features.

    In perhaps the most direct challenge to the X platform, text posts on TikTok will allow users to tag other accounts and add hashtags that relate to trending topics.

    The latest move by TikTok, which is owned by China’s ByteDance, may prove to be another knock for Musk, whose takeover of X in October has resulted in mass layoffs, a huge drop in advertising revenue and controversial changes to the platform’s verification policy.

    Earlier this month, Facebook’s parent company, Meta, launched Threads, a rival social media site. Threads surpassed 100 million user sign-ups in its first week.

    Musk rebranded Twitter as X on Monday, giving the platform a new website domain and logo.


  • Academic researchers blast Twitter’s data paywall as ‘outrageously expensive’ | CNN Business



    Washington (CNN) — 

    After Twitter announced in February it would begin charging third parties to access its platform data, academic researchers warned that the vaguely worded plan could threaten important studies about how misinformation, harassment and other malicious activity spreads online.

    Now, as Twitter has released more pricing information, many of those same academics say their fears were well-founded. They complain that Twitter’s new tiered paywall not only charges “outrageously expensive” prices but also restricts the amount of accessible data so heavily that what little researchers can see, even on the most expensive tiers, is not useful for rigorous study.

    Twitter, which has cut much of its public relations team under CEO Elon Musk, automatically responded to a request for comment with an email containing a poop emoji.

    In an open letter this week, the Coalition for Independent Technology Research — a group representing dozens of researchers and civil society organizations — said free and open access to Twitter data has historically enabled systematic, large-scale research on social media’s role in public health initiatives, foreign propaganda, political discourse, and even the bots and spam that Musk has blamed for ruining Twitter.

    But Twitter’s new tiered access system undercuts all of that, the researchers said. The company’s pricing that launched last week, starting at $100 per month for a “basic” amount of data, does not provide nearly enough volume for users at the low end, while the high end “ranges from $42,000 to $210,000 per month [and] is unaffordable for researchers,” the letter said.

    The new basic tier limits users to reading just 10,000 tweets per month. That represents 0.3% of what researchers used to be able to collect in a single day, the letter said.

    Even under the most expensive “enterprise” tier costing upwards of $2.5 million a year, Twitter is offering only a fraction of the tweets it used to, the letter continued. Before the change, researchers could pay about $500 a month for the ability to access up to 10% of the roughly 1 billion tweets a month that flow across Twitter’s platform.

    Now, though, “the most expensive Enterprise tier would cut that by 80% at about 400 times the price,” the researchers’ letter said.

    Asking researchers to pay orders of magnitude more for a fifth of the access they once had represents a barrier to accountability and transparency, the letter added.

    “Under the new pricing plans, studying the communications and interactions of even a small population—such as the 535 Members of the U.S. Congress or the 705 Members of the European Parliament—will be unfeasible,” the letter said. “The new pricing plans will also end at least 76 long-term efforts, including dashboards, tools, or code packages that support other researchers, journalists, first-responders, educators, and Twitter users.”


  • Twitter removes transgender protections from hateful conduct policy | CNN Business



    New York (CNN) — 

    Twitter appears to have quietly rolled back a portion of its hateful conduct policy that included specific protections for transgender people.

    The policy previously stated that Twitter prohibits “targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.” But the second line was removed earlier this month, according to archived versions of the page from the Wayback Machine.

    Twitter also removed a line from the policy detailing certain groups of people often subject to disproportionate abuse online, including “women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, asexual individuals, and marginalized and historically underrepresented communities.”

    The platform first introduced its policy prohibiting misgendering and deadnaming (referring to a person’s pre-transition name) of transgender people in 2018 as part of a broader overhaul of its hateful conduct policy.

    The change to the hateful conduct policy is one of a number of updates Twitter has made to its safety and content moderation practices since Elon Musk took over the company last fall. Twitter has also restored the accounts of users who had previously been banned for violating its rules, stopped enforcing its Covid-19 misinformation policy, allowed users to purchase blue verification checkmarks and applied controversial new labels to the accounts of several news organizations.

    LGBTQ advocacy group GLAAD called out the hateful conduct policy change in a Tuesday statement.

    “Twitter’s decision to covertly roll back its longtime policy is the latest example of just how unsafe the company is for users and advertisers alike,” GLAAD President and CEO Sarah Kate Ellis said. “This decision to roll back LGBTQ safety pulls Twitter even more out of step with TikTok, Pinterest, and Meta, which all maintain similar policies to protect their transgender users at a time when anti-transgender rhetoric online is leading to real world discrimination and violence.”

    Twitter did not respond to a request for comment about the change, although the platform did announce earlier this week some other updates to how it enforces its hateful conduct policy. The platform said it plans to start applying labels to some tweets that violate its hateful conduct policy and reduce their visibility, a similar practice to the one used under the company’s previous leadership, under which it either reduced the visibility of or removed violative tweets.

    “Restricting the reach of Tweets helps reduce binary ‘leave up versus take down’ content moderation decisions and supports our freedom of speech vs freedom of reach approach,” the company said in a tweet. Twitter also said it will not place ads next to content that has been labeled as violative.

    Musk has been in the process of trying to encourage advertisers to return to the platform, after many paused their spending over concerns about Musk’s policy changes, increased hate speech on the platform and massive cuts to the company’s workforce, threatening the company’s core business.

    The billionaire tried to assuage advertisers about Twitter’s approach to hateful conduct at a marketing conference Tuesday, saying, “If somebody has something hateful to say, it doesn’t mean you should give them a megaphone,” according to a report from the Wall Street Journal.

    Musk has faced a number of criticisms from some in the transgender community, most notably from his transgender daughter Vivian Jenna Wilson. Last year, she petitioned a court in California to change her last name to that of her mother, Justine Wilson, Musk’s ex-wife and mother of five of his seven children, because she no longer wanted to be related to her father “in any way, shape or form.”

    Musk has also posted several tweets mocking the use of people choosing the pronouns they want applied to them. One tweet in December 2020, which he later deleted, said “when you put he/him in your bio” alongside a drawing of an 18th century soldier rubbing blood on his face in front of a pile of dead bodies and wearing a cap that read “I love to oppress.”

    And this past December, Musk, a vocal critic of many Covid restrictions and protocols, tweeted, “My pronouns are Prosecute/Fauci.”

    But in other tweets, Musk has insisted he has no problems with transgender people, saying that his problem is with “all these pronouns,” which he called an “esthetic nightmare.” He also pointed out that his auto company Tesla (TSLA) has repeatedly scored a 100% rating from the Human Rights Campaign as being one of the “Best Places to Work for LGBTQ+ Equality.”

    — CNN’s Chris Isidore contributed to this report


  • Elon Musk says Twitter has ‘no actual choice’ about government censorship requests | CNN Business



    New York (CNN) — 

    Criticized for giving in to governments’ censorship demands, Elon Musk on Sunday claimed that Twitter has “no actual choice” about complying with those requests.

    The comment comes after Musk has previously called himself a “free speech absolutist” and said he wanted to buy Twitter to bolster users’ ability to speak freely on the platform. Shortly after agreeing to acquire Twitter, Musk explained his approach to free speech by saying: “Is someone you don’t like allowed to say something you don’t like? And if that is the case, then we have free speech.”

    He added at the time that Twitter would “be very reluctant to delete things” and “be very cautious with permanent bans,” and that the platform would aim to allow all legal speech.

    But Musk has faced blowback in recent weeks for appearing to cave to government censorship demands, including by removing some accounts and tweets at the behest of the government of Turkey ahead of the country’s elections (which the company later said it would attempt to fight in court). And in an interview with the BBC last month, Musk was asked about whether Twitter had removed a documentary about Indian Prime Minister Narendra Modi at the request of the Indian government, and said he didn’t know “what exactly happened.”

    Bloomberg columnist Matthew Yglesias on Sunday tweeted an article suggesting that Twitter has complied with a majority of government takedown requests since Musk took over as the platform’s owner. Musk replied: “Please point out where we had an actual choice and we will reverse it.”

    Musk has previously said the company would comply with laws governing social media companies around the world, although such laws in some cases appear to conflict with his free speech vision. Twitter did not respond to CNN’s request for comment.

    In last month’s interview with the BBC, Musk said, “the rules in India for what can appear on social media are quite strict, and we can’t go beyond the laws of a country … If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.” At another point in the interview, Musk said: “If people of a given country are against a certain type of speech, they should talk to their elected representatives and pass a law to prevent it.”

    “By ‘free speech,’ I simply mean that which matches the law,” Musk said in a tweet last year about his vision for Twitter. “I am against censorship that goes far beyond the law.”

    In some countries, Twitter could risk substantial fines and other penalties — including, potentially, bans of the platform — for not complying with local laws.

    However, prior to Musk’s takeover, Twitter frequently fought government takedown requests in court, including from India and Turkey, in addition to publicly releasing detailed information about such requests and how it handled them. In many cases, Twitter led the charge among social media companies in protecting its users’ rights around the world.

    In its last removal request report before Musk’s takeover, Twitter said it received more than 47,000 removal requests between July and December 2021, and complied with 51% of them. In many cases, when it did comply with a removal request because of a certain country’s laws, it removed the violating content only in that country, rather than globally.

    Musk was also criticized for backing down on his “free speech” vision when Twitter temporarily banned the accounts of several high-profile journalists in December, claiming that they had violated a new “doxxing” policy on the site. None of the banned journalists appeared to have shared Musk’s precise real-time location — the restrictions came after they reported on Twitter’s removal of an account that posts the updated location of Musk’s private jet.
