ReportWire

Tag: Meta Platforms Inc

  • Biden signs executive order with new framework to protect data transfers between the U.S. and EU

    President Joe Biden signed an executive order to implement a new framework to protect the privacy of personal data shared between the U.S. and Europe, the White House announced Friday.

    The new framework fills a significant gap in data protections across the Atlantic that has existed since a European court struck down a previous version in 2020. The court found the U.S. had too great an ability to surveil European data transferred through the earlier system.

    The court case, known as Schrems II, “created enormous uncertainty about the ability of companies to transfer personal data from the European Union to the United States in a manner consistent with EU law,” then-Deputy Assistant Commerce Secretary James Sullivan wrote in a public letter shortly after the decision. The outcome made it so U.S. companies would need to use different “EU-approved data transfer mechanisms” on an ad hoc basis, creating more complexity for businesses, Sullivan wrote.

    The so-called Privacy Shield 2.0 seeks to address European concerns about possible surveillance by U.S. intelligence agencies. In March, after the U.S. and EU agreed in principle to the new framework, the White House said in a fact sheet that the U.S. “committed to implement new safeguards to ensure that signals intelligence activities are necessary and proportionate in the pursuit of defined national security objectives.”

    The new framework will allow individuals in the EU to seek redress through an independent Data Protection Review Court made up of members outside of the U.S. government. That body “would have full authority to adjudicate claims and direct remedial measures as needed,” according to the March fact sheet.

    Before a matter reaches the DPRC, the civil liberties protection officer in the Office of the Director of National Intelligence will conduct an initial investigation of complaints. That officer’s decisions are also binding, subject to review by the independent court.

    The executive order directs the U.S. intelligence community to update policies and procedures to fit the new privacy protections in the framework. It also instructs the Privacy and Civil Liberties Oversight Board, an independent agency, to examine those updates and conduct an annual review of whether the intelligence community has fully adhered to binding redress decisions.

    “The EU-U.S. Data Privacy Framework includes robust commitment to strengthen the privacy and civil liberties safeguards for signals intelligence, which will ensure the privacy of EU personal data,” Commerce Secretary Gina Raimondo told reporters Thursday.

    Raimondo said she will transfer a series of documents and letters from relevant U.S. government agencies outlining the operation and enforcement of the framework to her EU counterpart, Commissioner Didier Reynders.

    The EU will then conduct an “adequacy determination” of the measures, the White House said. It will assess the sufficiency of the data protection measures in order to restore the data transfer mechanism.

    American tech companies and industry groups applauded the measure, with Meta’s president of global affairs, Nick Clegg, writing on Twitter, “We welcome this update to US law which will help to preserve the open internet and keep families, businesses and communities connected, wherever they are in the world.”

    Linda Moore, president and CEO of industry group TechNet, said in a statement, “We applaud the Biden Administration for taking affirmative steps to ensure the efficiency and effectiveness of American and European cross-border data flows and will continue to work with the Administration and members of Congress from both parties to pass a federal privacy bill.”

    But some consumer and data privacy watchdogs critiqued the extent of the data protections.

    BEUC, a European consumer group, said in a release that the framework “is likely still insufficient to protect Europeans’ privacy and personal data when it crosses the Atlantic.” The group added that “there are no substantial improvements to address issues related to the commercial use of personal data, an area where the previous agreement, the EU-US Privacy Shield, fell short of GDPR requirements,” referring to Europe’s General Data Protection Regulation.

    Ashley Gorski, senior staff attorney at the ACLU National Security Project, said in a statement that the order “does not go far enough. It fails to adequately protect the privacy of Americans and Europeans, and it fails to ensure that people whose privacy is violated will have their claims resolved by a wholly independent decision-maker.”

    — CNBC’s Chelsey Cox contributed to this report.



  • Amazon freezes corporate hiring in its retail business

    Amazon is pausing hiring for corporate roles in its retail business, according to a report published Tuesday by The New York Times.

    The company confirmed the accuracy of the report to CNBC.

    Amazon instructed recruiters to close all open job postings for those roles in the coming days, and recommended they cancel some recruiting activities, such as phone calls to screen new candidates, the Times reported, citing internal communications.

    Amazon spokesperson Brad Glasser said the retail giant continues to have a significant number of open roles across the company.

    “We have many different businesses at various stages of evolution, and we expect to keep adjusting our hiring strategies in each of these businesses at various junctures,” Glasser said in a statement.

    Photo: The Amazon headquarters sits virtually empty in downtown Seattle on March 10, 2020, after the company recommended employees work from home in response to the coronavirus outbreak. (John Moore | Getty Images)

    Amazon is the latest company to reevaluate its hiring plans amid concerns of an economic downturn. Several companies including Google, Apple and Meta have announced they will slow or temporarily pause hiring altogether. Companies are also looking for ways to cut costs to gird for potential headwinds.

    Amazon CEO Andy Jassy has worked swiftly to rein in costs as the company grapples with slowing growth in its core retail business, which still accounts for the lion’s share of Amazon’s revenue.

    The retail business enjoyed breakneck growth during the Covid-19 pandemic as consumers avoided trips to physical stores and flocked to online retailers. By early 2022, e-commerce spending began to decelerate, and Amazon in the first quarter reported its slowest rate of revenue growth since the dot-com bust in 2001.

    Jassy has assured investors he’s focused on returning to a “healthy level of profitability” after slowing retail sales and rising costs ate into Amazon’s earnings. In recent months, Amazon has closed or cancelled the launch of new facilities, and it’s delaying the opening of some new buildings after its pandemic-driven expansion left it with too much warehouse space.

    It has also closed nearly all of its U.S. call centers in a bid to save on real estate, Bloomberg reported.

    The company is also contending with too many workers after it went on a pandemic hiring spree. In the second quarter, Amazon shaved its headcount by 99,000 people to 1.52 million employees.



  • Kim Kardashian pays over $1 million to settle SEC charges linked to a crypto promo on her Instagram

    Photo: Kim Kardashian, who launched the private equity fund Skky Partners with co-founder Jay Sammons, a former partner at the investment firm Carlyle Group. (James Devaney | GC Images via Getty Images)

    Kim Kardashian’s crypto misadventure has landed her in hot water with federal regulators.

    The reality TV superstar and influencer has settled Securities and Exchange Commission charges that she failed to disclose a payment she received for touting a crypto asset on her Instagram feed, the agency announced Monday morning.

    “This case is a reminder that, when celebrities or influencers endorse investment opportunities, including crypto asset securities, it doesn’t mean that those investment products are right for all investors,” Gary Gensler, chairman of the SEC, said in a news release.

    Representatives for Kardashian didn’t immediately respond to a request for comment.

    Kardashian, who is reportedly worth $1.8 billion, agreed to pay $1.26 million to settle the charges over a promotion on Meta’s Instagram for EthereumMax’s crypto asset, the SEC said. She will also cooperate with an ongoing investigation, and has agreed not to promote crypto securities for three years, the regulator added.

    However, Kardashian, who has built a media and lifestyle empire, neither admitted to nor denied the regulator’s findings, the SEC said.

    Kardashian has already felt regulatory heat over her EthereumMax promo, which she posted on Instagram in June of last year. She started the post by asking her millions of followers, “ARE YOU INTO CRYPTO??? THIS IS NOT FINANCIAL ADVICE BUT SHARING WHAT MY FRIENDS JUST TOLD ME ABOUT THE ETHEREUM MAX TOKEN.”

    Investors sued her, former NBA star Paul Pierce and superstar boxer Floyd Mayweather Jr. earlier this year over their promos for EthereumMax, accusing them of artificially inflating the value of the asset.

    The SEC on Monday said Kardashian failed to report that she was paid $250,000 to publish a post about EMAX tokens, a crypto asset offered by EthereumMax. The post, which featured the hashtag “#ad,” included a link to the EthereumMax website, which gives users instructions about how to buy the tokens, the regulator added.

    Her failure to disclose the payment was a violation of federal securities laws, the SEC said. She agreed to pay $260,000, which includes the payment she received, plus interest, in addition to the $1 million penalty, the agency added.


  • Apple downgrade sparks tech sell-off, sending Alphabet and Microsoft to one-year lows

    Shares of large technology companies suffered heavy losses on Thursday, dragging down many other U.S. stocks along with them, after analysts at Bank of America lowered Apple’s stock rating.

    Tech stocks have been pushed down all year as investors have rotated out of growth and flocked to more defensive assets to deal with higher interest rates and to get ahead of a possible recession.

    The tech-heavy Nasdaq Composite rose on Tuesday and Wednesday, but the buying came after the worst two weeks since the onset of the Covid pandemic. Now the downward trend is back, with the Nasdaq off 2.8% on Thursday — its steepest one-day setback since Sept. 13. The broader S&P 500 fell 2.1%.

    Photo: Apple CEO Tim Cook speaks at an Apple special event at Apple Park in Cupertino, California, on September 7, 2022, where Apple was expected to unveil the new iPhone 14. (Brittany Hosea-Small | AFP via Getty Images)

    Apple shares declined nearly 5% as Bank of America analysts led by Wamsi Mohan changed their rating to neutral from buy, straying from the buy position held by a majority of analysts polled by FactSet.

    The analysts pointed to several risks, including a weaker buying cycle associated with the iPhone 14 that Apple released this month. One day earlier, a report said Apple had scrapped its plan to boost iPhone production by 6 million units in the second half of the year.

    Apple stock is now worth 20% less than it was at the end of 2021, while the Nasdaq is down 31% over the same period.

    Of the technology companies with the largest market valuations, Microsoft took the lightest blow. It ended Thursday’s trading session down about 1.5%, which was still a 52-week low. Google parent Alphabet also reached a 52-week low, dropping 2.6%. Shares of Facebook parent Meta Platforms slid 3.7%, Amazon declined 2.7% and Tesla was off 6.8%.

    Smaller growth-oriented tech companies also suffered, with Coinbase down nearly 8% after Wells Fargo initiated coverage with an underweight rating. Elsewhere, Shopify fell 8.45%, Rivian declined 7.9% and Roblox was off 7%.


  • FTC says Meta should be barred from monetizing data from younger users | CNN Business



    CNN —

    The Federal Trade Commission on Wednesday accused Facebook-parent Meta of violating its landmark $5 billion privacy settlement and called for toughening up restrictions on the company, after alleging Meta has improperly shared user data with third parties and failed to protect children as it has promised.

    The proposal to update the binding 2020 settlement with Meta marks a new front in the FTC’s long-running battle with the social media company, which has included multiple lawsuits aimed at breaking up the tech giant or preventing it from growing larger.

    The FTC said Meta should be banned from monetizing data it collects from younger users. It added that the company should be barred from releasing any new features or products until a third-party auditor determines the company’s privacy policies do enough to protect users. It also called for new limitations on how Meta can use facial recognition technology.

    If approved, the sweeping proposal could threaten the future of Meta’s business, including its expansion into virtual reality.

    In a statement on Wednesday, Meta spokesman Andy Stone called the FTC proposal “a political stunt” and vowed to contest the effort.

    “Despite three years of continual engagement with the FTC around our agreement, they provided no opportunity to discuss this new, totally unprecedented theory,” Stone said. “FTC Chair Lina Khan’s insistence on using any measure – however baseless – to antagonize American business has reached a new low.”

    The FTC proposal comes as policymakers at all levels of government have increasingly blamed social media for furthering a mental health crisis among young people, prompting calls for strict regulations on how tech platforms can use the personal information of users under 18, target them with automated recommendations or seek to boost their engagement in other ways. Many of those proposals have taken the form of broad-based legislation, but the FTC proposal would represent a novel approach by amending a past consent order in connection with a single company that influences more than a billion users.

    As part of the FTC’s call for changes, the agency said Meta had misled the public about its compliance with the historic settlement that resolved allegations surrounding the Cambridge Analytica data fiasco, as well as prior agreements with the agency.

    Meta had allowed personal information to leak to apps that users of the platform were no longer using, the FTC alleged. That data sharing, the FTC claimed, contradicted Meta’s public statements that it cuts off a third-party app’s access to Facebook users’ information if the users stop using the third-party app for 90 days.

    The FTC also alleged that multiple coding errors in a messaging app marketed to children, Messenger Kids, allowed users to connect to “unapproved contacts” in group video calls, and that the flaws went unresolved for weeks.

    Those flaws meant parents could not control who their kids were speaking to on the app, in contrast to claims by Meta that they could, according to the FTC.

    In addition to being a breach of Meta’s prior settlements, the alleged violations surrounding Messenger Kids also ran afoul of a federal children’s privacy law known as COPPA, the FTC said, because parents were not provided an opportunity to give Meta their consent before the company collected information on their kids.

    Meta will have 30 days to respond to the proposed findings and changes, the FTC said, before the commission votes to finalize them. The FTC can unilaterally approve updates to the settlement, but Meta would have the opportunity to appeal that move in federal court, according to an agency fact sheet.

    The FTC voted 3-0 to issue the proposed findings and changes, but one commissioner, Alvaro Bedoya, questioned whether the agency has the authority to impose such sweeping restrictions on Meta in light of the alleged violations.

    In a statement, Bedoya said he was skeptical whether there was enough of a connection between Meta’s alleged harms and the proposed remedies to legally sustain a complete ban on monetizing the data of young users.

    “I look forward to hearing additional information and arguments and will consider these issues with an open mind,” Bedoya said.


  • First on CNN: Senators press Google, Meta and Twitter on whether their layoffs could imperil 2024 election | CNN Business



    CNN —

    Three US senators are pressing Facebook-parent Meta, Google-parent Alphabet and Twitter about whether their layoffs may have hindered the companies’ ability to fight the spread of misinformation ahead of the 2024 elections.

    In a letter to the companies dated Tuesday, the lawmakers warned that reported staff cuts to content moderation and other teams could make it harder for the companies to fulfill their commitments to election integrity.

    “This is particularly troubling given the emerging use of artificial intelligence to mislead voters,” wrote Minnesota Democratic Sen. Amy Klobuchar, Vermont Democratic Sen. Peter Welch and Illinois Democratic Sen. Dick Durbin, according to a copy of the letter reviewed by CNN.

    Since purchasing Twitter in October, Elon Musk has slashed headcount by more than 80%, in some cases eliminating entire teams.

    Alphabet announced plans to cut roughly 12,000 workers across product areas and regions earlier this year. And Meta has previously said it would eliminate about 21,000 jobs over two rounds of layoffs, hitting across teams devoted to policy, user experience and well-being, among others.

    “We remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community – including our efforts to prepare for elections around the world,” Andy Stone, a spokesperson for Meta, said in a statement to CNN about the letter.

    Alphabet and Twitter did not immediately respond to a request for comment.

    The pullback at those companies has coincided with a broader industry retrenchment in the face of economic headwinds. Peers such as Microsoft and Amazon have also trimmed their workforces, while others have announced hiring freezes.

    But the social media companies are coming under greater scrutiny now in part due to their role facilitating the US electoral process.

    Tuesday’s letter asked Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai and Twitter CEO Linda Yaccarino how each company is preparing for the 2024 elections and for mis- and disinformation surrounding the campaigns.

    To illustrate their concerns, the lawmakers pointed to recent changes at Alphabet-owned YouTube to allow the sharing of false claims that the 2020 presidential election was stolen, along with what they described as content moderation “challenges” at Twitter since the layoffs.

    The letter, which seeks responses by July 10, also asked whether the companies may hire more content moderation employees or contractors ahead of the election, and how the platforms may be specifically preparing for the rise of AI-generated deepfakes in politics.

    Already, candidates such as Florida Gov. Ron DeSantis appear to have used fake, AI-generated images to attack their opponents, raising questions about the risks that artificial intelligence could pose for democracy.


  • Sarah Silverman sues OpenAI and Meta alleging copyright infringement | CNN Business



    CNN —

    Comedian Sarah Silverman and two authors are suing Meta and ChatGPT-maker OpenAI, alleging the companies’ AI language models were trained on copyrighted materials from their books without their knowledge or consent.

    The pair of lawsuits against OpenAI and Facebook-parent Meta were filed in a San Francisco federal court on Friday, and are both seeking class action status. Silverman, the author of “The Bedwetter,” is joined in filing the lawsuits by fellow authors Christopher Golden and Richard Kadrey.

    A new crop of AI tools has gained tremendous attention in recent months for their ability to generate written work and images in response to user prompts. The large language models underpinning these tools are trained on vast troves of online data. But this practice has raised concerns that these models may be sweeping up copyrighted works without permission – and that these works could ultimately be used to train tools that upend the livelihoods of creatives.

    The complaint against OpenAI claims that “when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works.” The authors “did not consent to the use of their copyrighted books as training material for ChatGPT,” according to the complaint.

    The complaint against Meta similarly claims that the company used the authors’ copyrighted books to train LLaMA, the set of large language models released by Meta in February. The suit claims that much of the material used to train Meta’s language models “comes from copyrighted works—including books written by Plaintiffs—that were copied by Meta without consent, without credit, and without compensation.”

    The suit against Meta also alleges that the company accessed the copyrighted books via an online “shadow library” website that includes a large quantity of copyrighted material.

    Meta declined to comment on the lawsuit. OpenAI did not immediately respond to a request for comment.

    The legal action from Silverman isn’t the first to focus on how large language models are trained. A separate lawsuit filed against OpenAI last month alleged the company misappropriated vast swaths of peoples’ personal data from the internet to train its AI tools. (OpenAI did not respond to a request for comment on the suit.)

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needed to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.


  • You can now apply for your share of a $725 million Facebook data privacy settlement. Here’s how | CNN Business


    New York (CNN) —

    Facebook users who had an active account at any point between May 2007 and December 2022 can now apply to receive a piece of parent company Meta’s $725 million settlement related to the Cambridge Analytica scandal.

    Meta in December agreed to the payment to settle a longstanding class action lawsuit accusing it of allowing Cambridge Analytica and other third parties to access private user information and misleading users about its privacy practices.

    The legal battle began four years ago, following international outcry over the company’s disclosure that the private information of as many as 87 million Facebook users was obtained by Cambridge Analytica, a data analytics firm that worked with the Trump campaign.

    The California judge overseeing the case granted preliminary approval of the settlement late last month, and Facebook users can now apply for a cash payment as part of the settlement.

    The claim form — which requires a few personal details and information about a user’s Facebook account — can be filled out online or printed and submitted by mail. The form takes only a few minutes to complete and must be submitted by August 25 to be included as part of the settlement.

    Any US Facebook user who had an active account sometime between May 24, 2007, and December 22, 2022, is eligible to be part of the settlement class, including those who have since deleted their accounts.

    It’s not yet clear how much each settlement payment will be. The fund will be distributed to class members who submit valid claims based on how long they had an active Facebook account during the relevant period, according to a frequently asked questions page on the settlement site.

    A final settlement approval hearing is set for September 7. Settlement payments will be distributed after the court’s approval, assuming there are no appeals.

    Meta did not admit wrongdoing as part of the settlement. Facebook has made changes in the wake of the Cambridge Analytica incident, including restricting third-party access to user data and improving communications to users about how their data is collected and shared.

    “We pursued a settlement as it’s in the best interest of our community and shareholders,” Meta spokesperson Dina Luce said in a statement following the December settlement agreement. “Over the last three years we revamped our approach to privacy and implemented a comprehensive privacy program. We look forward to continuing to build services people love and trust with privacy at the forefront.”


  • Tax prep companies shared private taxpayer data with Google and Meta for years, congressional probe finds | CNN Business



    CNN —

    Some of America’s largest tax-prep companies have spent years sharing Americans’ sensitive financial data with tech titans including Meta and Google in a potential violation of federal law — data that in some cases was misused for targeted advertising, according to a seven-month congressional investigation.

    The report highlights what legal experts described to CNN as a “five-alarm fire” for taxpayer privacy that could lead to government and private lawsuits, criminal penalties or perhaps even a “mortal blow” for some industry giants involved in the probe including TaxSlayer, H&R Block and TaxAct.

    Using visitor tracking technology embedded on their websites, the three tax-prep companies allegedly sent tens of millions of Americans’ personal information to the tech industry without consent or appropriate disclosures, according to the congressional report reviewed by CNN.

    Beyond ordinary personal data such as people’s names, phone numbers and email addresses, the list of information shared also included taxpayer data — details about people’s filing status, adjusted gross income, the size of their tax refunds and even information about the buttons and text fields they clicked on while filling out their tax forms, which could reveal what tax breaks they may have claimed or which government programs they use, according to the report.

    The report, which drew on congressional interviews and written testimony from Meta, Google and the tax-prep companies, also found that every taxpayer who used TaxAct’s IRS Free File service while the tracking was enabled would have had their information shared with the tech companies. Some of the tax-prep companies still do not know whether the data they shared continues to be held by the tech platforms, the report said.

    “On a scale from one to 10, this is a 15,” said David Vladeck, a law professor at Georgetown University and a former consumer protection chief at the Federal Trade Commission, the country’s top privacy watchdog. “This is as great as any privacy breach that I’ve seen other than exploiting kids. This is a five-alarm fire, if what we know about this so far is true.”

    It is also an example, Vladeck said, of why the United States needs federal legislation guaranteeing every American a basic right to data privacy — an issue that has languished in Congress for years despite electronic data becoming an ever-larger part of the global economy.

    The congressional findings represent the latest claims of wrongdoing to hit the embattled tax-prep industry after a report last year by the investigative journalism outlet The Markup highlighted the tracking practice.

    Wednesday’s bombshell report adds to those earlier revelations by identifying a previously unreported category of data that was allegedly being collected and shared: the webpage titles in online tax software that can reveal what tax forms users have accessed, said an aide to Democratic Sen. Elizabeth Warren, who helped lead the congressional probe. For example, taxpayers who entered information about their college savings contributions or rental income may have done so on webpages bearing titles reflecting that information, which would then have been shared with the tech companies, the aide said.

    During the probe, Meta told investigators it used the taxpayer data it received to target third-party ads to users of its platform and to train its artificial intelligence algorithms, the report said. The Warren aide told CNN it was unclear whether Meta knew it was inappropriately using taxpayer data at the time. A Meta spokesperson said the company instructs its partners not to use its tools to share sensitive information and that Meta’s systems are “designed to filter out potentially sensitive data it is able to detect.”

    The technology behind the data collection, known as a tracking pixel, is commonly used across the entire internet. A small snippet of code that website owners can insert onto their sites, tracking pixels gather information that can help companies, including but not limited to Meta and Google, understand the behavior or interests of website visitors.
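    To illustrate the mechanism described above: a tracking pixel is typically a tiny image whose URL carries page metadata in its query string, so whatever the site embeds in that URL — such as a page title revealing which tax form a visitor is viewing — reaches the collector's servers when the browser fetches the image. The sketch below is a hypothetical, simplified example; the endpoint, parameter names and function are invented for illustration and do not reflect Meta's or Google's actual tracking APIs.

```python
from urllib.parse import urlencode

def pixel_url(collector: str, page_title: str, event: str) -> str:
    """Build the URL a hypothetical 1x1 tracking image would request.

    Everything placed in the query string is transmitted to the
    collector when the browser loads the image -- here, a page title
    that could reveal which tax form a visitor is filling out.
    """
    params = urlencode({"page_title": page_title, "event": event})
    return f"https://{collector}/pixel.gif?{params}"

url = pixel_url("tracker.example", "Form 1098-E: Student Loan Interest", "PageView")
print(url)
# -> https://tracker.example/pixel.gif?page_title=Form+1098-E%3A+Student+Loan+Interest&event=PageView
```

As the congressional report notes, even seemingly innocuous fields like page titles can encode sensitive taxpayer details once they ride along in requests like this one.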

    Because of the tracking technology used by TaxAct, TaxSlayer and H&R Block, “every single taxpayer who used their websites to file their taxes could have had at least some of their data shared,” the report said.

    The tax-prep companies at the center of the investigation told lawmakers the collected data had been scrambled to help protect privacy, according to the report. But the report also said some of the tax-prep firms themselves were not fully aware of how much information was being exposed to the tech platforms, and the report cited past FTC research concluding that even “anonymized” data can be easily reverse-engineered to identify a person.

    The pixels’ use in a taxpayer context resulted in the “reckless” sharing of legally protected data that could put taxpayers at risk, according to the report by Warren and her Democratic colleagues Sens. Ron Wyden, Richard Blumenthal, Tammy Duckworth and Sheldon Whitehouse; Sen. Bernie Sanders, an independent who caucuses with Democrats; and Democratic Rep. Katie Porter.

    The FTC, the Internal Revenue Service, the Justice Department and the Treasury Inspector General for Tax Administration “should fully investigate this matter and prosecute any company or individuals who violated the law,” the lawmakers wrote in a letter dated Tuesday to the agencies and obtained by CNN. The FTC and DOJ declined to comment; the IRS and TIGTA didn’t immediately respond to a request for comment.

    In a statement, H&R Block said it takes client privacy “very seriously, and we have taken steps to prevent the sharing of information via pixels.” Wednesday’s report said H&R Block had testified to using the tracking technology for “at least a couple of years.”

    TaxAct and TaxSlayer didn’t immediately respond to a request for comment. The report said TaxAct had been using Meta’s tools since 2018 and Google’s since about 2014, while TaxSlayer began using Meta’s tools in 2018 and Google’s in 2011. The investigation found that all three tax-prep companies had discontinued their use of Meta’s pixel after The Markup’s report last November.

    Intuit, the maker of TurboTax, received an initial inquiry letter from the lawmakers in December but was not a focus of Wednesday’s report because the company did not use tracking pixels to the same extent, the investigation found.

    Tax preparation firms have faced mounting scrutiny in recent years amid reports that many have turned to data harvesting as a business model and that the largest among them have spent millions lobbying against legislation that could make it easier for Americans to file their tax returns. An IRS report this year found that 72% of Americans would be interested in using a free, electronic tax filing service if it were provided by the agency as an alternative to private online filing services. The IRS plans to launch a pilot version of that service to a limited number of taxpayers in the 2024 tax filing season.

    Google told CNN it prohibits business customers from uploading to its platform sensitive data that could be traced back to a person.

    “We have strict policies and technical features that prohibit Google Analytics customers from collecting data that could be used to identify an individual,” a Google spokesperson said. “Site owners — not Google — are in control of what information they collect and must inform their users of how it will be used. Additionally, Google has strict policies against advertising to people based on sensitive information.”

    Wednesday’s report focuses more heavily on Meta’s use of taxpayer data, the Warren aide told CNN, because Google did not appear to have used the information for its own commercial purposes as overtly as Meta did, and because the investigation was unable to fully determine whether Google may have used the data for other applications.

    The allegations could nevertheless create extensive legal risk for both the tech companies and the tax-preparation firms, according to tax and privacy legal experts.

    The tax-prep companies could face billions in fines under US tax law if the federal government decides to sue, said Steven Rosenthal, a senior fellow at the Urban-Brookings Tax Policy Center. In addition, the US government could seek criminal penalties.

    “The scope of ‘taxpayer information’ is broad by design,” Rosenthal said, adding that tax-prep companies can be sued for “knowingly” or “recklessly” leaking that information. “The companies shouldn’t be sharing it in a way that some third party could obtain it.”

    Theoretically, he said, the tax code also affords individual taxpayers the right to file private lawsuits against the tax-prep companies. But most if not all of those firms require customers to submit to mandatory arbitration that could realistically make bringing a private claim more challenging, said the Warren aide.

    Apart from the tax code, both the tech giants and the tax-prep firms could also face civil liability from the FTC — which can police data breaches and hold companies accountable for their commitments to user privacy — and potentially from state governments that have their own privacy laws on the books, said Vladeck.

    Depending on the strength of the allegations, the tax-prep companies could quickly be forced into a binding settlement, said a former FTC official who requested anonymity in order to speak more freely.

    “If the facts are really strong, these companies would probably rather settle than go to court. This is very embarrassing,” the former official said. “It could be a mortal blow to the tax prep companies.”


  • Meta phased out Covid-19 content labels after finding they did little to combat misinformation, Oversight Board says | CNN Business





    CNN —

    Late last year, Facebook-parent Meta quietly phased out certain content labels on its platforms that for much of the pandemic had directed viewers to its central Covid-19 information page, after internal research concluded the labels may be ineffective at changing attitudes or stopping the spread of misinformation, according to a report Thursday by the company’s external oversight board.

    Facebook rolled out the labels in early 2021, after coming under criticism for the spread of Covid-19 misinformation on its platforms during the first year of the pandemic. The company applied the labels to a wide range of claims both true and untrue about vaccines, treatments and other topics related to the virus.

    But Meta’s use of the labels began slowing on Dec. 19, and ended completely soon after, the report said, following the internal research. Study results provided to the Meta Oversight Board, a quasi-judicial body, showed that the company’s labels appeared to have “no detectable effect on users’ likelihood to read, create or re-share” claims that had previously been rated as false by third-party fact-checkers or that discouraged the use of vaccines, the report said.

    The research focused on Meta’s direct labeling interventions as opposed to labels the company applies to content as part of its third-party fact-checking program. The research found that the more frequently a user was exposed to the labels, the less likely they were to visit the Covid-19 information center, which offers authoritative resources and information linked to the pandemic.

    “The company reported that initial research showed that these labels may have no effect on user knowledge and vaccine attitudes,” the report said.

    Meta’s internal research on the labels has not been previously released, and the Oversight Board on Thursday called for Meta to publish its findings as part of a broader review of the company’s handling of Covid-19 misinformation.

    The new details highlight the struggles platforms have faced in fighting misinformation and could raise broader questions about the efficacy of labeling and directing users to more accurate information. The report also comes at a time when some of the biggest social media companies, including Twitter and Meta, are either rolling back their Covid-19 misinformation policies or considering doing so.

    Meta should not relax its approach to Covid-19 misinformation as the company has proposed, the Oversight Board added. Until the World Health Organization determines that the pandemic has eased, Meta should instead continue to remove misinformation that violates the company’s policies, rather than shifting toward more lenient treatments such as labeling or downranking misleading information, the board said.

    Meta said Thursday it will publicly respond to the Oversight Board’s recommendations within 60 days.

    “We thank the Oversight Board for its review and recommendations in this case,” a company spokesperson said. “As Covid-19 evolves, we will continue consulting extensively with experts on the most effective ways to help people stay safe on our platforms.”

    In the past, Meta has touted its ability to direct users to the Covid-19 information center. Last July, the company said it had connected more than 2 billion people across 189 countries to trustworthy information through the portal.

    Some of those visits occurred through labels that Meta referred to internally as “neutral inform treatments,” or NITs, and “facts about ‘X’ informed treatments,” also known as FAXITs.

    The labels were automatically applied to content that Meta’s automated tools determined were about Covid-19, the Oversight Board said. The labels never directly addressed the claims within any given post, but they provided a link to the Covid-19 information center as well as more contextual information, including messages saying that vaccines have been proven safe and effective or that unapproved Covid-19 treatments could cause bodily harm. (Meta provided examples of a NIT and a FAXIT in its July 2022 request for Oversight Board guidance on whether it should relax its Covid-19 misinformation policy.)

    The decision to begin phasing out the labels came after Meta’s product and integrity teams ran an experiment studying Meta’s global userbase, the report said. The study found that users who were shown the labels approximately once a month were more likely on average to click through to the Covid-19 information center than users who were shown the labels both more and less frequently.

    In light of the results, Meta later told the Oversight Board it would stop using the labels altogether, to ensure they could remain effective in other public health emergencies, according to the report.

    While the Oversight Board’s report Thursday did not pass judgment on Meta’s decision to stop using the labels, it urged the company to reevaluate the 80 distinct types of claims that the company considers to be Covid-19 misinformation and therefore subject to removal from its platforms.

    Meta should perform the reassessments regularly, the Oversight Board said, consulting with public health officials to determine which claims on Meta’s banned list continue to be false or misleading and worthy of removal. Meta should also publish a record of when and how it updates that list, the board added.


  • Meta shuts down network of fake accounts that ‘signal a shift’ in China-based influence efforts | CNN Business




    New York CNN —

    Facebook’s parent company Meta announced Wednesday that it has taken down a network of more than 100 China-based accounts that posed as organizations in the US and Europe and pushed pro-Beijing talking points.

    The Facebook and Instagram accounts, which included a fictitious news organization and an account posing as a think tank, likely used deepfake images developed through artificial intelligence to make the fake accounts appear legitimate, Meta said.

    The network, which had more than 15,000 followers on Meta’s platforms, appears to have had some financial resources behind it. In one instance, the people behind the accounts called for protests in Budapest against George Soros, the billionaire philanthropist and frequent target of right-wing groups, and posted on Twitter an offer to pay people to attend. The accounts also offered to pay freelance writers to contribute to at least one of its websites.

    The accounts were awash with pro-China commentary, including “warnings against boycotting the 2022 Beijing Olympics; allegations of US foreign policy in Africa,” and “claims of comfortable living conditions for Uyghurs in China,” Meta said in its report. The fake accounts also posted “negative commentary about Uyghur activists and critics of the Chinese state,” it said.

    Meta did not link the network to the Chinese government, instead saying it found links to individuals in China associated with a technology company. CNN has reached out to the company for comment. Meta regularly takes down covert influence campaigns and discloses information about them in quarterly reports.

    The takedowns “signal a shift in the nature” of China-based influence networks, as Chinese operatives embrace new tactics like setting up a front company, hiring freelance writers around the world and offering to recruit protesters, Ben Nimmo, Meta’s global threat intelligence lead, told reporters on Tuesday.

    While the networks are generally small and have struggled to build an audience, “they are experimenting with diverse tactics and that’s always something we want to keep an eye on,” Nimmo said. 

    The tactics are similar to those used by Russian operatives during the 2016 US presidential election campaign. Using fake personas and posing as representatives of US political and activist organizations, Russians successfully recruited unwitting Americans to take part in political stunts.

    Chinese operatives have in recent years “evolved their posture” from being concerned about being caught influencing US elections to seeing influence operations as another tool to project power, a US official told CNN.

    “We’re keeping a close eye” on the Chinese influence operations heading into the 2024 election, the official said.

    Indictments from special counsel Robert Mueller’s team in 2018 detailed how disinformation campaigns from Russia were designed to exacerbate existing divisions in the United States.

    Ahead of the 2022 US midterm election, FBI officials expressed concern that Chinese operatives appeared to be engaging in “Russian-style influence activities” that stoke American divisions. Russian and Chinese government-affiliated operatives and organizations both promoted misinformation about the integrity of American elections that originated in the US during the midterm election season, FBI officials have said. 


  • Welcome to the era of viral AI generated ‘news’ images | CNN Business




    New York CNN —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation demonstrates a new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and videos. And these images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images based on short text prompts. On its website, Midjourney describes itself as “a small self-funded team,” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    Midjourney has emerged as a popular tool for users to create AI-generated images.

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI generation technologies grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • Despite TikTok ban threat, influencers are flocking to a new app from its parent company | CNN Business




    New York CNN —

    In the days after TikTok’s CEO was grilled by Congress for the first time, many TikTok users began posting about an alternative platform called Lemon8, sometimes with eerily similar language.

    Multiple creators described the app as being like “if Pinterest and Instagram had a baby, with TikTok’s algorithm.” Some compared it to TikTok circa 2020 and encouraged other influencers to join the app before it grows. They also asked followers to share their Lemon8 usernames in the comments.

    As it turned out, the app wasn’t just a random alternative to TikTok. Lemon8 is a social media platform launched in the United States earlier this year by TikTok’s Chinese parent company ByteDance amid federal and state efforts to ban or restrict TikTok in the country over national security concerns.

    The similarities in the videos comparing the new service to Instagram and Pinterest, which were posted by both English- and Spanish-speaking creators, raised questions about whether people were being paid to promote the new app on TikTok. But despite that speculation — and the mounting scrutiny on TikTok and ByteDance — a growing number of US users and influencers are now eagerly touting Lemon8, with its focus on photos and highly curated, informational or “aspirational” content.

    “We have to talk about TikTok’s new sister app,” a creator said in one such video.

    “I’ve seen a lot of bigger content creators that I love on it and promoting it on their Instagram stories, so I thought, ‘okay, it’s my time to hop on this bandwagon,’” said Melanie Cruz, who got her start creating content as a YouTube vlogger in high school around 2018. “I like that it’s something simple, it’s nothing too in your face … it’s not overwhelming.”

    Lemon8 has been downloaded just over one million times in the United States since it became available on US app stores in February, and had around half a million daily active US users last month, according to intelligence platform Apptopia.

    The early traction for Lemon8 hints at the whack-a-mole challenge lawmakers could face in reining in TikTok and other social media platforms. It also carries some hints of TikTok’s own rise, which was reportedly fueled in part by ByteDance spending heavily to advertise the service on rival platforms Facebook and Snapchat. This time, however, the best place to promote the next TikTok may be on TikTok itself.

    The New York Times reported last month that ByteDance had begun early marketing efforts for Lemon8 that included working with influencers. Now, some creators featured on Lemon8’s “for you” feed appear to be disclosing their work with the company using the hashtag #Lemon8Partner in their captions.

    A ByteDance company source said that Lemon8 is still in its early days and testing how to work with creators. They said ByteDance has not launched any formal marketing efforts for Lemon8, but in some cases has made deals to pay creators to post on the platform. However, they denied rumors that ByteDance had paid creators to promote the new app on TikTok.

    ByteDance has also recently listed open jobs for Lemon8 creator partnerships roles, according to postings viewed by CNN. “Lemon8 is a social media platform committed to building a diverse and inclusive community where people can discover new content and creators every day,” the job postings read.

    Lemon8’s photo-heavy focus marks a stark shift away from most of the major social apps that, following TikTok’s lead, have gone all-in on endlessly scrollable short-form videos in recent years.

    Lemon8’s homepage is a “for you” feed where users can scroll through content, similar to TikTok, but instead of videos, the feed is two columns of still images. When you click through to a post, it might be a single photo or a carousel of images. It’s also possible to post videos on the app, but they’re less popular.

    The app is heavily centered on beauty and lifestyle content — the “for you” page can be sorted into six categories including fashion, home and travel. Many of the posts feature lengthy captions, and users can also edit images to include text overlays. On top of similarities to Instagram and Pinterest, Lemon8 looks nearly identical to the Chinese app Xiaohongshu.

    Still, the app lacks some standard social platform features such as messaging and the option to tag other users in posts.

    A recent scroll through Lemon8’s “for you” page showed before-and-after photos of a botox treatment, a “no restrictions” day-long eating plan, book recommendations, black tie wedding attire tips and “10 recent girly Amazon buys I do NOT regret.”

    “It seems like people love it or hate it,” Madison Bravenec, a health coach and content creator, said of the app’s focus on aesthetics. But she added that the app’s targeted focus on certain types of content has made it easier to find a community that’s interested in the wellness content she likes to create, whereas the most popular posts on TikTok often have to appeal to a wider audience.

    Some creators say Lemon8 is filling a hole in the social media ecosystem that was left when Instagram moved to prioritize short-form video content in order to better compete with TikTok, frustrating many creators who joined the app for its original focus on photos.

    “We’re not videographers, we’re not the types of people who would like to change the ways we create content and communicate with others just because a platform is prioritizing one deliverable over the other,” said Can Ahtam, a professional photographer who joined Instagram more than a decade ago. “So all of us did feel the impact of reach being lower with the photos we were sharing [on Instagram].”

    Ahtam added: “If we were to compare them side-by-side right now, Lemon8 would have the upper hand in photos being shared.”

    Lemon8’s userbase remains a far cry from the 150 million users TikTok says it has in the United States.

    Still, in videos reviewing Lemon8, some creators have pondered whether the app could ultimately function as a replacement if TikTok were to get banned in the United States, preserving the content recommendation algorithm that helped make TikTok one of the country’s most popular apps and launched the careers of countless influencers.

    But if TikTok were to go down, Lemon8 would likely go with it, according to James Lewis, director of the strategic technologies program at the Center for Strategic and International Studies.

    “The concern is still the same, which is that ByteDance is a Chinese company subject to Chinese law,” Lewis said. “If it collects [users’ personal] information, then you’ve got the same problem.”

    TikTok, for its part, has said that its app does not pose a risk to US users, and that the Chinese government has never asked for US user data.

    The practical ramifications for creators of a TikTok (and, perhaps by extension, Lemon8) ban — if one were enacted — would still likely be months away, if not more. Lewis said he doesn’t expect any nationwide legislation to be passed before the end of this year, and it would almost certainly face legal challenges that could drag out its implementation if it did.

    By launching a new app even with TikTok in the spotlight, “ByteDance clearly doesn’t feel like they’re at risk,” Lewis said. And many creators say they’re not necessarily worried either.

    Even if TikTok and Lemon8 were banned, Cruz said, “I already have a following on all the other platforms.”


  • Meta’s business groups cut in latest round of layoffs | CNN Business




    New York CNN —

    Facebook-parent Meta on Wednesday began cutting employees in its business groups as part of a previously announced round of layoffs, according to social media posts from impacted workers.

    Meta employees in operations, project management, marketing, policy, communications and risk analytics announced on LinkedIn Wednesday morning that they had been laid off.

    The company declined to confirm the reductions were underway, but a Meta spokesperson pointed CNN to the March blog post from CEO Mark Zuckerberg announcing that the company would cut 10,000 employees this year, and that affected members of the business groups would be notified this month.

    Zuckerberg previously said the business groups would be the third and final major round of those layoffs. Laid-off members of Meta’s technology and recruiting teams were notified in the past two months. Some smaller reductions may continue through the end of 2023, Zuckerberg said in March.

    The 10,000 job reductions mark the second significant wave of layoffs at Meta in recent months. The company said in November that it was eliminating approximately 13% of its workforce, or 11,000 jobs, in the single largest round of cuts in its history.

    In September, Meta reported a headcount of 87,314, per a securities filing. With the 11,000 job cuts announced in November and the 10,000 announced in March, Meta’s headcount will fall to around 66,000 — a total reduction of about 25% — assuming no additional hiring.
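
    The arithmetic behind that estimate is straightforward; a quick sketch using the figures from the paragraph above:

```python
headcount_sept = 87_314           # headcount reported in September filing
cuts = 11_000 + 10_000            # November and March layoff rounds
remaining = headcount_sept - cuts
reduction_pct = cuts / headcount_sept * 100

print(remaining)                  # 66,314, i.e. "around 66,000"
print(round(reduction_pct))       # roughly a quarter of the workforce
```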

    Meta has said the layoffs are part of its “year of efficiency,” as the company attempts to recover from repeated revenue declines, heightened competition, concerns about user growth and big losses in its Reality Labs division amid its pivot to building the so-called metaverse. Zuckerberg has also taken responsibility for over-hiring earlier in the pandemic, when there was strong demand for the company’s products and online advertising, which dropped off somewhat once the world reopened.

    The turnaround strategy is showing early signs of success. Meta’s stock jumped last month after the company posted a 3% year-over-year revenue increase for the first three months of 2023, reversing a trend of three consecutive quarters of revenue declines. Still, profits declined by nearly a quarter compared to the same period in the prior year, and price per advertisement — an indicator of the health of the company’s core digital ad business — also decreased by 17% from the year prior.

    Zuckerberg said on an earnings call with analysts last month that when Meta started its “efficiency work” late last year, “our business wasn’t performing as well as I wanted, but now we’re increasingly doing this work from a position of strength.”

    But left in its wake are the thousands of employees affected by layoffs.

    “Finding work you care about and believe in and the right people to be in the trenches with is an incredible dream; it also makes moments like this incredibly difficult,” one employee affected by Wednesday’s layoffs said in a LinkedIn post. The employee called the cuts a “shock to the system.”


  • Elon Musk is the gift that keeps on giving to Mark Zuckerberg | CNN Business




    New York (CNN) —

    At the start of last year, Meta CEO Mark Zuckerberg was in the hot seat.

    Revelations from hundreds of internal company documents, known as the Facebook Papers, had drawn sharp criticism from lawmakers, users and civil society groups in late 2021 and forced company executives to appear before Congress. Zuckerberg’s plan to rebrand Facebook as Meta and pivot to the so-called metaverse was met with broad skepticism. And the company’s core ad business was under significant pressure from privacy changes made by Apple.

    But then, the attention of lawmakers, media and the tech world writ large abruptly shifted to another tech billionaire: Elon Musk.

    Musk early last year criticized Twitter, then nearly joined its board, then agreed to buy the company before launching a monthslong and ultimately unsuccessful fight to get out of the deal. The saga, which only continued after Musk completed the deal and pushed through numerous controversial changes, often dominated news cycles. In the process, it seemed to make Twitter’s rivals look better managed and draw away critical attention that might otherwise have been focused on other tech giants, including Meta, as they went through painful layoffs and suffered declines on Wall Street.

    This week, however, Zuckerberg notched his biggest win from Musk yet. After years of trying and failing to capture Twitter’s audience with copycat features, Zuckerberg is now capitalizing on Twitter’s struggles with a new app called Threads. Meta’s Twitter clone launched this week to unprecedented success, despite Meta’s history of privacy violations and enabling election meddling, not to mention longstanding concerns that the company and Zuckerberg wield too much power over the social media market.

    The app’s overnight success was a direct result of the chaos under Musk’s leadership of Twitter since last October. During that time, he has managed to anger many of the platform’s users and advertisers with his erratic statements, mass layoffs and significant changes to Twitter’s policies. While Twitter users have lamented what Musk’s ownership has meant for the platform, it may be the best thing that could have happened for Zuckerberg.

    “Musk has done one thing after another to piss off his own user base,” said Herbert Hovenkamp, a professor at the University of Pennsylvania’s Carey Law School.

    Some early Threads users even commented on the strange nature of the situation — that they were willing to join a social network run by one billionaire whose company has faced intense public criticism simply because they were so eager to get away from another.

    “It boggles the mind,” one user posted to Threads. “I boycotted Facebook years ago and when I heard about this I joined immediately.”

    “Never used [Facebook] nor [Instagram],” another user said, adding that they had to join Instagram for the first time to gain access to Threads. “Last thing I would have EVER expected was to use any platform of Zuckerberg’s.”

    And yet, by Friday, Zuckerberg said Threads had reached 70 million signups — a user base nearly a third the size of Twitter’s, amassed in fewer than two days. The platform could eventually help knock out one of Facebook’s chief rivals and give a boost to Meta’s struggling ad business.

    If Musk is a boon to Zuckerberg’s fortunes, he’s an unlikely one. Zuckerberg and Musk have often been at odds over the years.

    In 2018, in the wake of Facebook’s Cambridge Analytica scandal, Musk said he had deleted the Facebook pages for his companies Tesla and SpaceX because the platform “gives me the willies.” And later that year, he also deleted his Instagram account.

    More recently, Musk has claimed that Instagram “makes people depressed” and appeared to imply that Meta was complicit in the January 6, 2021, attack on the US Capitol.

    Zuckerberg has also thrown jabs at Musk, including after a SpaceX explosion accidentally blew up a satellite that was being used by Facebook, and in a critique of his stance on artificial intelligence during a 2017 Facebook Live broadcast.

    But earlier this year, Zuckerberg also complimented Musk’s leadership of Twitter. In a podcast interview last month, Zuckerberg said that “Elon led a push early on to make Twitter a lot leaner … I think that those were generally good changes.”

    In some ways, Musk’s moves at Twitter may have given Zuckerberg and Meta — as well as other tech companies — cover to take similar actions without as much criticism. Meta announced it would eliminate more than 20,000 employees over two rounds of layoffs, marking the largest cuts in its history. But by handling the cuts professionally and providing more robust severance, Meta came off looking responsible compared with Twitter and its chaotic mass layoffs.

    After Musk restored the account of former President Donald Trump following a two-year suspension that began after the January 6 attack, Twitter faced criticism from civil society groups who called on advertisers to boycott the platform. But Meta, along with YouTube, followed suit several months later (although those platforms cited their own risk analyses, rather than Musk’s leadership, in explaining their decisions).

    The distraction and chaos of Musk’s Twitter takeover could hardly have come at a better time for Zuckerberg and Meta.

    The social media giant’s business had a brutal year — posting its first-ever quarterly revenue decline as a public company during the June quarter, and then again in each of the two remaining quarters of the year, as it struggled with a weak online advertising market while pouring billions into its plan for the metaverse. The company lost more than $600 billion in market value during 2022.

    Now, the launch of Threads marks a huge new opportunity for Meta and Zuckerberg. Threads could be a way of getting social media users to spend even more time on Meta’s apps, especially as Facebook increasingly struggles with the perception of being a has-been platform that’s less attractive to younger users.

    Zuckerberg said on Wednesday that he hopes to eventually have more than one billion users on Threads, far more than the 238 million active users on Twitter prior to Musk’s takeover.

    Although there are no ads on the platform yet, Threads could also ultimately supplement Meta’s core advertising business. Instagram head Adam Mosseri, who oversaw the Threads launch, told The Verge in an interview about the new platform this week that, “if we make something that lots of people love and keep using, we will, I’m sure, monetize it” through advertising.

    For Musk, losing Twitter users, or having its future growth hamstrung, thanks to Threads, could mean further harm to the $44 billion investment he made to buy the social media platform — and, perhaps more importantly, to his reputation as a genius with a knack for turning around troubled companies.

    Musk appears to be trying to push back against Zuckerberg’s turn of fortune. On Wednesday, a lawyer for Musk sent a letter to Meta threatening to sue the company over the rival app, accusing it of trade secret theft through the hiring of former Twitter employees. (Meta denied the charge.)

    The Twitter-Threads battle has raised the stakes for another fight: a cage fight that Musk and Zuckerberg have spent the past several weeks planning. Zuckerberg, a regular practitioner of Brazilian jiu jitsu, appears to have the upper hand.

    But whether or not the fight ends up going forward, Zuckerberg seems to have already won.


  • Arkansas sues TikTok, ByteDance and Meta over mental health claims | CNN Business




    Washington (CNN) —

    The state of Arkansas has sued TikTok, its parent ByteDance, and Facebook-parent Meta over claims the companies’ products are harmful to users, in the latest effort by public officials to take social media companies to court over mental-health and privacy concerns.

    All three lawsuits claim the companies have violated the state’s Deceptive Trade Practices Act, and seek millions, if not billions, in potential fines. The suits were filed in Arkansas state court.

    The complaints come amid mounting pressure in Washington on TikTok for its ties to China and as states have grown more aggressive in suing tech companies broadly, particularly on mental health claims. Suits by school districts or county officials in California, Florida, New Jersey, Pennsylvania and Washington state have targeted multiple social media platforms over addiction allegations.

    The suit against Meta particularly zeroes in on the company’s impact on young users’ mental health, alleging that features such as like buttons, photo tagging and an unending news feed are addictive and “intended to manipulate users’ brains by triggering the release of dopamine.”

    In a statement, Meta’s global head of safety, Antigone Davis, said the company has invested in “technology that finds and removes content related to suicide, self-injury or eating disorders before anyone reports it to us.”

    “We want to reassure every parent that we have their interests at heart in the work we’re doing to provide teens with safe, supportive experiences online,” Davis said in the statement. “These are complex issues, but we will continue working with parents, experts and regulators such as the state attorneys general to develop new tools, features and policies that meet the needs of teens and their families.”

    The remaining two suits, both naming ByteDance and TikTok as defendants, target TikTok’s alleged shortcomings in content moderation and also reiterate claims about TikTok’s alleged threat to US national security.

    The first suit alleges that TikTok has misled users by identifying its app as suitable for teens on app stores because of the “abundant” presence of content showing profanity, substance use and nudity. The suit further alleges that TikTok’s Chinese sister app, Douyin, does not make such content available within China.

    “TikTok poses known risks to young teens that TikTok’s parent company itself finds inappropriate for Chinese users who are the same age,” the complaint said. “Yet TikTok pushes salacious and other mature content to all young U.S. users age 13 and up.”

    The second suit against ByteDance and TikTok accuses the companies of making misleading statements about the reach of Chinese government officials and their purported inability to access TikTok user data. TikTok has migrated US user data to servers operated by the American tech giant Oracle and has established organizational controls intended to prevent unauthorized data access. But, the suit alleges, that does not mean the data is necessarily protected.

    “Neither TikTok’s data storage practices, nor its data security practices, negate the applicability of Chinese law to that data or to the individuals and entities who are subject to Chinese law and have access to that data, or the risk of access by the Chinese Government or Communist Party,” the complaint said.

    The suit also claims TikTok has misrepresented its approach to privacy and security by omitting the potential risks of Chinese government access from its privacy policies and in its statements to app store operators.

    TikTok and ByteDance didn’t immediately respond to a request for comment.

    In a statement announcing the lawsuits, Arkansas Gov. Sarah Huckabee Sanders said the suits reflect a “failed status quo.”

    “We have to hold Big Tech companies accountable for pushing addictive platforms on our kids and exposing them to a world of inappropriate, damaging content,” Sanders said. “These actions are a long time coming. We have watched over the past decade as one social media company after another has exploited our kids for profit and escaped government oversight.”


  • Meta is giving parents more visibility into who their teens are messaging on social media | CNN Business




    New York (CNN) —

    Meta is adding new safeguards and monitoring tools for teens across its social platforms: parental controls on Messenger, suggestions for teens to step away from Facebook after 20 minutes, and nudges urging young night-owl Instagrammers to stop scrolling.

    The features announced Tuesday come as Meta and other social media platforms face heightened pressure from lawmakers over the impact that their platforms have on younger users, who can be just 13 when they sign up for Meta’s apps.

    Messenger, Meta’s instant-messaging app, is adding parental supervision tools for the first time that are similar to those that exist on Instagram already: Parents and guardians can see how much time their teens spend on the chat tool, view and receive updates on their contacts list, and get notified if their teen reports someone.

    Another new feature is the ability for parents and teens to have discussions directly through notifications if their accounts are synced up.

    “We heard from parents and teens about the value they’re seeing from how a two-way dialogue can foster and encourage discussions,” Diana Williams, who oversees product changes for youth and families at Meta, told CNN in an interview.

    On Facebook, Meta will start to nudge teen users to take time away from the app after 20 minutes.

    Instagram will introduce a new nudge that suggests teens close Instagram if they’re scrolling Reels videos for too long during nighttime hours. The effort builds on existing Instagram features like Quiet Mode, which temporarily holds notifications and lets others know you’re trying to focus.

    In addition, Instagram is testing a feature that limits how people interact with non-followers. Users must now send an invite to connect with someone who doesn’t follow them, and they cannot send photos, videos or voice messages, or make calls, until the recipient accepts the request. The feature aims to cut down on unwanted content from strangers, particularly for women, the company said.

    It’s the latest in a series of new tools and guardrails for teens from Meta, following the release of leaked internal documents that found Instagram can negatively impact the mental health of its young users. Instagram, for example, has since introduced an educational hub for parents with resources, tips and articles from experts on user safety.

    The company said it’s also taking a “stricter approach” to the content it recommends to teens and will actively nudge them toward different topics, such as architecture and travel destinations, if they’ve been dwelling on any type of content for too long.

    Few changes have been made to Facebook and Messenger until now. Facebook does, however, have a Safety Center that provides supervision tools and resources, such as articles and advice from leading experts.


  • Meta takes aim at Twitter with new Threads app | CNN Business




    London (CNN) —

    The rivalry between Mark Zuckerberg and Elon Musk has just kicked up a notch.

    Zuckerberg’s Meta, which owns Facebook and Instagram, has teased a new app that is set to take on Twitter by offering a rival space for real-time conversations online.

    The app is called Threads and it is expected to go live Thursday, according to a listing in the App Store. The app appears to have many similarities to Twitter — the App Store description emphasizes conversations, as well as the potential to build a following and connect with like-minded people.

    “Threads is where communities come together to discuss everything from the topics you care about today to what’ll be trending tomorrow,” it reads.

    “Whatever it is you’re interested in, you can follow and connect directly with your favorite creators and others who love the same things — or build a loyal following of your own to share your ideas, opinions and creativity with the world.”

    The move by Meta comes amid a fresh bout of turmoil at Twitter, which experienced an outage over the weekend, followed by an announcement that the site had imposed temporary limits on how many tweets its users are able to read while using the app.

    Musk, the platform’s billionaire owner, said these restrictions had been applied “to address extreme levels of data scraping and system manipulation.”

    Commenting on the launch of Threads Monday, Musk tweeted: “Thank goodness they’re so sanely run,” parroting reported comments by Meta executives that appeared to take a jab at Musk’s erratic behavior.

    Since taking Twitter private in October, Musk has turned the social media platform on its head, alienating advertisers and some of its highest-profile users.

    He is now looking for ways to return the platform to growth. Twitter announced Monday that users would soon need to pay for TweetDeck, a tool that allows people to organize and easily monitor the accounts they follow.

    Twitter is also attempting to encroach on Meta’s domain.

    In May, Twitter added encrypted messaging and said calls would follow, developments that could allow the platform to compete with Facebook Messenger and WhatsApp, also owned by Meta.

    Musk and Zuckerberg’s rivalry could soon extend beyond business and into the ring. Last month, the two men discussed the possibility of a cage fight, with the Las Vegas arena that hosts the Ultimate Fighting Championship seemingly the favorite location for the match.


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business




    New York (CNN) —

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US Presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offer the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some who had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled. That is in addition to its civic integrity policy, which prohibits misleading information about electoral processes, and its general misinformation policy, which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • Meta, Microsoft, hundreds more own trademarks to new Twitter name | CNN Business





    (Reuters) —

    Billionaire Elon Musk’s decision to rebrand Twitter as X could be complicated legally: companies including Meta and Microsoft already have intellectual property rights to the same letter.

    X is so widely used and cited in trademarks that it is a candidate for legal challenges – and the company formerly known as Twitter could face its own issues defending its X brand in the future.

    “There’s a 100% chance that Twitter is going to get sued over this by somebody,” said trademark attorney Josh Gerben, who counted nearly 900 active U.S. trademark registrations that already cover the letter X in a wide range of industries.

    Musk renamed social media network Twitter as X on Monday and unveiled a new logo for the social media platform, a stylized black-and-white version of the letter.

    Owners of trademarks – which protect things like brand names, logos and slogans that identify sources of goods – can claim infringement if other branding would cause consumer confusion. Remedies range from monetary damages to blocking use.

    Microsoft since 2003 has owned an X trademark related to communications about its Xbox video-game system. Meta Platforms – whose Threads platform is a new Twitter rival – owns a federal trademark registered in 2019 covering a blue-and-white letter “X” for fields including software and social media.

    Meta and Microsoft likely would not sue unless they feel Twitter’s X encroaches on the brand equity they have built in the letter, Gerben said.

    The three companies did not respond to requests for comment.

    Meta itself drew intellectual property challenges when it changed its name from Facebook. It faces trademark lawsuits filed last year by investment firm Metacapital and virtual-reality company MetaX, and settled another over its new infinity-symbol logo.

    And if Musk succeeds in changing the name, others still could claim ‘X’ for themselves.

    “Given the difficulty in protecting a single letter, especially one as popular commercially as ‘X’, Twitter’s protection is likely to be confined to very similar graphics to their X logo,” said Douglas Masters, a trademark attorney at law firm Loeb & Loeb.

    “The logo does not have much distinctive about it, so the protection will be very narrow.”

    Insider reported earlier that Meta had an X trademark, and lawyer Ed Timberlake tweeted that Microsoft had one as well.
