ReportWire

Tag: Facebook

  • Meta stock jumps after company reports first revenue growth in nearly a year | CNN Business



    New York (CNN) —

    Facebook-parent Meta on Wednesday reported that it grew sales by 3% during the first three months of the year, reversing a trend of three consecutive quarters of revenue declines and far exceeding Wall Street analysts’ expectations.

    Meta shares jumped as much as 12% in after-hours trading following the report, continuing the company’s strong trajectory since CEO Mark Zuckerberg announced that 2023 would be a “year of efficiency.”

    Another bright spot: user growth was relatively strong compared to recent quarters. The number of monthly active people on Meta’s family of apps grew 5% from the prior year to more than 3.8 billion and Facebook daily active users increased 4% to more than 2 billion.

    “We had a good quarter and our community continues to grow,” Zuckerberg said in a statement Wednesday. “We’re also becoming more efficient so we can build better products faster and put ourselves in a stronger position to deliver our long term vision.”

    But Meta has a long hill to climb.

    The company also reported that profits declined by nearly a quarter from the same period a year earlier, to $5.7 billion. Price per advertisement, an indicator of the health of the company’s core digital ad business, also decreased 17% year over year.

    Meta has been in the midst of a massive restructuring as it attempts to recover from a perfect storm: heightened competition, lingering recession fears that have shrunk ad budgets and a multibillion-dollar effort to build a future version of the internet it calls the metaverse. Meta said in November it would eliminate 11,000 jobs, the single largest round of cuts in its history. And in March, Zuckerberg announced Meta would lay off another 10,000 employees. All told, the cuts will shrink Meta’s workforce by a quarter.

    Meta took a hit of more than $1 billion related to the restructuring in the March quarter, and said it will realize additional charges of around $500 million related to 2023 layoffs by the end of the year.

    Zuckerberg said on a call with analysts Wednesday that when Meta started its “efficiency work” late last year, “our business wasn’t performing as well as I wanted, but now we’re increasingly doing this work from a position of strength.”

    The company said it expects revenue to grow again in the current quarter compared to the prior year. And it slightly lowered its expectations for full-year expenses, potentially buoying investor optimism.

    “The year of efficiency is off to a stronger than expected start for Meta,” Insider Intelligence principal analyst Debra Aho Williamson said in a statement. But she added that the company “can’t afford to sit still in this environment.”

    Like other tech companies, Meta has recently read investor cues and taken to playing up its focus on artificial intelligence rather than the metaverse. The shift comes as Meta contends with the popularity of AI tools from tech firms like Microsoft and OpenAI.

    In his statement with the results Wednesday, Zuckerberg said: “Our AI work is driving good results across our apps and business.” He added in the call that the company’s AI work includes efforts to build AI chat experiences in WhatsApp and Messenger, as well as visual creation tools for posts on Facebook and Instagram and advertisements.


  • Tax prep companies shared private taxpayer data with Google and Meta for years, congressional probe finds | CNN Business




    (CNN) —

    Some of America’s largest tax-prep companies have spent years sharing Americans’ sensitive financial data with tech titans including Meta and Google in a potential violation of federal law — data that in some cases was misused for targeted advertising, according to a seven-month congressional investigation.

    The report highlights what legal experts described to CNN as a “five-alarm fire” for taxpayer privacy that could lead to government and private lawsuits, criminal penalties or perhaps even a “mortal blow” for some industry giants involved in the probe including TaxSlayer, H&R Block and TaxAct.

    Using visitor tracking technology embedded on their websites, the three tax-prep companies allegedly sent tens of millions of Americans’ personal information to the tech industry without consent or appropriate disclosures, according to the congressional report reviewed by CNN.

    Beyond ordinary personal data such as people’s names, phone numbers and email addresses, the list of information shared also included taxpayer data — details about people’s filing status, adjusted gross income, the size of their tax refunds and even information about the buttons and text fields they clicked on while filling out their tax forms, which could reveal what tax breaks they may have claimed or which government programs they use, according to the report.

    The report, which drew on congressional interviews and written testimony from Meta, Google and the tax-prep companies, also found that every taxpayer who used TaxAct’s IRS Free File service while the tracking was enabled would have had their information shared with the tech companies. Some of the tax-prep companies still do not know whether the data they shared continues to be held by the tech platforms, the report said.

    “On a scale from one to 10, this is a 15,” said David Vladeck, a law professor at Georgetown University and a former consumer protection chief at the Federal Trade Commission, the country’s top privacy watchdog. “This is as great as any privacy breach that I’ve seen other than exploiting kids. This is a five-alarm fire, if what we know about this so far is true.”

    It is also an example, Vladeck said, of why the United States needs federal legislation guaranteeing every American a basic right to data privacy — an issue that has languished in Congress for years despite electronic data becoming an ever-larger part of the global economy.

    The congressional findings represent the latest claims of wrongdoing to hit the embattled tax-prep industry after a report last year by the investigative journalism outlet The Markup highlighted the tracking practice.

    Wednesday’s bombshell report adds to those earlier revelations by identifying a previously unreported category of data that was allegedly being collected and shared: the webpage titles in online tax software that can reveal what tax forms users have accessed, said an aide to Democratic Sen. Elizabeth Warren, who helped lead the congressional probe. For example, taxpayers who entered information about their college savings contributions or rental income may have done so on webpages bearing titles reflecting that information, which would then have been shared with the tech companies, the aide said.

    During the probe, Meta told investigators it used the taxpayer data it received to target third-party ads to users of its platform and to train its artificial intelligence algorithms, the report said. The Warren aide told CNN it was unclear whether Meta knew it was inappropriately using taxpayer data at the time. A Meta spokesperson said the company instructs its partners not to use its tools to share sensitive information and that Meta’s systems are “designed to filter out potentially sensitive data it is able to detect.”

    The technology behind the data collection, known as a tracking pixel, is commonly used across the entire internet. A small snippet of code that website owners can insert onto their sites, tracking pixels gather information that can help companies, including but not limited to Meta and Google, understand the behavior or interests of website visitors.
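A tracking pixel works by encoding page data into the URL of a tiny image request, which the analytics server logs when the visitor's browser fetches it. The sketch below illustrates that mechanism only; the hostname, file name and parameter names are made up for illustration and do not reflect the actual parameters used by Meta's or Google's tools.

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_pixel_url(base: str, page_title: str, event: str) -> str:
    """Build the GET request a tracking pixel would fire.

    A pixel is typically a 1x1 image tag embedded in a page; the data
    it "collects" is simply packed into the image URL's query string,
    which the tracker's server logs when the browser loads the image.
    Parameter names here ("dl", "ev") are illustrative placeholders.
    """
    return base + "?" + urlencode({"dl": page_title, "ev": event})

# A page title from a tax form could end up in the request like this:
url = build_pixel_url("https://tracker.example/px.gif",
                      "Rental Income - Schedule E", "PageView")

# The server receiving the request can read the page title back out:
params = parse_qs(urlparse(url).query)
```

This is why the report's finding about webpage titles matters: the pixel does not need special access to tax data if the page title itself describes the form the user is filling out.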

    Because of the tracking technology used by TaxAct, TaxSlayer and H&R Block, “every single taxpayer who used their websites to file their taxes could have had at least some of their data shared,” the report said.

    The tax-prep companies at the center of the investigation told lawmakers the collected data had been scrambled to help protect privacy, according to the report. But the report also said some of the tax-prep firms themselves were not fully aware of how much information was being exposed to the tech platforms, and the report cited past FTC research concluding that even “anonymized” data can be easily reverse-engineered to identify a person.

    The pixels’ use in a taxpayer context resulted in the “reckless” sharing of legally protected data that could put taxpayers at risk, according to the report by Warren and her Democratic colleagues Sens. Ron Wyden; Richard Blumenthal; Tammy Duckworth; and Sheldon Whitehouse; Sen. Bernie Sanders, an independent who caucuses with Democrats; and Democratic Rep. Katie Porter.

    The FTC, the Internal Revenue Service, the Justice Department and the Treasury Inspector General for Tax Administration “should fully investigate this matter and prosecute any company or individuals who violated the law,” the lawmakers wrote in a letter dated Tuesday to the agencies and obtained by CNN. The FTC and DOJ declined to comment; the IRS and TIGTA didn’t immediately respond to a request for comment.

    In a statement, H&R Block said it takes client privacy “very seriously, and we have taken steps to prevent the sharing of information via pixels.” Wednesday’s report said H&R Block had testified to using the tracking technology for “at least a couple of years.”

    TaxAct and TaxSlayer didn’t immediately respond to a request for comment. The report said TaxAct had been using Meta’s tools since 2018 and Google’s since about 2014, while TaxSlayer began using Meta’s tools in 2018 and Google’s in 2011. The investigation found that all three tax-prep companies had discontinued their use of Meta’s pixel after The Markup’s report last November.

    Intuit, the maker of TurboTax, received an initial inquiry letter from the lawmakers in December but was not a focus of Wednesday’s report because the company did not use tracking pixels to the same extent, the investigation found.

    Tax preparation firms have faced mounting scrutiny in recent years amid reports that many have turned to data harvesting as a business model and that the largest among them have spent millions lobbying against legislation that could make it easier for Americans to file their tax returns. An IRS report this year found that 72% of Americans would be interested in using a free, electronic tax filing service if it were provided by the agency as an alternative to private online filing services. The IRS plans to launch a pilot version of that service to a limited number of taxpayers in the 2024 tax filing season.

    Google told CNN it prohibits business customers from uploading to its platform sensitive data that could be traced back to a person.

    “We have strict policies and technical features that prohibit Google Analytics customers from collecting data that could be used to identify an individual,” a Google spokesperson said. “Site owners — not Google — are in control of what information they collect and must inform their users of how it will be used. Additionally, Google has strict policies against advertising to people based on sensitive information.”

    Wednesday’s report focuses more heavily on Meta’s use of taxpayer data, the Warren aide told CNN, because Google did not appear to have used the information for its own commercial purposes as overtly as Meta and the investigation was unable to fully determine whether Google may have used the data for other applications.

    The allegations could nevertheless create extensive legal risk for both the tech companies as well as the tax-preparation firms, according to tax and privacy legal experts.

    The tax-prep companies could face billions in fines under US tax law if the federal government decides to sue, said Steven Rosenthal, a senior fellow at the Urban-Brookings Tax Policy Center. In addition, the US government could seek criminal penalties.

    “The scope of ‘taxpayer information’ is broad by design,” Rosenthal said, adding that tax-prep companies can be sued for “knowingly” or “recklessly” leaking that information. “The companies shouldn’t be sharing it in a way that some third party could obtain it.”

    Theoretically, he said, the tax code also affords individual taxpayers the right to file private lawsuits against the tax-prep companies. But most if not all of those firms require customers to submit to mandatory arbitration that could realistically make bringing a private claim more challenging, said the Warren aide.

    Apart from the tax code, both the tech giants as well as the tax-prep firms could also face civil liability from the FTC — which can police data breaches and hold companies accountable for their commitments to user privacy — and potentially from state governments that have their own privacy laws on the books, said Vladeck.

    Depending on the strength of the allegations, the tax-prep companies could quickly be forced into a binding settlement, said a former FTC official who requested anonymity in order to speak more freely.

    “If the facts are really strong, these companies would probably rather settle than go to court. This is very embarrassing,” the former official said. “It could be a mortal blow to the tax prep companies.”


  • Welcome to the era of viral AI generated ‘news’ images | CNN Business



    New York (CNN) —

    Pope Francis wearing a massive, white puffer coat. Elon Musk walking hand-in-hand with rival GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.

    None of these things actually happened, but AI-generated images depicting them did go viral online over the past week.

    The images ranged from obviously fake to, in some cases, compellingly real, and they fooled some social media users. Model and TV personality Chrissy Teigen, for example, tweeted that she thought the pope’s puffer coat was real, saying, “didn’t give it a second thought. no way am I surviving the future of technology.” The images also sparked a slew of headlines, as news organizations rushed to debunk the false images, especially those of Trump, who was ultimately indicted by a Manhattan grand jury on Thursday but has not been arrested.

    The situation reflects a new online reality: a crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create realistic images, as well as audio and video. And such images are likely to pop up with increasing frequency on social media.

    While these AI tools may enable new means of expressing creativity, the spread of computer-generated media also threatens to further pollute the information ecosystem. That risks adding to the challenges for users, news organizations and social media platforms to vet what’s real, after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to further drive divided internet users apart.

    “I worry that it will sort of get to a point where there will be so much fake, highly realistic content online that most people will just go with their tribal instincts as a guide to what they think is real, more than actually informed opinions based on verified evidence,” said Henry Ajder, a synthetic media expert who works as an advisor to companies and government agencies, including Meta Reality Labs’ European Advisory Council.

    Images, compared to the AI-generated text that has also recently proliferated thanks to tools like ChatGPT, can be especially powerful in provoking emotions when people view them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That can make it harder for people to slow down and evaluate whether what they’re looking at is real or fake.

    What’s more, coordinated bad actors could eventually attempt to create fake content in bulk — or suggest that real content is computer-generated — in order to confuse internet users and provoke certain behaviors.

    “The paranoia of an impending Trump … potential arrest created a really useful case study in understanding what the potential implications are, and I think we’re very lucky that things did not go south,” said Ben Decker, CEO of threat intelligence group Memetica. “Because if more people had had that idea en masse, in a coordinated fashion, I think there’s a universe where we could start to see the online to offline effects.”

    Computer-generated image technology has improved rapidly in recent years, from the photoshopped image of a shark swimming through a flooded highway that has been repeatedly shared during natural disasters to the websites that four years ago began churning out mostly unconvincing fake photos of non-existent people.

    Many of the recent viral AI-generated images were created by a tool called Midjourney, a platform less than a year old that allows users to create images from short text prompts. On its website, Midjourney describes itself as “a small self-funded team” with just 11 full-time staff members.

    A cursory glance at a Facebook page popular among Midjourney users reveals AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit and many creepy animal creations. And that’s just from the past few days.

    Midjourney has emerged as a popular tool for users to create AI-generated images.

    The latest version of Midjourney is only available to a select number of paid users, Midjourney CEO David Holz told CNN in an email Friday. Midjourney this week paused access to the free trial of its earlier versions due to “extraordinary demand and trial abuse,” according to a Discord post from Holz, but he told CNN it was unrelated to the viral images. The creator of the Trump arrest images also claimed he was banned from the site.

    The rules page on the company’s Discord site asks users: “Don’t use our tools to make images that could inflame, upset, or cause drama. That includes gore and adult content.”

    “Moderation is hard and we’ll be shipping improved systems soon,” Holz told CNN. “We’re taking lots of feedback and ideas from experts and the community and are trying to be really thoughtful.”

    In most cases, the creators of the recent viral images don’t appear to have been acting malevolently. The Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as his fabrications, even if other social media users weren’t as discerning.

    There are efforts by platforms, AI technology companies and industry groups to improve the transparency around when a piece of content is generated by a computer.

    Platforms including Meta’s Facebook and Instagram, Twitter and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. But as use of AI-generated technologies grows, even such policies could threaten to undermine user trust. If, for example, a fake image accidentally slipped through a platform’s detection system, “it could give people false confidence,” Ajder said. “They’ll say, ‘there’s a detection system that says it’s real, so it must be real.’”

    Work is also underway on technical solutions that would, for example, watermark an AI-generated image or include a transparent label in an image’s metadata, so anyone viewing it across the internet would know it was created by a computer. The Partnership on AI has developed a set of standard, responsible practices for synthetic media along with partners like ChatGPT-creator OpenAI, TikTok, Adobe, Bumble and the BBC, which includes recommendations such as how to disclose an image was AI-generated and how companies can share data around such images.
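One way to make such a label travel with the image is to write it into the file's own metadata. The sketch below shows the general idea using the PNG format's standard `tEXt` chunk; it is a minimal illustration under the assumption of a well-formed PNG, not the actual scheme used by the Partnership on AI, and the `ai-generated` keyword is invented for the example.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, 4-byte type,
    # payload, then a CRC-32 computed over type + payload.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def label_png(png: bytes, keyword: str, text: str) -> bytes:
    # Insert a tEXt metadata chunk immediately after the IHDR chunk.
    # tEXt payload is keyword, a NUL separator, then the value.
    chunk = make_chunk(b"tEXt",
                       keyword.encode("latin-1") + b"\x00"
                       + text.encode("latin-1"))
    # signature (8) + IHDR length/type (8) + 13-byte payload + CRC (4)
    ihdr_end = 8 + 8 + 13 + 4
    return png[:ihdr_end] + chunk + png[ihdr_end:]

def read_labels(png: bytes) -> dict:
    # Walk the chunk list and collect every tEXt keyword/value pair.
    labels, pos = {}, 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            labels[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + payload + CRC
    return labels

# Build a tiny valid PNG (1x1 grayscale pixel) to label.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
png = (PNG_SIG + make_chunk(b"IHDR", ihdr)
       + make_chunk(b"IDAT", idat) + make_chunk(b"IEND", b""))

labeled = label_png(png, "ai-generated", "true")
labels = read_labels(labeled)
```

The catch, as the article notes, is that metadata like this survives only as long as platforms preserve it; re-encoding or screenshotting the image strips the label, which is why cryptographic watermarks are also being explored.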

    “The idea is that these institutions are all committed to disclosure, consent and transparency,” Leibowicz said.

    A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, this week wrote an open letter calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” Still, it’s not clear whether any labs will take such a step. And as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved, Ajder said.

    “This new age of AI can’t be held in the hands of a few massive companies getting rich off of these tools, we need to democratize this technology,” he said. “At the same time, there are also very real and legitimate concerns of having a radical open approach where you just open source a tool or have very minimal restrictions on its use is going to lead to a massive scaling of harm … and I think legislation will probably play a role in reining in some of the more radically open models.”


  • With the rise of AI, social media platforms could face perfect storm of misinformation in 2024 | CNN Business



    New York (CNN) —

    Last month, a video posted to Twitter by Florida Gov. Ron DeSantis’ presidential campaign used images that appeared to be generated by artificial intelligence showing former President Donald Trump hugging Dr. Anthony Fauci. The images, which appeared designed to criticize Trump for not firing the nation’s top infectious disease specialist, were tricky to spot: they were shown alongside real images of the pair and with a text overlay saying, “real life Trump.”

    As the images began spreading, fact-checking organizations and sharp-eyed users quickly flagged them as fake. But Twitter, which has slashed much of its staff in recent months under new ownership, did not remove the video. Instead, it eventually added a community note — a contributor-led feature to highlight misinformation on the social media platform — to the post, alerting the site’s users that in the video “3 still shots showing Trump embracing Fauci are AI generated images.”

    Experts in digital information integrity say it’s just the start of AI-generated content being used ahead of the 2024 US presidential election in ways that could confuse or mislead voters.

    A new crop of AI tools offer the ability to generate compelling text and realistic images — and, increasingly, video and audio. Experts, and even some executives overseeing AI companies, say these tools risk spreading false information to mislead voters, including ahead of the 2024 US election.

    “The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

    Social media companies bear significant responsibility for addressing such risks, experts say, as the platforms where billions of people go for information and where bad actors often go to spread false claims. But they now face a perfect storm of factors that could make it harder than ever to keep up with the next wave of election misinformation.

    Several major social networks have pulled back on their enforcement of some election-related misinformation and undergone significant layoffs over the past six months, which in some cases hit election integrity, safety and responsible AI teams. Current and former US officials have also raised alarms that a federal judge’s decision earlier this month to limit how some US agencies communicate with social media companies could have a “chilling effect” on how the federal government and states address election-related disinformation. (On Friday, an appeals court temporarily blocked the order.)

    Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

    “I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

    The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

    The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

    Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily and cheaply create huge quantities of fake content.

    And given AI technology’s rapid improvement over the past year, fake images, text, audio and videos are likely to be even harder to discern by the time the US election rolls around next year.

    “We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

    The various forms of AI-generated content could be used together to make false information more believable — for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

    AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries incentivized to meddle in US elections. Massive foreign troll farms have been hired to attempt to influence previous elections in the United States and elsewhere, but “now, one person could be in charge of deploying thousands of thousands of generative AI bots that work,” to pump out content across social media to mislead voters, Mitchell, who previously worked at Google, said.

    OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

    Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some who had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. While the images were quickly debunked, their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

    A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

    Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

    Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

    TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled, in addition to its civic integrity policy which prohibits misleading information about electoral processes and its general misinformation policy which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

    YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

    “Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

    A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact checkers.

    TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

    Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

    Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform, and instead has leaned more heavily on its “Community Notes” feature which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

    Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated review to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

    But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated.

    Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

    One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information; it could also make it harder for them to trust real information about everything from voting to crisis situations.

    “We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question whether or not the content you’re seeing is real.”


  • Arkansas sues TikTok, ByteDance and Meta over mental health claims | CNN Business



    Washington
    CNN
     — 

    The state of Arkansas has sued TikTok, its parent ByteDance, and Facebook-parent Meta over claims the companies’ products are harmful to users, in the latest effort by public officials to take social media companies to court over mental-health and privacy concerns.

    All three lawsuits claim the companies have violated the state’s Deceptive Trade Practices Act, and seek millions, if not billions, in potential fines. The suits were filed in Arkansas state court.

    The complaints come amid mounting pressure in Washington on TikTok for its ties to China and as states have grown more aggressive in suing tech companies broadly, particularly on mental health claims. Suits by school districts or county officials in California, Florida, New Jersey, Pennsylvania and Washington state have targeted multiple social media platforms over addiction allegations.

    The suit against Meta zeroes in particularly on the company’s impact on young users’ mental health, alleging that features such as like buttons, photo tagging and an unending news feed are addictive and “intended to manipulate users’ brains by triggering the release of dopamine.”

    In a statement, Meta’s global head of safety, Antigone Davis, said the company has invested in “technology that finds and removes content related to suicide, self-injury or eating disorders before anyone reports it to us.”

    “We want to reassure every parent that we have their interests at heart in the work we’re doing to provide teens with safe, supportive experiences online,” Davis said in the statement. “These are complex issues, but we will continue working with parents, experts and regulators such as the state attorneys general to develop new tools, features and policies that meet the needs of teens and their families.”

    The remaining two suits, both naming ByteDance and TikTok as defendants, target TikTok’s alleged shortcomings in content moderation and also reiterate claims about TikTok’s alleged threat to US national security.

    The first suit alleges that TikTok has misled users by identifying its app on app stores as suitable for teens, despite the “abundant” presence of content showing profanity, substance use and nudity. The suit further alleges that TikTok’s Chinese sister app, Douyin, does not make such content available within China.

    “TikTok poses known risks to young teens that TikTok’s parent company itself finds inappropriate for Chinese users who are the same age,” the complaint said. “Yet TikTok pushes salacious and other mature content to all young U.S. users age 13 and up.”

    The second suit against ByteDance and TikTok accuses the companies of having made misleading statements about the reach of Chinese government officials and their purported inability to access TikTok user data. TikTok has migrated US user data to servers operated by the American tech giant Oracle and has established organizational controls intended to prevent unauthorized data access. But, the suit alleges, that does not mean the data is necessarily protected.

    “Neither TikTok’s data storage practices, nor its data security practices, negate the applicability of Chinese law to that data or to the individuals and entities who are subject to Chinese law and have access to that data, or the risk of access by the Chinese Government or Communist Party,” the complaint said.

    The suit also claims TikTok has misrepresented its approach to privacy and security by omitting the potential risks of Chinese government access from its privacy policies and in its statements to app store operators.

    TikTok and ByteDance didn’t immediately respond to a request for comment.

    In a statement announcing the lawsuits, Arkansas Gov. Sarah Huckabee Sanders said the suits reflect a “failed status quo.”

    “We have to hold Big Tech companies accountable for pushing addictive platforms on our kids and exposing them to a world of inappropriate, damaging content,” Sanders said. “These actions are a long time coming. We have watched over the past decade as one social media company after another has exploited our kids for profit and escaped government oversight.”


  • Federal appeals court tosses state antitrust suit seeking to break up Meta | CNN Business




    CNN
     — 

    A group of states that sued to break up Facebook-parent Meta in 2020 were years too late to file their challenge and failed to make a persuasive case that the company’s data policies harmed competition, a federal appeals court ruled Thursday in a sweeping victory for the tech giant.

    In siding with Meta, the decision by a three-judge panel of the US Court of Appeals for the DC Circuit upheld a lower-court decision tossing out the suit initially filed by New York and dozens of other states.

    The decision is a blow to regulators who have cited Meta as a prime example of the way tech giants have allegedly abused their dominance. And it casts a shadow over a parallel antitrust case against Meta that was brought by the Federal Trade Commission at around the same time.

    The states’ original complaint had sought to unwind Meta’s past acquisitions of Instagram and WhatsApp, accusing the company of a “buy-or-bury” approach that violated antitrust laws.

    In 2021, a federal judge dismissed the complaint, saying that the lawsuit came long after the acquisitions had been completed in 2012 and 2014. Thursday’s appellate decision agreed.

    “An injunction breaking up Facebook, ordering it to divest itself of Instagram and WhatsApp under court supervision, would have severe consequences, consequences that would not have existed if the States had timely brought their suit and prevailed,” wrote Senior Circuit Judge Raymond Randolph.

    In addition, Randolph wrote, state allegations claiming that Meta’s — then Facebook’s — policies placing restrictions on app developers were anticompetitive didn’t hold up.

    The policies in question, Randolph wrote, simply told app developers they could not use Facebook’s platform “to duplicate Facebook’s core products,” and did not rise to the level of an antitrust violation under federal law.

    Although the states argued that Facebook’s policies at the time — which have since been removed — discouraged innovation by the company’s rivals, the complaint failed to establish how widely the policies affected Facebook’s third-party developers.

    “The States thus have not adequately alleged that this policy substantially foreclosed Facebook’s competitors, giving us an additional reason to reject their exclusive dealing theory,” the court held.

    A spokesperson for New York Attorney General Letitia James didn’t immediately respond to a request for comment.

    In a statement, Meta said the states’ case reflected a mischaracterization of “the vibrant competitive ecosystem in which we operate.”

    “In affirming the dismissal of this case, the court noted that this enforcement action was ‘odd’ because we compete in an industry that is experiencing ‘rapid growth and innovation with no end in sight,’” Meta said. “Moving forward, Meta will defend itself vigorously against the FTC’s distortion of antitrust laws and attacks on an American success story that are contrary to the interests of people and businesses who value our services.”

    In spite of Thursday’s decision, Meta must still face a similar lawsuit by the FTC, which also seeks to break up the company in connection with its Instagram and WhatsApp acquisitions.

    Last year, the same federal judge who dismissed the state suit, James Boasberg, allowed the federal suit to proceed. Boasberg had tossed out the FTC suit as well in 2021, saying the agency had failed to make an initial showing that Meta holds a monopoly in personal social networking. But he permitted the FTC to re-file its complaint with changes.


  • Meta lowers the minimum age for its Quest headsets from 13 to 10 | CNN Business



    New York
    CNN
     — 

    Facebook-parent Meta plans to lower the minimum age for its virtual reality headsets from 13 years old to 10 years old, despite pressure from lawmakers not to market its VR services to younger users.

    Parents will be able to set up accounts for children as young as 10 years old on Meta’s Quest 2 and Quest 3 headsets starting later this year, the company said in a blog post Friday.

    Preteens will be required to get a parent’s approval to set up an account and download apps onto the device, according to the company. Meta said it will also use children’s ages to “provide age-appropriate experiences” such as recommending suitable apps.

    “There’s a vast array of engaging and educational apps, games, and more across our platform, the majority of which are rated for ages 10 and up,” Meta said in the post.

    The company’s push to lower the minimum age comes as Meta and other social media companies face growing scrutiny over their impact on young users, including their potential to harm teens’ mental health or lead them down harmful content rabbit holes.

    Parents and lawmakers have also specifically raised alarms about the use of VR — and the future version of the internet Meta calls the “metaverse” — by teens and children.

    Earlier this year, two Democratic senators urged Meta to suspend a plan to offer Horizon Worlds, the company’s flagship VR app, to teens between the ages of 13 and 17, arguing the technology could harm young users’ physical and mental health. The lawmakers, Massachusetts Sen. Ed Markey and Connecticut Sen. Richard Blumenthal, called Meta’s plan “unacceptable” in light of the company’s “record of failure to protect children and teens,” in a letter to CEO Mark Zuckerberg.

    But in April, Meta forged ahead with its plan to allow teens as young as 13 in the United States and Canada to use Horizon Worlds, prompting additional outcry from lawmakers and civil society groups.

    Parents told CNN last year about instances of discovering their children were viewing violent and disturbing content in VR and struggling to come up with ways to keep their kids safe.

    Meta is attempting to address some of parents’ concerns.

    In its Friday blog post, Meta said parents will be able to set time limits and enforce breaks for their preteens on the headsets. The accounts of users under 13 will be set to private and have their active status hidden on apps by default unless parents choose to change those settings. Meta also makes it possible to cast content from its VR headsets to a TV or phone screen, so parents can watch what their kids are seeing.

    Meta said it will not serve ads to users in this age group, and that parents can choose whether their child’s data can be used to improve the company’s services. Meta added on Friday that Horizon Worlds will remain restricted to users 13 and older in the United States and Canada (and 18 and older in Europe) when it allows preteens to create parent-managed accounts on the headsets later this year.

    Meta’s headset and Horizon Worlds represent Zuckerberg’s vision for a next-generation internet, where users can interact with each other in virtual spaces resembling real life. The company has so far struggled to attract a mainstream audience for these products.

    Update: This story has been updated to reflect Meta’s plan to continue restricting Horizon Worlds to users 13 and older.


  • Threads now has ‘tens of millions’ of daily users. But its honeymoon phase may be over | CNN Business



    New York
    CNN
     — 

    Two weeks after Meta launched its Twitter competitor Threads to an unprecedented number of user sign-ups, the frenzy around the app appears to have come back down to Earth.

    After surpassing 100 million user sign-ups in less than a week, user engagement on Threads has slowed. Threads daily active users fell from 49 million on July 7, two days after its launch, to 23.6 million users last Friday, according to a report published this week by web traffic analysis firm Similarweb. The app’s average usage time also fell from 21 minutes to 6 minutes over the same timeframe.

    The slowdown hints at the challenges ahead for Meta as it looks to not only draw users away from Twitter but build a service that reaches a far larger audience. Threads is already facing some of the common issues that often plague social media platforms, including user retention, spam and some early regulatory scrutiny around its approach to content moderation. It’s also not clear yet how much Meta’s investments in building Threads will actually amount to financial returns for the company.

    “I’m very optimistic about how the Threads community is coming together,” Meta CEO Mark Zuckerberg said in a post on the platform Monday. “Early growth was off the charts, but more importantly 10s of millions of people now come back daily … The focus for the rest of the year is improving the basics and retention.”

    Meta executives acknowledged in the early days after Threads’ launch that getting users to sign up for a buzzy new app is much easier than convincing them to continue engaging there long-term. That’s likely even more true for Threads, which launched as a relatively bare-bones app in an effort to capitalize on a moment of weakness at Twitter and also tapped into Instagram’s network to ease the sign-in process.

    Threads on Tuesday rolled out its first batch of updates to the iOS version of the app, including a translation button, a tab on users’ activity feed dedicated to showing who’s followed them and the option to subscribe and receive notifications from accounts a user doesn’t follow.

    Instagram head Adam Mosseri, who is overseeing the Threads launch, has also hinted at plans to add features such as a desktop version of the app, a feed of only accounts a user follows and an edit button. “We’re clearly way out over our skis on this,” Mosseri said in a Threads post the week of the app’s launch.

    In the meantime, Threads is grappling with a common social media issue — spam. Users have complained of replies to posts filling up with spammy links and offers of “giveaways” in exchange for new followers. And on Monday, Mosseri said in a Threads post that the platform was “going to have to get tighter on things like rate limits” because “spam attacks have picked up.”

    This “is going to mean more unintentionally limiting active people (false positives),” Mosseri warned. “If you get caught up [in] those protections let us know.”

    Meta declined to clarify whether Mosseri’s post refers to limits on users’ ability to post or read content, or to provide any additional details. But the comment did prompt some snark from Twitter owner Elon Musk, after backlash to Twitter’s own rate limits — restrictions on how many tweets users can read — helped propel Threads’ early growth.

    Meta shares have jumped more than 6% since the Threads launch, but some analysts who follow the company are skeptical that Threads will quickly contribute to the company’s bottom line, if at all.

    Threads could be a way for Meta to eke additional engagement time out of its massive existing user base. The app could also ultimately supplement Meta’s core advertising business, which could use a boost after facing challenges from a broad decline in the online ad market and changes to Apple’s app privacy practices.

    Meta executives have said they will likely incorporate advertising into the platform, once its user base has reached critical mass. But even if Threads continues to add users, “advertisers could be hesitant and possibly wait before allocating ad dollars to Threads because of their uncertainty about long-run user retention and engagement,” Morningstar senior equity analyst Ali Mogharabi said in a recent investor note.

    Like Twitter, Threads could also struggle to attract advertisers because the nature of a real-time news and public conversations app means the content is sometimes negative or controversial. Even before Musk took over Twitter and alienated advertisers, the platform represented a tiny piece of the ad sales market compared to Meta’s properties.

    Threads, however, likely has a leg up on Twitter because Meta is known as a company that provides clear value for advertisers, said Scott Kessler, global tech sector lead at research firm Third Bridge. If anything, he said, the risk is that some advertisers may think twice about spending on yet another Meta platform rather than diversifying their ad strategy.

    For now, analysts will be awaiting Meta executives’ commentary about Threads during its quarterly earnings call next week, including to see if they offer any hints about whether ads may be rolled out on the app ahead of the crucial holiday shopping season.

    “They launched this in July,” Kessler said. “That should give them enough time to build out sufficient tools for holiday shopping season advertising.”


  • The city without TikTok offers a window to America’s potential future | CNN Business



    Hong Kong
    CNN
     — 

    Across the United States, more than 150 million people now face the possibility of a new reality: life without TikTok.

    The wildly popular short-form video app has been at the center of an ongoing battle, with lawmakers calling for an outright ban, and the company portraying itself as a critical community space, educational platform and just plain fun.

    In Hong Kong, there’s no need to imagine that reality: TikTok discontinued its services there in 2020.

    Its abrupt departure was met with mixed reactions: disappointment from some users and content creators, but also relief from others who say life is better without the app’s infinite scroll.

    At the time of its exit, TikTok had a relatively modest presence in the city and was not ubiquitous like it is in the US today.

    But the varied reactions to its departure, and the way users have pivoted to other platforms or even real-life offline communities, offer Americans a glimpse into their potential TikTok-less future.

    TikTok announced its exit from Hong Kong in July 2020, a week after China imposed a controversial national security law in the city. The decision came as the app tried to distance itself from China and its Beijing-based parent company ByteDance, in the face of growing pressure in the US under the Trump administration.

    But it meant a jarring halt for creators like Shivani Dukhande, who had roughly 45,000 followers at the time the app left Hong Kong.

    Dukhande, 25, saw her account take off in early 2020 during the pandemic, with lifestyle content such as cooking and wellness videos flourishing on the platform.

    “There were a lot of new creators emerging,” she said. “We used to all collaborate together, we had a chat where we would all speak and share ideas and it created a community.”

    Momentum began to build. Companies started reaching out to Dukhande, paying for sponsored content and collaborating on ad campaigns. Brands began partnering with creators on trending “challenges” in a bid to attract young new consumers.

    “More people were joining and it was becoming such a fun thing to do,” she said. “Then, it just kind of went away one morning.”

    “If it continued, then I probably could have made enough to have quit my 9 to 5,” she said. “If I had the chance to grow, it could have been a potential career path.”

    This is one of the main arguments TikTok has made in recent weeks in the US. In March, as the company’s CEO prepared to testify before Congress, TikTok produced a docuseries highlighting American small business owners who rely on the platform for their livelihoods.

    The platform is used by nearly five million businesses in the US, TikTok said in March. And it’s set to surpass rivals: London-based research firm Omdia projected in November that TikTok’s advertising revenues will exceed the combined video ad revenues of Meta – home of Facebook and Instagram – and YouTube by 2027.

    This is partly because people are spending more time on TikTok. In the second quarter of 2022, TikTok users globally spent an average of 95 minutes per day on the app, according to data analytics firm SensorTower – nearly twice as much time as users spent on Facebook and Instagram.

    Shivani Dukhande had created videos about wellness, lifestyle, food and Hong Kong on her TikTok account.

    But in Hong Kong, other platforms have jumped in to fill the gap. Reels, Instagram’s short-form video product, which offers features similar to TikTok’s, such as an endless scroll, is growing quickly – and Dukhande has gotten on board.

    She had to rebuild her audience from scratch, and now has 12,500 Instagram followers, but she feels optimistic about its growth. Still, the loss of TikTok was a “missed opportunity,” she said, and the burgeoning community of creators has largely faded from sight.

    “The amount of jobs, the amount of content creation, the amount of marketing opportunities that were there with TikTok – we sort of missed out on that whole chunk of it.”

    But for some people, TikTok’s departure was a welcome change.

    Poppy Anderson, 16, had been using TikTok since its launch in 2018. And, like many others in her generation, she would spend hours “scrolling and scrolling” – even when feeling unfulfilled.

    “It was very easy to kind of find exactly what you like on there, because the [algorithm-run] For You page kept you there,” she said. “And it’s entertaining, but you don’t really get anything from it.”

    She described TikTok as often being a toxic environment that breeds narrow thinking, herd mentality, a misguided “cancel culture” and inappropriate online behavior such as critiquing the bodies of girls and women. Even people she knew in real life began acting differently after joining the app, which strained friendships, she said.

    Martin Poon, 15, also grew weary of TikTok, but it was hard to quit.

    “Everyone was using it, so I feel like there was a sense that you have to use it, you have to be on top of things, you have to know what’s going on. And I think that was stressful to me,” he said.

    Misinformation and misogyny ran rampant on TikTok, with accounts like those of Andrew Tate, the self-styled “alpha male” recently detained in Romania on allegations of human trafficking and rape, gaining popularity among boys at Poon’s school.

    “It’s just concerning how [these accounts] have so much impact on the youth, and it has so much grip on what we think and how it affects our behavior,” said Poon – though he added that misinformation is a major problem on all social media platforms, not just TikTok.

    Experts have long worried about the impact of TikTok on young people’s mental health, with one study claiming the app may surface potentially harmful content related to suicide and eating disorders to teenagers within minutes of them creating an account.

    In response to growing pressure, TikTok recently announced a one-hour daily screentime limit for users under 18, though users will be able to turn off this default setting.

    Anderson acknowledged some positives about TikTok, like open conversations about mental health. Still, she was glad when the app became inaccessible. Falling asleep became easier without the lure of TikTok. “I didn’t have the self-control to get off it on my own,” she said.

    For Poon and his friend Ava Chan, also 15, TikTok’s disappearance sparked new beginnings.

    When the app left in 2020, they were doing online classes, isolated from friends and bored at home. At the time, Instagram Reels and YouTube Shorts had yet to arrive in Hong Kong.

    “We had to figure out how to use our time other than being on TikTok,” said Chan. “For us, that was exploring our passions more.”

    For both, that meant advocating for the neurodiverse community. They launched a club at school that spreads education and awareness about neurodiversity, and they participate in volunteer activities with neurodiverse people.

    Both said it lent them a sense of purpose, and as time went on, they saw other benefits.

    Their friends, who would previously spend time filming and watching TikToks together, began having more face-to-face conversations. They noticed peers begin exercising outdoors more, which was made easier as Covid restrictions lifted. Their mental health improved.

    Of course, being teenagers, they’re not off social media entirely and use it as a tool to promote their club – but it’s far from the previous hours of scrolling. And while they occasionally wonder what’s happening on TikTok outside Hong Kong, the allure of it is lost when nobody else around them uses it either.

    “A lot of people, they’ve just kind of forgotten about it,” said Anderson. “People move to different platforms – or just move on.”


  • How Meta got caught in tensions between the US and EU | CNN Business




    CNN
     — 

    Facebook-parent Meta has perhaps become the most high-profile casualty of a long-running privacy dispute between Europe and the United States — but it may not be the last.

    Meta has been fined a record-breaking €1.2 billion ($1.3 billion) by European Union regulators for violating EU privacy laws by transferring the personal data of Facebook users to servers in the United States. Meta said Monday it would appeal the ruling, including the fine.

    The historic fine against Meta — and a potentially game-changing legal order that could force Meta to stop transferring EU users’ data to the United States — isn’t just a one-off decision limited to this one company or its individual business practices. It reflects bigger, unresolved tensions between Europe and the United States over data privacy, government surveillance and regulation of internet platforms.

    Those underlying and fundamental disagreements, which have simmered for years, have now come to a head, casting a significant shadow over thousands of businesses that depend on processing EU data in the United States.

    Beyond its huge economic implications, however, the fine has once again highlighted Europe’s deep mistrust of US surveillance powers — right as the US government is trying to build its own case against foreign-linked apps such as TikTok over similar surveillance concerns.

    The origins of Meta’s fine this week trace back to a 2020 ruling by Europe’s top court.

    In that decision, the European Court of Justice struck down a complex transatlantic framework Meta and many other companies had been relying on until then to legally move EU user data to US servers in the ordinary course of running their businesses.

    That framework, known as Privacy Shield, was itself the outgrowth of European complaints that US authorities didn’t do enough to protect the privacy of EU citizens. At the time Privacy Shield was created, the world was still reeling from disclosures made by National Security Agency leaker Edward Snowden. His disclosures highlighted the vast reach of US surveillance programs such as PRISM, which allowed the NSA to snoop on the electronic communications of foreign nationals as they used tech tools built by Google, Microsoft, and Yahoo, among others.

    PRISM relied on a basic fact of internet architecture: Much of the world’s online communications take place on US-based platforms that route their data through US servers, with few legal protections or recourse for either foreigners or Americans swept up in the tracking.

    A 2013 European Parliament report on the PRISM program captured the EU’s sense of alarm, noting the “very strong implications” for EU citizens.

    “PRISM seems to have allowed an unprecedented scale and depth in intelligence gathering,” the report said, “which goes beyond counter-terrorism and beyond espionage activities carried out by liberal regimes in the past. This may lead towards an illegal form of Total Information Awareness where data of millions of people are subject to collection and manipulation by the NSA.”

    Privacy Shield was a 2016 US-EU agreement designed to address those concerns by making US companies certifiably accountable for their handling of EU user data. For a time, it seemed as if Privacy Shield could be a lasting solution facilitating the growth of the internet and a globally connected society, one in which the free flow of data would not be impeded.

    But when the European Court of Justice invalidated that framework in 2020, it reiterated longstanding surveillance concerns and insisted that Privacy Shield still didn’t provide EU citizens’ personal information the same level of protection in the US that it enjoys in EU countries, a standard required under GDPR, the EU’s signature privacy law.

    The loss of Privacy Shield created enormous uncertainty for the more than 5,300 businesses that rely on the smooth transfer of data across borders. The US government has said transatlantic data flows support the more than $7 trillion of economic activity that occurs every year between the United States and the European Union. And the US Chamber of Commerce has estimated that transatlantic data transfers account for about half of all data transfers in both the US and the EU.

    The Biden administration has moved to implement a successor to Privacy Shield that contains some changes to US surveillance practices, and if it is fully implemented in time, it could prevent Meta and other companies from having to suspend transatlantic data transfers or some of their European operations.

    But it’s unclear whether those changes will be enough to be accepted by the EU, or whether the new data privacy framework could avoid its own court challenge.

    The possibility that US-EU data transfers may be seriously disrupted is refocusing scrutiny on US surveillance law just as the US government has been sounding its own alarms about Chinese government surveillance.

    US officials have warned that China could seek to use data collected from TikTok or other foreign-linked companies to benefit the country’s intelligence or propaganda campaigns, using the personal information to identify spying targets or to manipulate public opinion through targeted disinformation.

    But US moral authority on the issue risks being eroded by the EU criticism, a problem for the US government that may only be compounded by its own missteps.

    Just last week, a federal court described how the FBI improperly accessed a vast intelligence database meant for surveilling foreign nationals in a bid to gather information on US Capitol rioters and those who protested the 2020 killing of George Floyd.

    The improper access, which was not “reasonably likely” to retrieve foreign intelligence information or evidence of a crime, according to a Justice Department assessment described in the court’s opinion, has only inflamed domestic critics of US surveillance law, and could give ammunition to EU critics.

The intelligence database at issue was authorized under Section 702 of the Foreign Intelligence Surveillance Act — the same law used to justify the NSA’s PRISM program and one the EU has repeatedly cited as a danger to its citizens and a reason to suspend transatlantic data sharing.

    While the US distinguishes itself from China based on commitments to open and democratic governance, the EU’s concerns about the US are not much different in kind: They come from a place of deep mistrust of broad surveillance authority and suspicions about the potential misuse of user data.

    For years, civil liberties advocates have alleged that Section 702 enables warrantless spying on Americans on an enormous scale. Now, the FBI incident may only further validate EU fears; add to the existing concerns that led to Meta’s fine; contribute to the potential unraveling of the US-EU data relationship; and damage US credibility in its push to warn about the hypothetical risks of letting TikTok data flow to China.

If a new transatlantic data agreement is delayed or falls apart, Meta won’t be the only company stuck with the bill. Thousands of other companies may get caught in the middle, and the United States will have to hope nobody looks too closely at why, even as it tries to make its case against TikTok.


  • ‘It’s an especially bad time’: Tech layoffs are hitting ethics and safety teams | CNN Business



    New York
    CNN
     — 

    In the wake of the 2016 presidential election, as online platforms began facing greater scrutiny for their impacts on users, elections and society, many tech firms started investing in safeguards.

    Big Tech companies brought on employees focused on election safety, misinformation and online extremism. Some also formed ethical AI teams and invested in oversight groups. These teams helped guide new safety features and policies. But over the past few months, large tech companies have slashed tens of thousands of jobs, and some of those same teams are seeing staff reductions.

    Twitter eliminated teams focused on security, public policy and human rights issues when Elon Musk took over last year. More recently, Twitch, a livestreaming platform owned by Amazon, laid off some employees focused on responsible AI and other trust and safety work, according to former employees and public social media posts. Microsoft cut a key team focused on ethical AI product development. And Facebook-parent Meta suggested that it might cut staff working in non-technical roles as part of its latest round of layoffs.

    Meta, according to CEO Mark Zuckerberg, hired “many leading experts in areas outside engineering.” Now, he said, the company will aim to return “to a more optimal ratio of engineers to other roles,” as part of cuts set to take place in the coming months.

    The wave of cuts has raised questions among some inside and outside the industry about Silicon Valley’s commitment to providing extensive guardrails and user protections at a time when content moderation and misinformation remain challenging problems to solve. Some point to Musk’s draconian cuts at Twitter as a pivot point for the industry.

    “Twitter making the first move provided cover for them,” said Katie Paul, director of the online safety research group the Tech Transparency Project. (Twitter, which also cut much of its public relations team, did not respond to a request for comment.)

    To complicate matters, these cuts come as tech giants are rapidly rolling out transformative new technologies like artificial intelligence and virtual reality — both of which have sparked concerns about their potential impacts on users.

    “They’re in a super, super tight race to the top for AI and I think they probably don’t want teams slowing them down,” said Jevin West, associate professor in the Information School at the University of Washington. But “it’s an especially bad time to be getting rid of these teams when we’re on the cusp of some pretty transformative, kind of scary technologies.”

    “If you had the ability to go back and place these teams at the advent of social media, we’d probably be a little bit better off,” West said. “We’re at a similar moment right now with generative AI and these chatbots.”

    When Musk laid off thousands of Twitter employees following his takeover last fall, it included staffers focused on everything from security and site reliability to public policy and human rights issues. Since then, former employees, including ex-head of site integrity Yoel Roth — not to mention users and outside experts — have expressed concerns that Twitter’s cuts could undermine its ability to handle content moderation.

    Months after Musk’s initial moves, some former employees at Twitch, another popular social platform, are now worried about the impacts recent layoffs there could have on its ability to combat hate speech and harassment and to address emerging concerns from AI.

One former Twitch employee who was affected by the layoffs and had previously worked on safety issues said the company had recently boosted its outsourcing capacity for addressing reports of violative content.

    “With that outsourcing, I feel like they had this comfort level that they could cut some of the trust and safety team, but Twitch is very unique,” the former employee said. “It is truly live streaming, there is no post-production on uploads, so there is a ton of community engagement that needs to happen in real time.”

    Such outsourced teams, as well as automated technology that helps platforms enforce their rules, also aren’t as useful for proactive thinking about what a company’s safety policies should be.

    “You’re never going to stop having to be reactive to things, but we had started to really plan, move away from the reactive and really be much more proactive, and changing our policies out, making sure that they read better to our community,” the employee told CNN, citing efforts like the launch of Twitch’s online safety center and its Safety Advisory Council.

    Another former Twitch employee, who like the first spoke on condition of anonymity for fear of putting their severance at risk, told CNN that cutting back on responsible AI work, despite the fact that it wasn’t a direct revenue driver, could be bad for business in the long run.

    “Problems are going to come up, especially now that AI is becoming part of the mainstream conversation,” they said. “Safety, security and ethical issues are going to become more prevalent, so this is actually high time that companies should invest.”

    Twitch declined to comment for this story beyond its blog post announcing layoffs. In that post, Twitch noted that users rely on the company to “give you the tools you need to build your communities, stream your passions safely, and make money doing what you love” and that “we take this responsibility incredibly seriously.”

    Microsoft also raised some alarms earlier this month when it reportedly cut a key team focused on ethical AI product development as part of its mass layoffs. Former employees of the Microsoft team told The Verge that the Ethics and Society AI team was responsible for helping to translate the company’s responsible AI principles for employees developing products.

    In a statement to CNN, Microsoft said the team “played a key role” in developing its responsible AI policies and practices, adding that its efforts have been ongoing since 2017. The company stressed that even with the cuts, “we have hundreds of people working on these issues across the company, including net new, dedicated responsible AI teams that have since been established and grown significantly during this time.”

    Meta, maybe more than any other company, embodied the post-2016 shift toward greater safety measures and more thoughtful policies. It invested heavily in content moderation, public policy and an oversight board to weigh in on tricky content issues to address rising concerns about its platform.

    But Zuckerberg’s recent announcement that Meta will undergo a second round of layoffs is raising questions about the fate of some of that work. Zuckerberg hinted that non-technical roles would take a hit and said non-engineering experts help “build better products, but with many new teams it takes intentional focus to make sure our company remains primarily technologists.”

    Many of the cuts have yet to take place, meaning their impact, if any, may not be felt for months. And Zuckerberg said in his blog post announcing the layoffs that Meta “will make sure we continue to meet all our critical and legal obligations as we find ways to operate more efficiently.”

    Still, “if it’s claiming that they’re going to focus on technology, it would be great if they would be more transparent about what teams they are letting go of,” Paul said. “I suspect that there’s a lack of transparency, because it’s teams that deal with safety and security.”

    Meta declined to comment for this story or answer questions about the details of its cuts beyond pointing CNN to Zuckerberg’s blog post.

    Paul said Meta’s emphasis on technology won’t necessarily solve its ongoing issues. Research from the Tech Transparency Project last year found that Facebook’s technology created dozens of pages for terrorist groups like ISIS and Al Qaeda. According to the organization’s report, when a user listed a terrorist group on their profile or “checked in” to a terrorist group, a page for the group was automatically generated, although Facebook says it bans content from designated terrorist groups.

    “The technology that’s supposed to be removing this content is actually creating it,” Paul said.

    At the time the Tech Transparency Project report was published in September, Meta said in a comment that, “When these kinds of shell pages are auto-generated there is no owner or admin, and limited activity. As we said at the end of last year, we addressed an issue that auto-generated shell pages and we’re continuing to review.”

    In some cases, tech firms may feel emboldened to rethink investments in these teams by a lack of new laws. In the United States, lawmakers have imposed few new regulations, despite what West described as “a lot of political theater” in repeatedly calling out companies’ safety failures.

    Tech leaders may also be grappling with the fact that even as they built up their trust and safety teams in recent years, their reputation problems haven’t really abated.

    “All they keep getting is criticized,” said Katie Harbath, former director of public policy at Facebook who now runs tech consulting firm Anchor Change. “I’m not saying they should get a pat on the back … but there comes a point in time where I think Mark [Zuckerberg] and other CEOs are like, is this worth the investment?”

    While tech companies must balance their growth with the current economic conditions, Harbath said, “sometimes technologists think that they know the right things to do, they want to disrupt things, and aren’t always as open to hearing from outside voices who aren’t technologists.”

    “You need that right balance to make sure you’re not stifling innovation, but making sure that you’re aware of the implications of what it is that you’re building,” she said. “We won’t know until we see how things continue to operate moving forward, but my hope is that they at least continue to think about that.”


  • Meta’s Threads app rolls out first big batch of updates | CNN Business



    New York
    CNN
     — 

    Meta’s Twitter rival app Threads on Tuesday rolled out its first major batch of updates since its launch two weeks ago as it works to maintain momentum.

    The new features include a translation button and a tab on users’ activity feed dedicated to showing who’s followed them, according to a post from Cameron Roth, a software engineer working on Threads.

    All new features should be available to iOS Threads users by the end of Tuesday, Roth said.

    Threads users have been clamoring for updates since its launch. The new app attracted over 100 million user sign-ups in less than a week, but it still lacks many of the features popular on Twitter and other platforms, including direct messaging and a robust search function.

    User engagement on Threads has dipped since its first week, according to web traffic analysis firm Similarweb. And Meta executives have teased plans to improve the app in hopes of getting users to keep coming back.

    “Early growth was off the charts, but more importantly 10s of millions of people now come back daily … The focus for the rest of the year is improving the basics and retention,” Meta CEO Mark Zuckerberg said in a Threads post Monday.

    Tuesday’s updates also include the ability to subscribe and receive notifications from accounts a user doesn’t follow and a “+” button that lets users follow new accounts from the replies on a post, as well as bug fixes and other improvements.

    Instagram head Adam Mosseri, who is overseeing Threads, has also hinted at plans to introduce a desktop version of the app as well as a feed of only accounts a user follows and an edit button.


  • Arkansas governor signs sweeping bill imposing a minimum age limit for social media usage | CNN Business



    Washington
    CNN
     — 

    Arkansas Gov. Sarah Huckabee Sanders has signed a sweeping bill imposing a minimum age limit for social media usage, in the latest example of states taking more aggressive steps intended to protect teens online.

But even as Sanders signed the bill into law on Wednesday afternoon, the legislation appeared to contain vast loopholes and exemptions benefiting companies that lobbied on the bill, raising questions about how much of the industry it truly covers.

The legislation, known as the Social Media Safety Act, takes effect in September and is aimed at giving parents more control over their kids’ social media usage, according to lawmakers. It defines social media companies as any online forum that lets users create public profiles and interact with each other through digital content.

    It requires companies that operate those services to verify the ages of all new users and, if the users are under 18 years old, to obtain a parent’s consent before allowing them to create an account. To perform the age checks, the law relies on third-party companies to verify users’ personal information, such as a driver’s license or photo ID.

    “While social media can be a great tool and a wonderful resource, it can have a massive negative impact on our kids,” Sanders said at a press conference before signing the bill.

Utah finalized a similar law last month, raising concerns among some users and advocacy groups that the legislation could make user data less secure and internet access less private, and could infringe upon younger users’ basic rights.

    The push by states to legislate on social media comes after years of mounting scrutiny of the industry and claims that it has harmed users’ well-being and mental health, particularly among teens.

    Despite its seemingly universal scope, however, the new law, also known as SB396, includes numerous carveouts for certain types of digital services and, in some cases, individual companies. And although its sponsors have said the law is specifically meant to apply to certain platforms, including TikTok, parts of the legislative language appear to result in the exact opposite effect.

    In the final days of negotiation over the bill, Arkansas lawmakers approved an amendment that created several categorical exemptions from the age verification requirements. Media companies that “exclusively” offer subscription content; social media platforms that permit users to “generate short video clips of dancing, voice overs, or other acts of entertainment”; and companies that “exclusively offer” video gaming-focused social networking features were exempted.

    Another amendment carved out companies that sell cloud storage services, business cybersecurity services or educational technology and that simultaneously derive less than 25% of their total revenue from running a social media platform.

    Sen. Tyler Dees, a lead co-sponsor of the legislation, explained in remarks on the Arkansas senate floor on April 6 that the exemptions and tweaks to the bill, some of which he said were made in consultation with Apple, Meta and Google, were intended to shield non-social media services from the bill’s age requirements and to focus attention on new accounts created by children, not existing adult accounts.

    “There’s other services that Google offers … like cloud storage, et cetera,” Dees said. “So that’s really the intent of carving out — like LinkedIn, that is a social – I’m sorry, that is a business networking site, and so that’s the intent of those bills.”

    Microsoft-owned LinkedIn is apparently exempt from SB396 under a provision that carves out companies that provide “career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.”

    Other lawmakers have questioned whether the legislation — which has now become law — exempts a giant of the social media industry: YouTube, whose auto-play features and algorithmic recommendation engine have been accused of promoting extremism and radicalizing viewers.

    The confusion over YouTube appears to stem from the carveout for businesses that offer cloud storage and that make less than 25% of their revenue from social media.

    What is unclear is whether YouTube is subject to SB396 because it is a distinct company within Google whose revenue comes almost entirely from operating a social media platform, or whether it is not covered because YouTube is a part of Google and Google is exempt because it derives only a small share of its revenues from YouTube.

    In response to questions by CNN, Dees said SB396 targets platforms including Facebook, Instagram and TikTok, but omitted any mention of Google and declined to answer whether YouTube specifically would be covered by the law.

    “The purpose of this bill was to empower parents and protect kids from social media platforms, like Facebook, Instagram, TikTok and Snapchat,” Dees said in a statement. “We worked with stakeholders to ensure that email, text messaging, video streaming, and networking websites were not covered by the bill.”

    In remarks at Wednesday’s bill signing, Sanders told reporters that Google and Amazon are exempted from the law, implying that YouTube will not be subject to the age verification requirements imposed on other major social media sites.

Meanwhile, Dees’ statement appeared to contradict the language in SB396 that purports to exempt any company that “allows a user to generate short video clips of dancing, voice overs, or other acts of entertainment in which the primary purpose is not educational or informative” — content that can be commonly found on TikTok, Snapchat and the other social media platforms Dees named.

According to a Meta spokesperson, “We want teens to be safe online. We’ve developed more than 30 tools to support teens and families, including tools that let parents and teens work together to limit the amount of time teens spend on Instagram, and age-verification technology that helps teens have age-appropriate experiences.”

    Meta “automatically set teens’ accounts to private when they join Instagram, we’ve further restricted the options advertisers have to reach teens, as well as the information we use to show ads to teens… and we don’t allow content that promotes suicide, self-harm or eating disorders,” according to the spokesperson, who added: “We’ll continue to work closely with experts, policymakers and parents on these important issues.”

    Spokespeople for Snapchat, TikTok and YouTube didn’t immediately respond to a request for comment.


  • How Elon Musk upended Twitter and his own reputation in 6 months as CEO | CNN Business



    New York
    CNN
     — 

    When Elon Musk first agreed to buy Twitter, he promised to make the company “better than ever,” with greater transparency, fewer bots, a stronger business and more of what he called “free speech.”

    But six months after Musk took control of Twitter, the future of the company and the platform have never been less certain.

After acquiring the social media platform for $44 billion in late October, Musk reportedly now values Twitter at around $20 billion — and some who track the company believe even that estimate is likely high. Musk repeatedly warned that Twitter could be at risk of filing for bankruptcy, only to later claim he had brought it back from the brink by slashing costs, both by laying off 80% of Twitter’s staff and, according to multiple lawsuits, allegedly by failing to pay some of its bills. But it’s not clear just how and when Musk might return Twitter to growth.

    He has antagonized journalists and news outlets that have long been central to the platform’s success, overseen policy changes that threaten to make Twitter less safe or reliable, made the platform less transparent to researchers and scared away many top advertisers. Musk’s primary plan to grow Twitter’s business through an overhauled subscription strategy has resulted in much chaos but only a limited number of actual subscriptions.

    In the process, Musk has also upended his own reputation. Once known by much of the public primarily for his innovative efforts to launch rockets and build electric cars, Musk has instead spent much of the past six months in the headlines for controversial policy and feature changes at Twitter, draconian cuts to staff resulting in frequent service disruptions, and briefly banning several prominent journalists. He’s also tweeted a long list of eccentric remarks from his personal Twitter account, including sharing conspiracy theories and publicly mocking a Twitter worker with a disability who was unsure whether he’d been laid off.

    “If he had done nothing except cut costs, then Twitter would have been okay,” said Leslie Miley, a former Twitter engineering manager who started its product safety and security team and left the company in 2015. He has since held roles at Google, Microsoft and the Obama Foundation. “If you had just let everyone go, treated them with respect, and just let the service run for two years, you probably would be okay.”

    Now, though, Miley said he expects Twitter will “eventually go down the road of MySpace.”

    “It’s going to take a little bit longer … [but] I think Twitter is on its way to irrelevance,” he said, “there is no strategy to acquire or retain users because you are offering them no value.”

    Twitter, which has slashed much of its public relations team under Musk, responded to CNN’s request for comment on this story with the auto-reply from its press email that it has used for weeks: a poop emoji.

    For years, what differentiated Twitter from other social platforms was that it served as a central hub for real-time news. It was a place for ordinary people to read and even engage in conversation with celebrities, business leaders and other newsmakers.

    Many of Musk’s recent moves at the platform threaten to undermine that purpose, not to mention the larger information ecosystem — and it’s not clear the efforts will improve the company’s business.

    “Twitter has never been perfect, it had a lot of problems but it was critical global infrastructure for information that Elon Musk is now systematically, frankly, vandalizing,” former Twitter chair of global news Vivian Schiller told CNN in a recent interview.

    Most recently, Musk removed the legacy blue check marks that verified the identities of prominent users, saying he would instead make the checks available only to those who pay $8 per month for Twitter Blue in the interest of “treating everyone equally.”

    “There shouldn’t be a different standard for celebrities,” Musk said in a tweet earlier this month.

    But the move may make it easier for bad actors to impersonate high-profile people and harder for users to trust the veracity and authenticity of information on the platform. What’s more, Musk then decided to sponsor the blue checks for certain celebrities, including Stephen King and LeBron James, in effect creating exactly the “different standard” for famous users he’d professed to want to avoid.

    Now, Musk says content from verified users will be promoted on the platform, potentially making it harder for users who can’t afford a subscription, or simply don’t want to pay Musk for one, to find an audience on the platform. And the new paid verification system won’t necessarily rid the platform of bots, an issue Musk spent months railing on while trying to get out of the acquisition deal last year, according to Filippo Menczer, a computer science professor at Indiana University and director of the Observatory on Social Media.

    “You can create fake accounts and pay $8 [for a blue check] … so if you are a well-funded bad actor, you can do more damage now than you could before,” Menczer said. “And if you are a reliable source and you’re not well-funded, your information will not be as visible as before.”

    Menczer added that the result could be “less free speech, because you’re drowning out the speech of regular people [with speech] by people who either have the technical skills or the money to manipulate the system.”

    Twitter’s move to charge users of its API will also make it harder for researchers to identify and warn the platform about inauthentic activity, Menczer said, and could disrupt other positive uses of the platform that contributed to its reputation as a news hub. Weather agencies, for example, have warned that the change could make it harder for them to release automated emergency weather alerts.

    Any social network lives or dies based on its ability to retain and attract users — and there’s real reason for Twitter to be worried.

    A number of users, celebrities and media organizations have said they plan to leave Twitter over Musk’s recent policy changes — which often appear to be made on a whim without any real principles.

NPR, BBC and CBC left Twitter after opposing a controversial new “government-funded media” label that they say was misleading. CenterLink, a global nonprofit that represents hundreds of centers providing services to LGBTQ communities, said it would no longer use Twitter after the platform removed protections for transgender users from its hateful conduct policy. And some high-profile users, such as anti-bullying activist Monica Lewinsky, have threatened to exit the platform over the blue check change, now that they may be at greater risk of impersonation on Twitter.

    There remain few alternatives that offer similar features and scale to Twitter, but a growing list of upstart competitors has emerged since Musk’s takeover. At least one large rival, Facebook-parent Meta, has also confirmed it’s working on a service that sounds a lot like Twitter.

    “Almost everything he said he was going to do, he has screwed up in any number of ways,” Miley said. “If it weren’t so damaging to people and organizations who have depended upon the platform, it would be funny. But it’s not actually funny because it has degraded people’s ability to communicate effectively.”

All of the chaos has made it difficult to convince advertisers, whose spending previously made up 90% of Twitter’s revenue, to rejoin the platform after many halted spending in the wake of Musk’s takeover over concerns about increased hate speech, as well as confusion about layoffs and the platform’s future direction.

    Just 43% of Twitter’s top 1,000 advertisers as of September — the month before Musk’s takeover — were still advertising on the platform in April, according to data from market intelligence firm Sensor Tower.

    Musk, for his part, has said that Twitter’s usage has increased since his takeover and that advertisers are steadily returning to the platform. But because he took the company private, he is not obligated to make financial disclosures and followers of the company are left to take him at his word.

    Musk built his reputation by overhauling Tesla, helping to launch a widespread shift away from gas cars to electric vehicles and growing SpaceX into a space transport juggernaut. Now, he appears to be attempting a similar overhaul at Twitter — upending the tried-and-true digital advertising business in favor of a subscription model that no other social media platform has yet been able to find large scale success with.

    “I give him some credit for trying a different business model, I think the business model based on user data is quite abusive,” said Luigi Zingales, professor at the University of Chicago Booth School of Business, although Musk has also attempted to improve Twitter’s targeted advertising business.

Some other tech companies have followed his lead. Facebook-parent Meta copied Twitter by launching a paid verification option. And Meta, along with a number of other tech companies, has undergone multiple rounds of cost-cutting since last fall. Twitter appears to have provided cover for some of these moves, while other firms’ somewhat more principled approaches have made them look better by comparison.

    For Twitter and Musk, the stakes for success are high: Musk’s relationships with banks and investors for future endeavors could hinge in part on his performance at the social media firm, which he took on billions of dollars in debt to purchase. Banks “will sit down and say, what kind of cred does this guy have? Will we find him making these shoot-from-the-lip sort of dictates that, in fact, throw our money down a hole?” said Columbia Business School management professor William Klepper.

Any change to Musk’s reputation from his time leading Twitter could also ultimately have ripple effects for his broader business empire, causing potential investors, recruits and customers to think twice about betting on one of his companies. Tesla (TSLA) shareholders recently complained to the company’s board that Musk appears “overcommitted.”

    “His reputation has been diminished significantly with Twitter … and once you lose it, it’s very difficult to recover,” Klepper said. “It would be a good opportunity for [Musk] to rethink whether or not … he’s really leadership material.”

    Musk in December pledged to step down as Twitter CEO after millions of users voted in favor of his exit in a poll he posted to the platform. But for now, he remains “Chief Twit.”


  • Meta sells Giphy at a significant loss after UK breakup order | CNN Business




    CNN
     — 

    Stock-photo website Shutterstock on Tuesday said it will acquire Giphy and its online repository of animated images for $53 million, after UK antitrust regulators ordered Meta to divest the company last year.

    The value of the deal is sharply lower than the $315 million Meta was widely reported to have paid to acquire Giphy in 2020.

    UK officials had alleged that Meta’s acquisition would reduce competition in advertising and social media, and an appeals court upheld that decision last year, prompting Meta to say it would sell Giphy to comply with the UK’s breakup order.

    The deal will add GIFs and reaction stickers to Shutterstock’s digital content library while expanding Shutterstock’s access to Giphy’s 1.7 billion users, the company said in Tuesday’s announcement.

    The transaction is expected to close in June.


  • Elon Musk rebrands Twitter as X | CNN Business



    New York
    CNN
     — 

    In a radical rebranding, Twitter owner Elon Musk has replaced Twitter’s iconic bird logo with X.

    Musk made the shock announcement of his plans early Sunday. By Monday morning US time, he tweeted that X.com now points to Twitter.com.

    “Interim X logo goes live later today,” he wrote, shortly before sharing a photo of Twitter’s headquarters lit up by a giant new X.

    The Twitter website now features the same logo, while the familiar blue bird is gone.

    Previously, Musk said he was bidding “adieu to the twitter brand and, gradually, all the birds.”

    Twitter (TWTR), founded in 2006, has used its vivid, globally recognized blue bird emblem for more than a decade.

    The renaming could be seen as something of a brand overhaul “Hail Mary” for the company: Musk in recent months has repeatedly warned that Twitter, facing steep losses in ad revenue, was on the edge of bankruptcy.

    Increasing the pressure, earlier this month rival social media platform Threads launched from Facebook (FB) parent Meta. It surpassed 100 million user sign-ups in its first week.

    Twitter had 238 million active users prior to being taken private by Musk in October 2022.

    One of the world’s richest men, Musk was once best known for his innovative efforts through companies SpaceX and Tesla (TSLA) to launch rockets and build electric cars.

    Now, many of the headlines he makes are for his eccentric remarks on his personal Twitter account – often sharing conspiracy theories and getting into public spats on the social media platform.

    Musk overhauled the site after acquiring it for $44 billion in late October, then followed with mass layoffs, disputes over millions of dollars allegedly owed in severance and Musk’s note to employees that remaining at the company would mean “working long hours at high intensity.” He wrote: “Only exceptional performance will constitute a passing grade.”

    The upheaval prompted organizations, including the Anti-Defamation League, Free Press and GLAAD, to pressure brands to rethink advertising on Twitter.

    The groups pointed to the mass layoffs as a key factor in their thinking, citing fears that Musk’s cuts would make Twitter’s election-integrity policies effectively unenforceable, even if they technically remain active.

    Musk also began overseeing controversial policy changes which led to frequent service disruptions at Twitter and upended his own reputation in the process.

    In June, Musk named Linda Yaccarino, a former NBCUniversal marketing executive, CEO of the company.

    She commented on the name change on Twitter Sunday afternoon: “It’s an exceptionally rare thing – in life or in business – that you get a second chance to make another big impression. Twitter made one massive impression and changed the way we communicate. Now, X will go further, transforming the global town square.”

    As the new venture begins, it faces challenges. Musk recently disclosed that the platform still has a negative cash flow due to a 50% drop in advertising revenue and heavy debt loads.

    Criticizing the exit, or pause, of such Twitter advertisers as General Mills (GIS), Macy’s (M) and some car companies that compete with Tesla, Musk has called himself a “free speech absolutist” and said he wanted to buy Twitter to bolster users’ ability to speak freely on the platform.

    Musk explained his approach to free speech by saying: “Is someone you don’t like allowed to say something you don’t like? And if that is the case, then we have free speech.”

    He added that Twitter would “be very reluctant to delete things” and that the platform would aim to allow all legal speech. Many users have worried that could mean a rise in hate speech.

    Meanwhile, the initial frenzy around rival Threads appears to have come back to earth, especially as the app has been plagued with spam and lacks several user-friendly features that Twitter, now X, offers.

    Adam Mosseri, who is overseeing the Threads launch for Meta, has hinted at plans to add features such as a desktop version of the app, a feed of only accounts a user follows and an edit button.

    Its ability to draw advertising support is, as yet, unproven.


  • Meta could become even more dominant in social media with Threads | CNN Business



    Washington
    CNN
     — 

    In less than 48 hours, Meta’s Twitter rival Threads has surpassed 70 million sign-ups, upended the social media landscape and appears to have rattled Twitter enough that it is now threatening legal action against Meta.

    But even as users signed up for Threads in droves, with some clearly eager to flee the chaos of Elon Musk’s Twitter, the sudden success of Meta’s app could raise a new set of concerns.

    Meta has long been criticized for its market dominance, and for allegedly trying to choke off competition by copying and killing rival applications. Now, some competition experts and even some Threads users worry that if the new app’s traction continues, it may simply lead to the accumulation of even more power and dominance for Meta and its CEO Mark Zuckerberg.

    “The prospect of total monopoly by Meta, yikes,” wrote one user. “It’s a real problem for society when a few dozen people and companies own every single thing so that no alternative paradigms can exist that they don’t co-opt from the cradle,” replied another.

    Twitter had always been much smaller than Meta’s platforms, but it had an outsized influence in tech, media and politics. As Twitter faltered under Musk, though, a cottage industry emerged of smaller apps trying to capture some of its magic. Now more than any of them, Meta seems best positioned to claim the crown.

    Threads’ blockbuster launch this week highlights the uncomfortable reality of the modern digital economy: To potentially beat some of the biggest players in the industry, you might have to be a giant yourself.

    The overnight success of Threads is a testament both to the dissatisfaction with Musk’s ownership of Twitter and to the unique power and reach of one of Meta’s most important properties: Instagram.

    Instagram has more than two billion users, far more than the 238 million users Twitter reported having in the months before Musk took over. When new users sign up for Threads, which they do using an Instagram account, the app prompts them to follow all of their existing Instagram contacts with a single tap. The step is optional, but accepting it is easy and declining takes a conscious decision.

    By promoting Threads through Instagram, and by sharing Instagram user data with Threads to let people instantly recreate their social networks, Meta has significantly greased the onboarding process. That frictionless experience has allowed Threads to leapfrog what’s known in the industry as the “cold start” problem, in which a new platform struggles to gain new users because there are no other users there to attract them.

    Thanks to the Instagram integration, “that biggest problem, the chicken-egg problem, has been solved from the jump,” Reddit co-founder and venture investor Alexis Ohanian said in a video Thursday (posted, naturally, on Threads).

    That Threads appeared to clear that hurdle easily, Ohanian said, makes him “bullish” on the new app.

    But the same integration that made it possible to sign up so many users so quickly may raise competition concerns, particularly in Europe, where new antitrust rules for digital platforms are set to take effect in a matter of months.

    “From a competition perspective this can be problematic because Meta can use it to leverage its market power and raise barriers to entry, as other rivals would not have the customer base Meta has via Instagram,” said Agustin Reyna, director of legal and economic affairs at the Brussels-based consumer advocacy organization BEUC.

    Under the EU’s Digital Markets Act (DMA), “digital gatekeepers” — a term that’s expected to cover Meta and/or its subsidiaries — will be prohibited from combining a user’s data from multiple platforms without consent, Reyna said. Another restriction forbids requiring users to sign up for one platform as a condition of using another.

    Instagram head Adam Mosseri appeared to acknowledge those issues this week in an interview with The Verge. Threads won’t be launching in the EU for now, he said, because of “complexities with complying with some of the laws coming into effect next year” — a statement The Verge suggested was a reference to the DMA.

    The DMA was passed specifically to deal with the antitrust concerns raised by large tech platforms. That Threads apparently cannot (yet) comply with rules designed to protect competition underscores uncertainty about the app’s potential competitive impact.

    Meta’s approach to Threads could also revive longstanding criticisms about the company’s alleged practice of copying and killing rivals, particularly as Twitter has warned Meta it may sue over claims of trade secret theft (an allegation Meta denies).

    The issue isn’t limited to the realm of social media. As the world races to develop artificial intelligence, Threads represents a huge new opportunity for Meta to gather training data for its own AI technology, in a way that could help it catch up to industry leaders such as OpenAI and Google. That could complicate any attempt at a comprehensive analysis of what Threads means for competition in tech.

    Part of what makes the debate so complicated is Threads’ seemingly very real threat to Twitter.

    If Threads puts pressure on Twitter to improve its service, that is a form of competition between apps, said Geoffrey Manne, founder of the Portland, Oregon-based International Center for Law and Economics.

    But, he added, if it leads to a concentration of power in the social media industry more broadly, it could mean a reduction in competition overall. It all depends on how you define the market.

    “I’m inclined to say it does both simultaneously, and the ultimate consequences aren’t so clear,” Manne said.

    Rather than viewing it through the lens of a social media market, one helpful way to look at the issue is from the perspective of the advertising market, he said. It’s possible that once Threads introduces advertising — which Zuckerberg has said won’t happen until the app has increased to significant scale — Threads simply reinforces Meta’s advertising market power, Manne said. That could lead to further antitrust scrutiny for Meta even if the question about competition in social media is ambiguous.

    Jeff Blattner, a former DOJ antitrust official, said it can only benefit consumers to have Threads as a rival to Twitter.

    “Two platforms run by maniac billionaires are better than one,” he wrote on Threads — though if Threads is so successful as to effectively knock out Twitter altogether, then in some ways the original question about Meta’s dominance will still stand.

    Threads has one thing going for it that may nip any competition concerns in the bud: a commitment to integrate with the same open protocols used by other distributed social media alternatives, such as Mastodon.

    That would give users the option to migrate their accounts, along with all their follower data intact, to a rival like Mastodon that isn’t controlled by Meta.

    While that interoperability isn’t available yet, Mosseri has repeatedly highlighted it as a priority on his to-do list.

    When and if it happens, that could be a significant step. What may appear now as an audience grab by Meta could someday wind up being how millions of people were onboarded to a massive, decentralized social networking infrastructure that is not controlled by any single company, individual or organization.

    “This is why we think interoperability requirements are so important,” said Charlotte Slaiman, a competition expert at the Washington-based consumer group Public Knowledge. If users could port their entire social graph from one rival to another whenever they wanted, she said, “we could have more fair competition based on the quality of the product, not just incumbency advantage.”


  • Twitter’s rebrand is the next stage in Elon Musk’s vision for the company. But does anyone want it? | CNN Business



    New York
    CNN
     — 

    Elon Musk’s move over the weekend to rebrand Twitter and replace its iconic bird logo with an X is just the latest step in the billionaire’s effort to remake his longtime favorite platform in his image.

    When Musk bought Twitter late last year, he laid out a vision for an “everything” app called X, where users could communicate, shop, consume entertainment and more. Last June — prior to his takeover — Musk told Twitter employees that the platform should be more like China’s WeChat, where he said users “basically live on” the app because “it’s so usable and helpful to daily life.”

    The vision for the rebrand may go all the way back to Musk’s creation of the original X.com in 1999, which he hoped would be an all-in-one financial platform and which eventually became PayPal.

    Despite Musk’s longstanding ambitions — and the heightened stakes since he shelled out $44 billion to purchase the social network — ditching Twitter’s branding in service of a future super app is a significant risk.

    Twitter still has a long way to go if Musk wants to build out the kind of services WeChat is known for — everything from ordering groceries and booking yoga classes to paying bills and chatting with friends. And that’s not to mention the financial and competitive challenges the company faces merely existing in its current form, let alone launching a massive expansion. It’s also not clear how much demand there is for such a super app outside of China, given that efforts by other platforms to simply sell users on added shopping features have been slow to take off.

    “While Musk’s vision is to turn ‘X’ into an ‘everything app,’ this takes time, money, and people — three things that the company no longer has,” Mike Proulx, research director and vice president at Forrester, said in an investor note. By ditching Twitter’s name, Proulx added, Musk “will have singlehandedly wiped out over fifteen years of a brand name that has secured its place in our cultural lexicon,” leaving him to start fresh at a precarious time for the company.

    The X branding has already started taking over Twitter.

    Musk — who bought Twitter with a company called X Corp. — tweeted on Sunday that X.com now redirects to Twitter. (Musk reportedly bought the X.com domain back from PayPal in 2017.)

    On Sunday night, the new stylized X logo was projected onto the company’s headquarters. And by Monday, the bird logo had been replaced by an X on Twitter’s website. Musk even told followers that tweets should instead be called “x’s.”

    On Sunday, CEO Linda Yaccarino seemed to confirm Musk’s vision for the company. “X is the future state of unlimited interactivity — centered in audio, video, messaging, payments/banking — creating a global marketplace for ideas, goods, services, and opportunities,” Yaccarino said in a tweet.

    Walter Isaacson, the legendary tech journalist who has been shadowing Musk to write his biography, tweeted on Sunday that Musk told him even before the Twitter acquisition that he wanted to use the social platform to fulfill his original, decades-old vision for X.com. “I am very excited about finally implementing X.com as it should have been done, using Twitter as an accelerant!” Musk texted Isaacson at 3:30 a.m. one morning last October, just ahead of his takeover, according to the writer.

    On Monday, Musk explained the move in a tweet saying, “The Twitter name made sense when it was just 140 character messages going back and forth – like birds tweeting – but now you can post almost anything, including several hours of video.”

    “In the months to come, we will add comprehensive communications and the ability to conduct your entire financial world,” Musk said. “The Twitter name does not make sense in that context.”

    (The rebrand also seems to be a continuation of a sort of obsession with the letter “X,” which also features in the name of one of Tesla’s cars, the Model X; the name of his rocket company, SpaceX; the name of his new artificial intelligence firm, xAI; and the name of two of his children, X Æ A-Xii and Exa Dark Sideræl.)

    In recent weeks, Twitter has quietly begun its effort to build out a payments business called Twitter Payments — the company has been granted money transmitter licenses in four US states since last month, including Arizona and Michigan. Musk has discussed his desire to promote longer videos on Twitter. And he’s tried to shift Twitter’s business model away from advertising by allowing users to pay for verification, a strategy that has resulted in some chaos but only a limited number of actual subscriptions.

    Still, Musk faces obvious hurdles to turning Twitter into a fully developed super app. Since acquiring Twitter, Musk has fired around 80% of its staff, scared away many of the advertisers that provided its core revenue and frustrated many of its users with controversial policy decisions. And now, Twitter faces steep competition from Meta’s rival app Threads, which launched to stunning success, although its usage has tapered off slightly in recent days.

    Musk last week also said that Twitter still has negative cash flow because of a 50% decline in ad revenue.

    Even if Musk does add new features to Twitter, many US tech platforms have struggled to succeed in imitating WeChat. Deloitte said in a report published last year that Western markets are unlikely to see “a single, dominant super-app like WeChat in the near term” because the services such apps would aim to bundle together, such as digital payments and ride hailing, already “have too many well-established players.”

    A 2019 effort by the social media giant then known as Facebook to create its own digital currency and payments system that the company said would make it easier to buy things online officially flopped last year following intense regulatory scrutiny. And both TikTok and Instagram have reportedly scaled back their ambitions to incorporate e-commerce onto their platforms after their shopping features failed to gain significant traction with users.

    And until Musk rolls out significant changes to the platform, observers of the company say ditching Twitter’s well-known brand is a risky move.

    “To rebrand without significant new features seems like a desperate attempt for attention,” especially in the wake of Meta’s launch of Threads, said Joshua White, assistant professor of finance at Vanderbilt University. “This is akin to buying Coke and changing the bottle and name without changing the formula — likely a mistake.”


  • Alvarado Street Bakery Sees Ads Rejected by Facebook, Responds With Lighthearted Protest Campaign


    Press Release



    updated: Sep 20, 2018

    Alvarado Street Bakery, a cooperatively owned organic bakery that produces sprouted-grain breads for customers across the country, is launching a new campaign poking fun at Facebook for rejecting ads and posts promoting its new Sprouted Wheat Hemp & Hops Bread.

    After its ads and boosted posts were rejected by Facebook on the grounds that “Facebook doesn’t allow the promotion of illegal drugs,” Alvarado Street Bakery appealed, clarifying that its products are made with organic hemp and hops, not illegal, hallucinogenic cannabis. While a few ads were eventually approved, some were still rejected — so Alvarado Street set out to poke the bear.


    “With so many Russian trolls still lurking out there, it was surprising to see Facebook continually spending their energy rejecting our humble little organic bread,” says Michael Girkout, president of Alvarado Street. “They made life a lot harder for our team to promote our new dizzyingly delicious new product, so we decided to have a little fun with our predicament.”

    The digital campaign will feature pseudo-political campaign tactics to poke fun at Facebook in a lighthearted way while building awareness for the Hemp & Hops product. To let people know the company is intent on doing good and not just stirring up trouble, $1 of every Hemp & Hops sale from Sept. 15 to Oct. 15 will be donated to Feeding America, the nonprofit national network of food banks.

    Customers can find Alvarado Street Bakery’s Sprouted Hemp & Hops Bread at their local grocery store and learn more about the campaign at AlvaradoStreetBakery.com/bread/hh.

    ABOUT ALVARADO STREET BAKERY

    For nearly 40 years, Alvarado Street has been making bread the right way — sourcing organic grains direct from family farms and sprouting them to life for healthier, more delicious breads. Solar-powered and worker-owned, Alvarado Street Bakery products can be found across the nation, as well as in select global markets. Learn more at www.alvaradostreetbakery.com.

    Contact: 

    Jim Canterbury 
    707-291-3352
    jim@alvaradostreetbakery.com

    Source: Alvarado Street Bakery
